Enterprise Database Server for ClearPath MCP




Enterprise Database Server for ClearPath MCP Getting Started and Installation Guide ClearPath MCP 12.0 April 2008


unisys | imagine it. done.

Enterprise Database Server for ClearPath MCP
Getting Started and Installation Guide
ClearPath MCP 12.0
April 2008
3850 8198 001

NO WARRANTIES OF ANY NATURE ARE EXTENDED BY THIS DOCUMENT. Any product or related information described herein is only furnished pursuant and subject to the terms and conditions of a duly executed agreement to purchase or lease equipment or to license software. The only warranties made by Unisys, if any, with respect to the products described in this document are set forth in such agreement. Unisys cannot accept any financial or other responsibility that may be the result of your use of the information in this document or software material, including direct, special, or consequential damages. You should be very careful to ensure that the use of this information and/or software material complies with the laws, rules, and regulations of the jurisdictions with respect to which it is used. The information contained herein is subject to change without notice. Revisions may be issued to advise of such changes and/or additions.

Notice to U.S. Government End Users: This is commercial computer software or hardware documentation developed at private expense. Use, reproduction, or disclosure by the Government is subject to the terms of Unisys standard commercial license for the products, and where applicable, the restricted/limited rights provisions of the contract data rights clauses.

Unisys and ClearPath are registered trademarks of Unisys Corporation in the United States and other countries. All other brands and products referenced in this document are acknowledged to be the trademarks or registered trademarks of their respective holders.



Contents

Section 1. Introducing Enterprise Database Server
    Documentation Updates
    Enterprise Database Server
    Enterprise Database Server Extended Edition
    Database Operations Center
    Getting Acquainted with Enterprise Database Server Files
    Tailored Files
    Accessing the Database
    Beginning a Database Design

Section 2. Defining Enterprise Database Server Database Structures
    Language That Defines the Database
    Overview of Enterprise Database Server Structures
    Data Sets
    Data Set Sections
    Sets
    Sectioned Sets
    Relation of Set Sections to Data Set Sections
    Subsets
    Data Items
    Global Data Items
    Accesses
    Sectioning for an Access

Section 3. Defining Database Options in DASDL
    Facts About the EMPLOYEEDB Database
    Auditing the Database
    Audit Trail
    Sectioned Audit Files
    Variable Audit File Buffers
    Creating the Database
    Optional Enterprise Database Server Database Functions
    Managing Structures
    Controlling Access to Data
    Naming System Files and Tailored Files
    Audit Trail Options
    Control File Location and Usercode
    Defining the Restart Data Set

Section 4. Generating a Database
    Generating a New Database
    Checking the DASDL Syntax and Compiling the DASDL Source File
    Tailored Database Files

Section 5. Populating a Database
    Populating the Database
    Running a Batch Application Program

Section 6. Managing an Active Database
    Administering a Database
    Maintaining a Database
    Enterprise Database Server Database Services
    Common Maintenance Tasks
    Host System Tasks
    Initializing Database Files
    Maintaining the Database Control File
    Keeping Track of a Processing Job
    Troubleshooting
    Discontinuing a Program
    I/O Errors

Section 7. Backing Up a Database
    Making and Keeping a Recent Backup of Database Files
    Summary and Order of Backup Tasks
    Backing Up All or Part of the Database
    Backing Up the Database by Increments
    Storing the Dump
    Database Activity During the Backup
    Performing Online Dumps
    Performing Offline Dumps
    Tasks to Be Performed on an Existing Dump
    Verifying a Dump
    Copying or Duplicating a Dump
    Backing Up Audit Files
    Backing Up Database-Related Files

Section 8. Recovering the Database
    Overview
    Automatic Recovery for Audited Databases
    Automatic Single Transaction Abort Recovery
    Automatic Abort Recovery
    Monitoring an Abort Recovery
    Automatic Halt/Load Recovery
    Monitoring a Halt/Load Recovery
    Manual Recovery for Audited Databases
    Reconstructing Parts of a Database
    Reconstructing from a Backup Dump
    Reconstruction Using an Audit File Only
    Rebuilding a Database
    Rolling Back a Database
    Recovering an Unaudited Database

Section 9. Monitoring a Database
    Monitoring the Database
    General Database Monitoring Tasks
    Certifying the Consistency of Database Structures
    Analyzing Logical and Physical Structures
    Acquiring Database Status and Performance Statistics

Section 10. Using Audit Files as a Diagnostic Tool
    Reasons to View an Audit File
    Contents of an Audit File View
    Types of Records in an Audit File
    Requesting an Audit File View
    Ordering the Contents of a View
    Understanding Interval Types
    Selection Parameters and Examples

Section 11. Updating and Reorganizing the Database
    Changing Database Structures
    Planning an Update or a Reorganization
    Online Set Garbage Collection

Section 12. TranStamp Locking and Record Serial Numbers (RSNs)
    TranStamp Locking
    How Traditional Locking Works
    How TranStamp Locking Works
    Support of Traditional and TranStamp Locking
    Record Serial Numbers (RSNs)
    How the AA Word Works as a Tiebreaker
    How the RSN Works as a Tiebreaker

Section 13. Scenarios for Using Enterprise Database Server Extended Edition
    Scenario 1: Data Set Capacity Reaching Limits
    Increasing Data Set Capacity by Specifying File Attributes
    Logically Separating a Data Set
    Dividing Data Sets into Sections
    Scenario 2: Database Performance Limited by Audit Trail Throughput
    Reducing the Data Being Written to the Audit Trail
    Improving the Efficiency of Enterprise Database Server I/O
    Increasing the Throughput of the Disk Subsystem
    Scenario 3: Database Performance Limited by Set Contention
    Logically Separating a Data Set
    Using Multiple Sets
    Using Enterprise Database Server Extended Edition Sectioned Sets
    Scenario 4: A General Transaction Processing Environment
    Sectioning Audit Files
    Sectioning Sets
    Sectioning Data Sets
    Using TranStamp Locking and RSNs

Section 14. Support Policy and Release Compatibility Overview

Section 15. Installation Process Overview
    Preparing for the Installation Process
    Data Management Products
    Keys File
    General Installation Requirements
    Products with SDF Plus Screen Interfaces

Section 16. Understanding Data Management Environment Requirements
    Memory Requirements
    Planning for VSS-2
    SDF Plus Physical Requirements
    Enterprise Database Server Database Physical Limitations

Section 17. Creating a Data Management Environment
    Determining Your Installation Environment
    Installation Overview
    Loading Your Software
    Verifying SDF Plus Libraries
    Installing the ADDS Dictionary

    Configuring Remote Database Backup for Use with a Nonusercoded Database
    Running Two Versions of Enterprise Database Server
    Running a Second Version of Remote Database Backup on Your System
    Configuring the Open Distributed Transaction Processing Product

Section 18. Upgrading an ADDS Environment to a New Release Level
    Upgrade Overview
    Upgrading ADDS Dictionaries
    Preparing to Upgrade Your ADDS Dictionary
    Recording the Current Dictionary Properties
    Performing the Upgrade Process on Your ADDS Dictionary
    Upgrading ADDS Dictionaries with Fallback Capabilities
    Upgrading ADDS Dictionaries Without Fallback Capabilities
    Upgrading Enterprise Database Server Databases
    Backing Up an Enterprise Database Server Database
    Upgrading a Remote Database Backup Environment
    Bringing Down Your Databases
    Providing a Queue for Remote Database Backup-Related Tasks
    Modifying the RDB Support Library for a Nonusercoded Database
    Upgrading the Secondary Database
    Facilitating the NFT Task Under the AFS Mode

Section 19. Upgrading a Non-ADDS Environment to a New Release Level
    Upgrade Overview
    Loading the Data Management Software
    Verifying SDF Plus Libraries
    Upgrading Enterprise Database Server Databases
    Backing Up an Enterprise Database Server Database
    Upgrading a Remote Database Backup Environment
    Bringing Down Your Databases
    Providing a Queue for Remote Database Backup-Related Tasks
    Modifying the RDB Support Library for a Nonusercoded Database
    Upgrading the Secondary Database
    Facilitating the NFT Task Under the AFS Mode

Section 20. Returning to a Previous Release Level
    Returning to a Previous Release Overview
    Returning to a Previous Release Level of ADDS
    Returning to a Previous Release Level of Enterprise Database Server
    Returning to a Previous Release Level of Remote Database Backup

Section 21. Installing Interim Corrections Without Closing the Database
    Understanding Software Updates with the DMUPDATE Utility
    Types of Software Updates
    Planning for a Software Update with the DMUPDATE Utility
    Software Components for a Software Update
    Elements of the Configuration File
    Customizing Your Software Update Using the DMUPDATE Configuration File
    Comment Header
    Understanding the Software Update Types
    Controlled Software Update
    Assisted Software Update
    Automatic Software Update
    Files Used During a Software Update Using the DMUPDATE Utility
    Performing a Software Update Using the DMUPDATE Utility
    Backing Out an Installed IC
    Limitations and Considerations
    Software Updates That Require a DASDL or DMCONTROL Update
    Including or Excluding Particular Databases During a Software Update
    Conditions That Result in an Aborted Software Update Using the DMUPDATE Utility
    Conditions That Result in a Skipped Database
    Checking On the Success Status of the DMUPDATE Utility
    Checking On the Status of Open Databases Using DMUPDATESUPPORT

Appendix A. DASDL Definition for Sample Database
Appendix B. Database Specifics Chart
Appendix C. SL (Support Library) System Command Associations

Index

Figures
    1-1. Sample Database Files on Primary Family Pack
    1-2. Partial List of Sample Database Files on Secondary Family Pack
    2-1. Part of a Data File
    2-2. Sample Portion of PERSON Data Set
    2-3. Partial DASDL Definition for the PERSON Data Set
    2-4. Sample of PERSON-SET Set for the PERSON Data Set
    2-5. DASDL Definition for the PERSON-SET Set
    2-6. Sample of MANAGER Subset for the PERSON Data Set
    2-7. DASDL Definition for the MANAGER Subset
    4-1. Sample DASDL Syntax Error Messages
    4-2. DASDL Code Showing Syntax Errors
    4-3. Flowchart of a DASDL Compilation
    4-4. Creation of an Empty Database
    5-1. Software Interaction When the Database Is Open
    6-1. Jobs Displayed on MARC Screen
    8-1. Flowchart of Accessroutines Actions in the Abort Recovery Process
    8-2. Flowchart of the Halt/Load Recovery Process
    10-1. Introductory Lines of Audit File View
    10-2. Comparison of Audit File View Formats
    10-3. Examples of Types of Audit File Records
    10-4. Narrowing the Focus of an Audit File View Request


Tables
    4-1. Explanations of Syntax Error Messages
    10-1. Designating Where to Send the Audit File View
    10-2. Order and Purpose of Request Parameters
    10-3. Types of Intervals for Audit File Views
    10-4. Interval Types and Examples
    10-5. Values for Date and Time in Time Interval
    10-6. Results of PRINTAUDIT Verification of Timestamps
    10-7. Selection Parameters and Examples
    A-1. Summary of Sample EMPLOYEEDB Database Structures


Section 1. Introducing Enterprise Database Server

Purpose

The purpose of this guide is to help you understand and start using the Enterprise Database Server. The first 14 sections of the guide discuss basic concepts and information you need to know about the Enterprise Database Server. Sections 15 through 21 are more technical and explain the details involved in installing the Enterprise Database Server and other data management products.

This guide contains information previously presented in the Enterprise Database Server Extended Edition Capabilities Overview, Getting Started with DMSII Guide, and Data Management Installation Guide.

Terminology

In this document, the term ClearPath MCP servers refers to ClearPath LX and CS servers, and FS and Libra Series servers.

In This Section

This section provides information about

    Enterprise Database Server
    Enterprise Database Server Extended Edition
    Database Operations Center
    Getting acquainted with Enterprise Database Server files
    Accessing the database
    Beginning a database design

Documentation Updates

This document contains all the information that was available at the time of publication. Changes identified after release of this document are included in problem list entry (PLE) 18523035. To obtain a copy of the PLE, contact your Unisys representative or access the current PLE from the Unisys Product Support Web site:

http://www.support.unisys.com/all/ple/18523035

Note: If you are not logged into the Product Support site, you will be asked to do so.

Enterprise Database Server

Definition

Enterprise Database Server is the database management system (DBMS) for ClearPath MCP enterprise servers. Among the many database management systems available in the information technology world today, Enterprise Database Server is a mature, proven DBMS that continues to receive major feature enhancements and performance improvements.

In addition to all of the Enterprise Database Server features (also known as Enterprise Database Server Standard Edition), you have the option of using the Enterprise Database Server Extended Edition features. These can be thought of as add-on features to the Enterprise Database Server Standard Edition.

Enterprise Database Server Use

Many large and small businesses throughout the world use Enterprise Database Server. Examples are airline reservation systems, financial institutions, retail chains, insurance companies, utilities, and government agencies.

Enterprise Database Server as a DBMS

As a DBMS, Enterprise Database Server facilitates

    Building database structures for data according to an appropriate logical model (relational, hierarchical, or network)
    Managing database structures
    Keeping structures in stable order while application programs are retrieving or changing data

Enterprise Database Server Data Definition Language

The data definition language that Enterprise Database Server uses is Data and Structure Definition Language (DASDL).
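Section 2 introduces DASDL in detail. As a quick, illustrative taste, a minimal DASDL description of a data set and an index sequential set over it might look like the following sketch. The item names and sizes here are hypothetical and are not the actual EMPLOYEEDB definition; see Figure 2-3 and Appendix A for the real one.

```
PERSON DATA SET
(
    LAST-NAME   ALPHA(10);
    FIRST-NAME  ALPHA(10);
    EMP-NUMBER  NUMBER(6);
);
PERSON-SET SET OF PERSON KEY (LAST-NAME, FIRST-NAME);
```

Here PERSON declares a data set (roughly, a file of like-structured records), and PERSON-SET declares a set, an index over PERSON keyed on the name items.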

Database Management Task Overview

The following table briefly lists the database management tasks that Enterprise Database Server enables you to do. As these tasks are explained in various sections of this guide, you will learn more about how Enterprise Database Server software programs work.

Database performance
    Tasks: Monitoring and optimizing database performance; altering run-time performance attributes
    Software tools: DMMONITOR utility; Accessroutines; Visible DBS commands

Database control
    Tasks: Monitoring multiprogram database access
    Software tools: Database control file and Accessroutines

Data safety
    Tasks: Integrity checking; preventing access to the same data by multiple applications at the same time
    Software tools: DMUTILITY options (DBCERTIFICATION, CHECKSUM, DIGITCHECK, ADDRESSCHECK, KEYCOMPARE, INDEPENDENTTRANS, LOCK TO MODIFY DETAILS); Accessroutines structure locks

Database software installation
    Tasks: Integrated installation of software updates with enhanced database availability during installation of an Enterprise Database Server Interim Correction (IC) or a Supplemental Support Package (SSP)
    Software tools: DMUPDATE utility

Database structure definition and modification
    Tasks: Defining data structures and the data fields within them; modifying data structures
    Software tools: DASDL compiler; REORGANIZATION program

Data access
    Tasks: Developing an application program to retrieve or change data
    Software tools: Third-generation host languages (COBOL, ALGOL, FORTRAN); DMINQUIRY and DMINTERPRETER utilities; Enterprise Database OLE DB Data Provider for ClearPath MCP

Database and data security
    Tasks: Preventing unauthorized database access
    Software tools: Host system security; guard files; DASDL logical remap and logical database capabilities for applications

Independence of application programs from data changes
    Tasks: Preventing revision of application programs every time a structure changes
    Software tools: DASDL logical remap and logical database capabilities for applications

Database and data recovery
    Tasks: Resuming database operations after an interruption
    Software tools: DMRECOVERY, DMUTILITY, ACCESSROUTINES, DMDATARECOVERY, and RECONSTRUCT utilities

Data change tracking
    Tasks: Keeping a record of every change made to data
    Software tools: DASDL AUDIT option and restart data set definition; Accessroutines

Data change integrity
    Tasks: Ensuring that update changes are applied to, or removed from, the database in their entirety
    Software tools: DASDL options INDEPENDENTTRANS and REAPPLYCOMPLETED

Having a recent copy of the database in reserve
    Tasks: Backing up the database and storing copies of audit files and all other database files
    Software tools: DMUTILITY commands (DUMP, VERIFYDUMP, COPYDUMP, BUILDDUMP DIRECTORY, DUPLICATEDUMP, TAPEDIRECTORY); DMDUMPDIR and COPYAUDIT programs

Database scalability
    Tasks: Growing or shrinking the database according to business needs
    Software tools: DMUTILITY; REORGANIZATION program

Enterprise Database Server Extended Edition

The goals of the Enterprise Database Server Extended Edition are

    Linear scalability
    Multiterabyte capacity
    Increased database availability

Linear Scalability Goal

Ideally, the Enterprise Database Server Extended Edition seeks to provide linear scaling, meaning that each processor added to a system yields the same performance increment as the processor that preceded it. Realistically, scaling cannot be truly linear because other factors such as hardware, the MCP, and the Enterprise Database Server itself affect the scaling results. However, the Enterprise Database Server Extended Edition provides features that allow a much closer approach to the linear scaling ideal than has been possible previously.

Multiterabyte Capacity Goal

The Enterprise Database Server Extended Edition provides multiterabyte data capacity at the data set level. Existing systems (including the Enterprise Database Server Standard Edition) achieve multiterabyte data capacity by requiring application logic to create multiple logical data structures to simulate capacity expansion. With the Enterprise Database Server Extended Edition, each data set can hold more than 12 terabytes of data. The Enterprise Database Server Extended Edition implements structures with large amounts of data rather than relying on application logic to piece together several structures with less capacity.

Database Availability Goal

One availability goal of the Enterprise Database Server Extended Edition is to reduce the amount of time during which a database or structure is unavailable because of required maintenance. The Enterprise Database Server Extended Edition addresses this goal by providing an online garbage collection facility for disjoint index sequential sets and subsets.

How the Enterprise Database Server Extended Edition Achieves Its Goals

The Enterprise Database Server Extended Edition revamps the handling of database structures and files to increase

    Parallelism and I/O throughput
    Capacity of individual data sets
    Database availability during online reorganization

Enterprise Database Server Extended Edition Features

To achieve its scalability, capacity, and availability goals, the Enterprise Database Server Extended Edition provides the following features:

    Variable audit file buffers
    Sectioned audit files
    Sectioned disjoint standard data sets
    Sectioned disjoint compact data sets
    Sectioned disjoint random data sets
    Sectioned disjoint direct data sets
    Sectioned disjoint index sequential sets
    TranStamp locking
    Record serial numbers (RSNs)
    Online set garbage collection

Systems That Benefit from the Enterprise Database Server Extended Edition

To benefit from Enterprise Database Server Extended Edition scalability and capacity features, a system must have processor and memory resources available and must exhibit one or more of the following conditions:

    Restricted I/O throughput in the audit trail
    Set throughput bottleneck
    Nearing data set capacity limits
    Processing transactions that modify large numbers of records

Moving to the Enterprise Database Server Extended Edition

Moving to the Enterprise Database Server Extended Edition from an existing Enterprise Database Server database is relatively simple because of the following principles of coexistence:

    The Enterprise Database Server Extended Edition is based on the Enterprise Database Server Standard Edition and is compatible with existing database structures.
    Movement to the Enterprise Database Server Extended Edition can be on a structure-by-structure basis.
    The Enterprise Database Server Extended Edition structures and the Enterprise Database Server Standard Edition structures can exist side by side in the same database.
    Transactions can span both the Enterprise Database Server Standard Edition and the Enterprise Database Server Extended Edition structures.
    Migration does not require changes to the logic or methodology of existing database application programs.

Enterprise Database Server Extended Edition Feature Access and Use

Access to specific Enterprise Database Server Extended Edition features requires that you

1. License the Enterprise Database Server Extended Edition.
2. Set the INDEPENDENTTRANS option in DASDL.
3. Explicitly activate each Enterprise Database Server Extended Edition feature for each database or for specific database structures within a database.

Both Enterprise Database Server Extended Edition users and Enterprise Database Server Standard Edition users benefit from the algorithmic changes that have been made to the Enterprise Database Server data engine (Accessroutines) to support new Enterprise Database Server Extended Edition features.

Enterprise Database Server Extended Edition and Other Data Management Products

Enhanced Software

To maintain full compatibility with the Enterprise Database Server Extended Edition, Remote Database Backup has undergone changes.

Modified Software

The DMINQ interface in the Accessroutines has been modified to enable seamless interfacing with the Database Interpreter software component under the Enterprise Database Server Extended Edition. Database Certification has also been modified for use with the Enterprise Database Server Extended Edition.
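As an illustrative sketch of step 2 above, the INDEPENDENTTRANS option is set in the OPTIONS clause of the DASDL source for the database. The exact contents of the clause depend on your database definition; AUDIT is shown here on the assumption that the database is audited, as the sample EMPLOYEEDB database in this guide is:

```
OPTIONS(AUDIT, INDEPENDENTTRANS);
```

After an option change of this kind, the DASDL source must be recompiled against the database (see Sections 3 and 4).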

Supported Software

The following products are supported for use with Enterprise Database Server Extended Edition features:

    Database Operations Center
    Enterprise Database OLE DB Data Provider for ClearPath MCP

Unsupported Software

The following products are not supported for use with Enterprise Database Server Extended Edition features:

    Advanced Data Dictionary System (ADDS)
    Transaction Processing System (TPS)

Database Operations Center

Database Operations Center is a graphical user interface (GUI) that provides a client-server front end for Enterprise Database Server Standard Edition and Enterprise Database Server Extended Edition database utilities on ClearPath MCP servers. Database Operations Center

    Performs administrative functions for Enterprise Database Server Standard Edition utilities
    Retains the Enterprise Database Server Edition utility command line for optional use
    Supports Enterprise Database Server Extended Edition utility-related enhancements
    Runs on various Windows operating systems
    Conforms to the look and feel of the Client Access Services administration utilities

Getting Acquainted with Enterprise Database Server Files

Files That Work with All Databases

Enterprise Database Server provides standard software files that perform services and operations for all databases on your ClearPath MCP server. After Enterprise Database Server is installed, you can view a list of these files on your terminal by entering the following Command and Edit (CANDE) FILES commands.

Listing Enterprise Database Server Standard Software Files

The FILES DATABASE ON HUBPACK command lists the Enterprise Database Server standard software files that reside on HUBPACK, the Enterprise Database Server software family for the sample EMPLOYEEDB database. The FILES *SYSTEM/= ON HUBPACK command lists Enterprise Database Server utility files among other host system software files.

Listing Files for a Particular Database

Overview

If you have an existing database, use the CANDE FILES command to see the names of the database files with a first node of the database name. Use the database usercode and family statement. Every file connected with the database has the database name as either the first or second node of the file name.

EMPLOYEEDB is the name of the sample database used for examples in this guide. The following examples use default pack locations. Your site might have set up different pack locations. In addition, your file lists can vary because of database differences and customizations at your site.

Listing Primary Family Pack Files

The FILES ~/EMPLOYEEDB command lists the files shown in Figure 1-1. (The tilde as the first node is a wildcard.)

FILES ~/EMPLOYEEDB

 ~/EMPLOYEEDB ON HR
 File Name                  Filekind     Records      Sectors      CreationTime
 --------------------------+------------+------------+------------+------------
 DMSUPPORT/EMPLOYEEDB       DCALGOLCODE  1829         1836         04/04/1997
 DESCRIPTION/EMPLOYEEDB     DASDLDATA    50           450          04/04/1997
 RECONSTRUCT/EMPLOYEEDB     DCALGOLCODE  14           18           04/04/1997
 4 FILES FOUND
 #

Figure 1-1. Sample Database Files on Primary Family Pack

Figure 1-1 lists the following tailored database files that Enterprise Database Server generates during the compilation of the database definition:

- DMSUPPORT/EMPLOYEEDB is an object code file, a library containing entry points to procedures that allow an application program to obtain Enterprise Database Server error codes at run time. Enterprise Database Server standard software also uses this library.
- DESCRIPTION/EMPLOYEEDB is a data file containing information used when Enterprise Database Server compiles all tailored software and all Enterprise Database Server user-language programs for a particular database.
- RECONSTRUCT/EMPLOYEEDB is generated only if the database is audited. This object code file enables a row reconstruction operation.

Listing Secondary Family Pack Files

The FILES EMPLOYEEDB ON HUBPACK command lists the files that hold your data and associated files (see Figure 1-2).

(SYSDBA) ON HUBPACK
 . EMPLOYEEDB
 .. RST
 ... DATA : DBRESTARTSET
 ... RESSET : DBDATA
 .. FAMILY
 ... DATA : DBDATA
 ... FAMILY-SET : DBDATA
 .. PERSON
 ... DATA : DBDATA
 ... EMP-MGR : DBDATA
 ... MANAGER : DBDATA
 ... EMPLOYEE : DBDATA
 ... PERSON-SET : DBDATA
 ... PROJECT-EMPLOYEE : DBDATA
 ... PREVIOUS-EMPLOYEE : DBDATA
 .. CONTROL : DBDATA
 .. PROJECT
 ... DATA : DBDATA
 ... PROJECT-SET : DBDATA
 ... SUPER-PROJECTS : DBDATA
 .. EDUCATION
 ... DATA : DBDATA

Figure 1-2. Partial List of Sample Database Files on Secondary Family Pack

Figure 1-2 lists files of type DBDATA that hold the data for the sample EMPLOYEEDB database. The file of type DBRESTARTSET identifies the restart data set used by Enterprise Database Server recovery and by the applications when they resume processing after a database recovery.

EMPLOYEEDB/CONTROL controls database operation by

- Verifying compatibility between tailored software and database files
- Verifying that all data files are at the same level of update
- Storing audit control information, dynamic database parameters, and other information for use by Enterprise Database Server and application software programs
- Controlling locking and unlocking of the database

Tailored Files

During the compilation of the database definition, standard Enterprise Database Server programs tailor several files for use with the particular database only. These files are known as tailored software. The following table lists the tailored database files for the sample EMPLOYEEDB database and some of their characteristics.

Default Software Name: DESCRIPTION/EMPLOYEEDB
Descriptive File Name: Description file
Characteristics and Purpose: Static. Changed only by DASDL compilation. Serves as a basis for other tailored files. Used by application compilers.

Default Software Name: EMPLOYEEDB/CONTROL
Descriptive File Name: Control file
Characteristics and Purpose: Dynamic. Continually accessed and changed during database access. Monitors and controls database access; considered the run-time extension of the description file.

Default Software Name: DMSUPPORT/EMPLOYEEDB
Descriptive File Name: DMSUPPORT library
Characteristics and Purpose: Static. Recompiled after a new description file is compiled. Source of database information at run time.

Default Software Name: RECONSTRUCT/EMPLOYEEDB
Descriptive File Name: RECONSTRUCT program
Characteristics and Purpose: Static. Recompiled after a new description file is compiled. Source for row reconstruction in an audited database.

Accessing the Database

The People Involved

People who access the Enterprise Database Server database have three separate roles: application programmer, application program user, and database administrator (DBA).

Application Programmer

The application programmer designs and writes a program that performs a task or set of tasks in relation to the data of an enterprise. This person writes the program in a computer language such as COBOL, ALGOL, or RPG, or as an OLE DB application. The application programmer thoroughly understands the perspective of the application user and depends on the DBA for information on Enterprise Database Server rules and the utilities and tools with which the application program must interface.

Application Program User

In the normal course of business, the application program user submits changes to data or retrieves data while running the application program. Changes add, modify, and delete data.

The application user understands how the application helps him or her to accomplish tasks. This person interfaces with the application programmer or DBA when new information is needed from the database or when a problem arises with the application.

DBA

The DBA and his or her assistants access the database to manage and maintain it. As data relationships change or new data relationships are added, the DBA changes the database structure definitions. The DBA obtains information about expanding or changed business perspectives from application programmers or application program users. The DBA keeps the database running smoothly and enforces rules for data integrity and security. While the database is running, the DBA and operators interact with the database through the Visible DBS commands. Other DBA responsibilities include making backups of the database, and monitoring, adjusting, testing, and qualifying changes to database performance in relation to other demands within the computer environment.

Interface: Application Programs and Database Software

Users access the database through the application program. The application program does not access data directly. Instead, the program interacts with Enterprise Database Server standard software and database tailored software working together as a unit. This software group, directed by the Enterprise Database Server Accessroutines, accesses, retrieves, and stores data in the physical database files.

Reasons for Access

An application program user accesses the data for one of two purposes:

- Inquiry, which is a read of data in the database, usually to make a copy elsewhere
- Update, which is a write to the database that adds, deletes, or changes data

Access for either purpose contributes to an operation on the database called a transaction.
Transactions

In data management, a transaction is a sequence of operations grouped by a user program because the operations constitute a single logical change to the database. The application programmer decides how many data reads and writes by a program make up a transaction. At the end-transaction point, the transaction is complete and without error; it is considered committed to the database. Committed means that the database has been changed and that the change is visible to other database users.
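The commit semantics described above, where a group of writes becomes visible to other users only at the end-transaction point, can be sketched as a toy model. This is illustrative only; the class and method names are hypothetical and are not an Enterprise Database Server API.

```python
class MiniDatabase:
    """Toy model of transaction commit semantics."""

    def __init__(self):
        self.committed = {}   # state visible to all database users
        self.pending = None   # uncommitted changes of an open transaction

    def begin_transaction(self):
        self.pending = {}

    def write(self, key, value):
        # Writes accumulate in the pending set, not in the visible database.
        self.pending[key] = value

    def end_transaction(self):
        # Commit: apply all pending changes as one logical change.
        self.committed.update(self.pending)
        self.pending = None


db = MiniDatabase()
db.begin_transaction()
db.write("EMP-101/SALARY", 52000)
# Other database users still see nothing at this point.
db.end_transaction()
print(db.committed)  # {'EMP-101/SALARY': 52000}
```

The key point the sketch captures is that the unit of visibility is the whole transaction, not the individual read or write.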

Beginning a Database Design

Real-World Business Map of Data

The design of an Enterprise Database Server database depends upon a real-world business map of the data to be stored. The following table illustrates part of the real-world map for the EMPLOYEEDB database used as an example in this guide.

Creative Samples Inc. Employee Database Information

Data Needed for Each Employee
- Employee ID number
- Name (first, last, and middle initial)
- Birth date
- Age
- Marital status
- Address (street, city, state, postal code)
- Next of kin (first name, last name, middle initial, relationship, phone)
- Citizenship
- Gender
- Employed (current/former)
- Hire date
- Status, title
- Leave status
- Reason for termination
- Last work date
- Overall rating
- Bonus
- Manager title
- Department number
- Head of department
- Acting manager

People Category Breakdowns Needed
- Employee job information: assignment, project, manager, department
- Benefit information: spouse and children
- Skills assessment/searches: education

Assignment
- Title/level
- Start/end date
- Estimated hours
- Rating
- Project number
- Assignment number

ID for Searches
- Employee number
- Project number
- Assignment number

Project Information
- Project number
- Project title
- Department number
- Sub or super project of
- Program manager
- Version level
- Team ID

Where to Put Real-World Business Data in the Database

Real-world business data goes into logical structures that Enterprise Database Server uses to store data. Whoever designs the database maps categories of data to suitable structures.

The following table shows the correspondence between some real-world data and Enterprise Database Server data structures. Appendix A shows the complete mapping of data in Enterprise Database Server structures in the EMPLOYEEDB database definition.

Real-World Data / Structure / Example
- Category — Data set — Person
- Data that can serve as an index of a whole data set — Set — Employee number
- Data that can serve as an index of a data set under a condition — Subset — Employee number
- Data about each instance of the category — Data item — Name, Address
- Data related to the database as a whole — Global data item — Total number of employees

Related Information Topics

For information about... / Refer to...
- Complete sample DASDL definition — Appendix A
- DASDL definition for real-world business map of data — Sections 2 and 3, and Appendix A; DASDL Reference Manual
- Database Operations Center — Database Operations Center Getting Started Guide; Database Operations Center Help
- Enterprise Database Server utilities — Varied sections of this guide; Enterprise Database Server Utilities Operations Guide
- Installing Enterprise Database Server — Simple Installation Operations Guide
- Security — Security Administration Guide
- Tailored files — Varied sections of this guide; Enterprise Database Server Utilities Operations Guide
- Types of structures — Section 2; DASDL Reference Manual
- Visible DBS commands — Enterprise Database Server Utilities Operations Guide


Section 2. Defining Enterprise Database Server Database Structures

In This Section

This section provides information on

- Data and Structure Definition Language (DASDL)
- Enterprise Database Server structures in general
- Data sets
- Sets
- Subsets
- Data items
- Global data items
- Accesses

Structure examples are from the sample EMPLOYEEDB database used in this guide.

Language That Defines the Database

Introduction

Once you have identified the real-world data to be stored in a database, you need to define that data in relation to the Enterprise Database Server data structures that hold data. When you define data within structures, Enterprise Database Server software, system software, and application programs can understand how to make the data accessible for inquiry and change. You use the Data and Structure Definition Language (DASDL) to define the data.

What This Guide Tells You About DASDL

This guide provides you with

- A good example of a realistic database definition, the EMPLOYEEDB DASDL definition (see Appendix A for the entire definition)
- DASDL components for data set, set, and subset descriptions (see Figures 2-3, 2-5, and 2-7)
- Brief explanations of the defaults, options, and parameters that are set for the sample EMPLOYEEDB database

DASDL Information You Can Find in Associated Manuals

You can find the following information about DASDL in other Enterprise Database Server reference manuals:

- DASDL language components
- Optional features not used in the sample database
- Remaps and logical databases for security
- Modeling and reorganizing
- Structure formats

You can find the names of associated manuals under the Related Information Topics headings in this and other sections of this guide.

What You Define with DASDL

You define two kinds of basic information:

- The data structures to hold your data
- Optional features under which you want the database to run

How to Write the DASDL Definition

You can create a new DASDL definition or make changes to an existing DASDL definition by following the order and format shown in the sample EMPLOYEEDB database. You can also refer to additional information in the manuals listed under Related Information Topics. To create a new file of type DASDL or to edit an existing DASDL file, you can use CANDE (or another editor). For a new database, the database designer (who might also be the DBA) designs the structures. However, anyone who understands the design and DASDL can create the DASDL file.

Overview of Enterprise Database Server Structures

Structure Names and Purposes

Enterprise Database Server structures are the building blocks of every Enterprise Database Server database. All structures are files of type DBDATA. The following table lists the names and purposes of the Enterprise Database Server data structures.

- Data item — Defines a unit of information about a category in a field (column) of a data set record.
- Data set — Stores data pertaining to a data category in a collection of records.
- Global data item — Stores a unit of information about the entire database or any of its structures.
- Set — Indexes all records in a data set.
- Subset — Indexes some records in a data set according to criteria.

Terminology

As you work with an Enterprise Database Server database, you might find that the types of data you talk about are frequently interchanged with the names of data structures. For example,

- In a relational database, a data set is called a table.
- A set or subset is frequently called an index.
- A data item is frequently called a field or a column, or is called by its data name, for example, project.

The structures are made up of common file components: records and fields.

Records

A record is a group of logically related data items in a file. Sometimes a record is called a row. The data items reside in fields in the records. Sometimes a field is called a column.

Figure 2-1 illustrates several records from a file with labels for parts of the record.

Figure 2-1. Part of a Data File

How the Host System Deals with Records

The system treats the record as a unit and makes data available to users in records, not in individual data items. In programmer language, the record is the unit of data that the system reads from or writes to a file in one execution of a read or write statement in a program. In Enterprise Database Server, if an application program wants to change a data item in a record, Enterprise Database Server brings a copy of the record from physical storage into memory, enables the data item to be changed, and writes the changed record back to the file. For example, if a program were gathering the total salary for the employees on a project, the application program would contain a statement telling the computer to read the record of each employee on the project, write the salary information found in each record to a printed report form, and total it.

Fields

A field is a consecutive group of bits or bytes within a record that represents a logical piece of data. A field (or column) is defined by the description of the data item it is to hold.

Types of Structures

Each structure can be standard, or it can have one or more special characteristics that govern either the type of data the database stores or the way applications can access the data. The sample database used in this guide primarily contains standard structures.
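The read-modify-write cycle described under "How the Host System Deals with Records" (the record, not the individual data item, is the unit of I/O) can be sketched as a toy model. The function and field names here are hypothetical illustrations, not Enterprise Database Server interfaces.

```python
def change_data_item(storage, record_key, field, new_value):
    """Toy read-modify-write: bring a copy of the whole record into
    memory, change one data item, then write the whole record back."""
    record = dict(storage[record_key])  # read the record as a unit
    record[field] = new_value           # change one data item in memory
    storage[record_key] = record        # write the changed record back

storage = {1: {"NAME": "SMITH", "SALARY": 50000}}
change_data_item(storage, 1, "SALARY", 52000)
print(storage[1])  # {'NAME': 'SMITH', 'SALARY': 52000}
```

Note that the untouched data items travel with the record on every update; the system never writes a single field in isolation.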

Data Sets

Definition

The data set is a physical file, a collection of related data records stored on a random-access storage device (disk), in which your data resides.

Purpose

The data set exists to store data for a database entity. In the PERSON data set for the sample EMPLOYEEDB database, each piece of data about the entity (person) has a place to reside (see Figure 2-2). The physical data set file is in the form of a table with columns and rows:

- Each column contains a defined data item that describes something about the entity for which the data set is built.
- Each row contains one instance of the entity for which the data set is built.

After the database is generated, application programs whose job is to update the database populate the data sets with data. That is, update programs write data to the fields of data set records.

Figure 2-2. Sample Portion of PERSON Data Set

Writing the Data Set Definition

Figure 2-3 shows the beginning and end of the PERSON data set DASDL definition with labels identifying how DASDL expresses data set components to the system.

Figure 2-3. Partial DASDL Definition for the PERSON Data Set

Keeping a Data Set Up-To-Date

A data set is kept up-to-date in two ways:

- Application programs add, change, or delete individual pieces of data or records stored in the data set.
- The DBA or assistants maintain the structure of the data set. For example, when necessary, the DBA
  - Keeps the data set within maximum size limits
  - Adds, deletes, or changes the definition of a data item (column)
  - Creates new sets or subsets
  - Monitors automatic Enterprise Database Server processes that guard data integrity
  - Creates guard files to enhance the security of the data

Data Set Sections

Definition

One significant feature of the Enterprise Database Server Extended Edition is the sectioned data set. A sectioned data set is one logical data set structure composed of multiple physical files. A minimal change in the physical record format occurs as a result of spreading the data set across several files.

Purpose

Sectioned data sets serve two main purposes:

- They expand data set capacity from 48 gigabytes per structure to 48 gigabytes per section. With a maximum of 255 sections per data set, each data set can hold more than 12 terabytes of data.
- They reduce or eliminate the throughput restrictions imposed by the architecture of internal locks within Enterprise Database Server. For example, standard data sets contain an available space table (DKTABLE) to manage space reuse efficiently. When a program adds or deletes a record, the program must acquire a lock on the DKTABLE, and the system must adjust the DKTABLE. When many programs add or delete records simultaneously, the potential for waiting on access to the DKTABLE increases. When a standard data set is sectioned, each section contains its own DKTABLE. Therefore, although many programs might be adding or deleting records in the data set, only a fraction of those programs access a given section. Not only is contention for the DKTABLE reduced, but multiple programs can also add and delete records simultaneously.

Impact on Application Programs

Logically, sectioned data sets appear identical to nonsectioned data sets; in fact, application programs cannot detect whether a data set is sectioned. Consequently, when you migrate to sectioned data sets, no changes to application program logic are required. However, application programs do need to be recompiled.

Requirements

The requirements for data set sections are as follows:

- Only disjoint compact, direct, random, and standard data sets can be sectioned.
- A data set cannot be sectioned if it is the target of a link item.
- All sections of a data set must reside on the same pack family.
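The capacity figures above can be checked with simple arithmetic. This sketch assumes decimal gigabytes and terabytes, as capacity figures in product documentation usually are.

```python
GB = 10**9   # assuming decimal units (1 GB = 10**9 bytes)
TB = 10**12

per_section = 48 * GB    # capacity per section (formerly per structure)
max_sections = 255       # maximum sections per data set

total = per_section * max_sections
print(total / TB)  # 12.24 -> "more than 12 terabytes of data"
```

Even under the more conservative binary interpretation the conclusion is the same: sectioning multiplies the per-structure limit by the section count.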
Specifying Data Set Sections

You specify data set sections in DASDL by setting the EXTENDED option for the data set and, depending on the data set type, performing one of the following tasks:

- For compact and standard data sets, specify the number of sections into which the structure should be divided.
- For direct and random data sets, include the section specification as part of the Access declaration.

The Enterprise Database Server Extended Edition creates the specified number of files and distributes data set records among the sections using the REORGANIZATION program. The Enterprise Database Server Extended Edition doubles the size of the absolute address (AA) word to identify records within a sectioned data set:

- The first word contains the section number in which a record resides.
- The second word is identical to the word traditionally used to point to a specific location within the file.

Migrating to sectioned data sets therefore requires a reorganization to accommodate the two-word AA word pointing into the sectioned data set. All set structures that point to that sectioned data set must also be reorganized.

Distribution of New Data Set Records

Note: The following text applies to sectioned compact and standard data sets only.

When a data set is sectioned, the Enterprise Database Server Extended Edition distributes new data set records among the sections by different schemes, depending on whether the reorganization is done online (the default) or offline:

- Online, by round-robin allocation
- Offline, by round-robin allocation (no ORDERBY clause specified)
- Offline, by section (ORDERBY clause specified)

The system places the appropriate number of records in the first section before writing any records to the second section, and so forth.

Sectioning Capabilities of Data Sets

The following information describes sectioning capabilities for compact, direct, and random data sets. Detailed information regarding sectioning of these data sets, as well as sectioning of standard data sets, is available in the DASDL Reference Manual.

Compact Data Sets

You can implement a compact data set as a single logical file while the underlying physical data store is implemented using multiple files.
Sectioning compact data sets is identical to sectioning standard data sets in that sectioning is performed through the use of multiple physical files and records are added using a round-robin mechanism. This feature reduces contention for internal resources related to physical file I/O because the number of contenders for a resource is reduced to 1/n (where n equals the number of sections) of those for a single physical file. By reducing contention, performance scalability increases. In addition, total data storage capacity for the logical structure increases because each section can hold as much data as was previously allowed for the entire logical structure before sectioning.
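The round-robin placement and the two-word AA addressing described above can be sketched as a toy model. The batch size and the function name are hypothetical; the point is only that records fill one section before moving to the next, and that each record's address pairs a section number with an offset within that section's file.

```python
def assign_sections(n_records, n_sections, batch):
    """Toy round-robin allocation: write `batch` records to section 0,
    then `batch` records to section 1, and so on, wrapping around.
    Returns a two-word AA-style address (section, offset) per record."""
    counts = [0] * n_sections   # records already placed in each section
    aa_words = []
    for r in range(n_records):
        section = (r // batch) % n_sections   # which section gets record r
        aa_words.append((section, counts[section]))
        counts[section] += 1
    return aa_words

# 6 records spread over 3 sections in batches of 2:
print(assign_sections(6, 3, 2))
# [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]
```

Because each (section, offset) pair is self-contained, an index that stores these addresses must itself be reorganized when a data set is sectioned, which is why spanning sets need reorganization too.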

Direct Data Sets

You can implement a direct data set as a single logical file while the underlying physical data store is implemented using multiple files. As with compact data sets, this feature reduces contention for internal resources and increases both scalability and data capacity. Sectioning of direct data sets is similar to that of standard data sets because these structures also use multiple physical files. However, the method by which records are assigned to sections is based on a key value rather than the round-robin algorithm. The result is a mixture of the multiple-physical-file characteristics of a sectioned standard data set combined with the sectioning concepts of index sequential sets.

A group item can be specified as the key for a direct data set. Group keys can contain only numeric items and must have a total length of 11 digits or less. Preallocation cannot be specified for sectioned direct data sets. Sectioning of direct data sets provides the ability to specify the gap areas as separate sections. By not preallocating these sections, you can effectively eliminate wasted space caused by these gaps.

Sectioning of direct data sets is specified as part of the Access declaration rather than as part of the data set declaration. The syntax for the sectioning specification in the Access declaration is identical to that currently used for sectioning an index sequential set.

Random Data Sets

You can implement a random data set as a single logical file while the underlying physical data store is implemented using multiple files. This feature increases scalability and data capacity. Sectioning of random data sets is specified as part of the Access declaration rather than as part of the data set declaration. The syntax for the sectioning specification in the Access declaration is identical to that currently used for sectioning an index sequential set.

Sets

Definition

A set is a separate stored file that indexes all the records of a single data set.
Enterprise Database Server uses sets to locate records in a data set. A set has no meaning apart from its data set. The collection of information in a set differs depending on the type of set that is being used.

Purpose

The set structure enables an application program to access all records of a data set in some logical sequence. Typically, a set is created to speed up certain types of data retrieval from the data set. The decision to create a set rests on knowledge of how users access data in the data set. However, access by way of a set can sometimes slow down updates to data. Therefore, after you create a set, you might monitor database performance during updates to ensure that the speed of access is not outweighed by the processing overhead of updating both the data set and the set.

How Sets Work

Figure 2-4 shows a sample portion of the PERSON data set with the PERSON-SET set, which indexes the data set by Social Security number. The Social Security numbers are sequenced in ascending order; alternatively, they could be sequenced in descending order. To use the set, an application program identifies the person by Social Security number. To retrieve the record of a person, Enterprise Database Server uses the smaller file, the set, to quickly point to the corresponding record in the larger file, the data set.

Figure 2-4. Sample of PERSON-SET Set for the PERSON Data Set
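The lookup path described above, a small ordered index file pointing into a larger data file, can be sketched as a toy model. The data values and function name are hypothetical; this is not how Enterprise Database Server stores sets internally, only an illustration of the concept.

```python
import bisect

# Toy PERSON data set: rows addressed by their position in the file.
person = [
    {"SOC-SEC-NO": "354874321", "NAME": "JONES"},
    {"SOC-SEC-NO": "111223333", "NAME": "SMITH"},
    {"SOC-SEC-NO": "222334444", "NAME": "BAKER"},
]

# Toy PERSON-SET: (key, row address) pairs kept in ascending key order,
# like an index over SOC-SEC-NO. Every data set record has one entry.
person_set = sorted((row["SOC-SEC-NO"], addr) for addr, row in enumerate(person))

def find_by_ssn(ssn):
    """Search the small set file, then follow its pointer into the data set."""
    keys = [k for k, _ in person_set]
    i = bisect.bisect_left(keys, ssn)
    if i < len(keys) and keys[i] == ssn:
        _, addr = person_set[i]
        return person[addr]
    return None

print(find_by_ssn("222334444")["NAME"])  # BAKER
```

The sketch also shows why updates slow down slightly once a set exists: every insertion into the data set must also maintain the ordered index.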

Writing the Set Definition

The application programmer creates sets based on the ways in which application programs need to access data. Anyone who understands the database design and DASDL can write the DASDL definition for a set. Figure 2-5 shows the PERSON-SET set description with labels identifying how DASDL expresses set components to the system.

Figure 2-5. DASDL Definition for the PERSON-SET Set

Keeping a Set Up-To-Date

A set is kept up-to-date in two ways:

- When application programs add, change, or delete individual pieces of data or records stored in the data set, Enterprise Database Server automatically makes the corresponding changes in the sets affected by those changes.
- The DBA maintains the structure of the set. For example, when necessary, the DBA
  - Keeps the set within maximum size limits
  - Adds, deletes, or changes a set definition
  - Monitors automatic Enterprise Database Server processes that use the set

Sectioned Sets

The two types of sectioned sets are logical and physical.

Logical

A logically sectioned set is one physical structure file within which logical boundaries for a number of sections are defined.

Physical

A physically sectioned set allows more records in the structure than logical sectioning and allows multiple set files for specifying greater key-range indexes. You can support a large number of records in a data set and improve the capacity of structure storage. Refer to the DASDL Reference Manual for instructions on how to use the syntax that enables you to define a physically sectioned set.

Purpose of Sectioning

When multiple application programs access a data set by way of a set, some contention for set resources occurs. As the number of programs rises, so does the amount of contention. Set sectioning reduces or, in some cases, eliminates set resource contention. Sectioning of sets is critical for achieving scalability on

- Systems with many processors
- Databases with large data sets

Requirements

Only disjoint index sequential sets can be sectioned. You specify section boundary values in DASDL and then perform a DASDL update.

Understanding the Nature of Set Contention

Contention for set resources occurs because Enterprise Database Server locks the entries in the coarse and fine tables of the set whenever a program

- Adds or deletes an entry. The program must lock those portions of the set that could possibly be changed as a result of the addition or deletion. Depending on how full the set tables are at the time, many levels of set tables can be locked.
- Searches the set for an entry. To ensure that entries do not change in the middle of the search, Enterprise Database Server locks the portion of the set where the search is taking place. For find and lock operations, Enterprise Database Server typically locks only two tables on adjacent levels simultaneously.

How Traditional Sets Work

Locks on tables at higher levels prevent access to more tables than do locks at lower levels, creating an inherent performance bottleneck. The extreme case occurs when the root table (the uppermost table in the set) is locked, preventing access to the set by other programs. As more programs access or manipulate set entries, more locking occurs. The key to removing this bottleneck is to divide the set in such a way that locks at higher levels affect fewer entries.
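The idea of dividing a set by static boundary values, so that each key is routed to one section and locking is confined there, can be sketched as a toy model. The boundary values are hypothetical examples, not values from the EMPLOYEEDB definition.

```python
import bisect

# Static boundary values (chosen in DASDL) form the section table.
# Keys below the first boundary go to section 0, and so on.
boundaries = ["333333333", "666666666"]

def section_for_key(key):
    """Route a key to its section; in a sectioned set, only that
    section's tables would ever need to be locked for this key."""
    return bisect.bisect_right(boundaries, key)

print(section_for_key("111223333"))  # 0
print(section_for_key("354874321"))  # 1
print(section_for_key("999999999"))  # 2
```

Because the boundary values never change as entries come and go, the routing table itself needs no locking, which is exactly the property the section table exploits.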

How Enterprise Database Server Extended Edition Sectioned Sets Work

Sectioned sets enable you to define the root table entries as a set of static values that

- Become the boundary values for the sections
- Form the first-level table in the set, the section table

Because the values in the section table are fixed and do not change with the addition or deletion of entries, Enterprise Database Server has no reason to lock the section table.

Root Table for Each Section

The root table for each section is the next level of table below the section table. The root table is the first level where locking occurs. By forcing the first level of locks to occur below the section table, the entries that a task can lock are those in one section only. Even if access to one section is locked by a program, access to other sections can continue. When section boundaries are specified so that the set entries being accessed are evenly distributed among all sections, contention for set resources can be greatly reduced, with a corresponding increase in database throughput. In extreme cases, application programs can be adjusted so that they work on discrete sections of a set, thereby eliminating any possible contention.

Relation of Set Sections to Data Set Sections

Set sectioning is completely independent of data set sectioning. You can specify sectioning for both data sets and sets, for neither, or for one but not the other. Although set sectioning is independent of data set sectioning, the SECTIONS option requires that the EXTENDED attribute be specified for the related data set. In addition, if a data set has multiple spanning sets, you can section any number of those sets. Each set has its own sectioning specification.

Subsets

Definition

A subset is identical to a set except that the subset need not contain an entry for every record of the data set. A subset is a file that indexes none, one, several, or all of the records in a data set.
A subset has no meaning apart from the data set. The collection of information in a subset differs depending on the type of subset that is being used.

Purpose
The subset structure enables an application program to access only those records of a data set that meet a particular condition. Like a set, a subset is created to speed up certain types of data retrievals from the data set records. However, access by way of a subset can sometimes slow down updates to data.

How Subsets Work
Suppose an application program compiles a list of people who are managers. The application programmer therefore creates the MANAGER subset (see Figure 2-6). To retrieve a manager record, Enterprise Database Server uses the smaller file, the subset, to point quickly to the corresponding records in the larger file, the data set. In this case, the resulting list of people would typically contain relatively few names. Figure 2-6 shows a sample portion of the PERSON data set with the MANAGER subset, which indexes the data set by the EMPLOYED value of 3.

Figure 2-6. Sample of MANAGER Subset for the PERSON Data Set
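The relationship between the PERSON data set and the MANAGER subset can be modeled in a few lines of Python. This is a conceptual sketch only: the Social Security numbers are invented, and the condition EMPLOYED = 3 comes from the figure description.

```python
# Conceptual model of a subset: an index holding keys of only those data set
# records that satisfy a condition (here EMPLOYED = 3, as in MANAGER).

person = {                      # data set keyed by SOC-SEC-NO (invented values)
    "111-22-3333": {"EMPLOYED": 3},
    "222-33-4444": {"EMPLOYED": 1},
    "333-44-5555": {"EMPLOYED": 3},
}

def build_manager_subset(data_set):
    """Rebuild the subset the way an automatic WHERE clause keeps it current:
    one entry per record that meets the condition."""
    return sorted(k for k, rec in data_set.items() if rec["EMPLOYED"] == 3)

manager_subset = build_manager_subset(person)
```

Scanning the small subset and following its keys into the larger data set is what makes retrieval of manager records fast.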

Writing the Subset Definition
Figure 2-7 shows the MANAGER subset description with labels identifying how DASDL expresses subset components to the system.

Figure 2-7. DASDL Definition for the MANAGER Subset

Keeping a Subset Up-to-Date
A subset is kept up-to-date in two ways:
- When application programs add, change, or delete individual pieces of data or records stored in the data set, Enterprise Database Server supports either of two ways of updating the subset:
  - Automatic update, when the subset DASDL definition contains a WHERE clause
  - Update by an application program, when the subset DASDL definition does not contain a WHERE clause
  Enterprise Database Server automatically updates the subsets in the sample EMPLOYEEDB database because each subset contains a WHERE clause.
- The DBA maintains the structure of the subset. For example, when necessary, the DBA
  - Keeps the subset within maximum size limits
  - Adds, deletes, or changes the subset definition when he or she changes the definition of the data item by which the subset indexes the data set
  - Monitors automatic Enterprise Database Server processes that use the subset

Data Items

Definition
A data item is an element of data. In Enterprise Database Server, a data item can also be the field (column) in a database record. For example, SOC-SEC-NO is a data item in the sample PERSON data set.

Purpose
The data item describes the data to be stored. For example, the name of an employee is different from the Social Security number of an employee and needs a different

definition. One difference is that the name requires alphanumeric characters, and the Social Security number requires numbers only.

How Data Items Work
The data item provides to Enterprise Database Server the identity, type, size, location, and attributes of one element of data for a database entity. The data item definition also provides one form of data security during an attempt to add, delete, or modify data. For example, when an application submits an update to a data item, Enterprise Database Server accepts the update if it corresponds to the data item definition. Otherwise, Enterprise Database Server rejects the change and reports an exception. The DBA adds, deletes, or changes data item definitions.

Kinds of Data Items
The following table lists and explains the types of data items supported by Enterprise Database Server.

Alphanumeric
  Defines: Words and characters, such as names, addresses, dates, and titles. By default, data is stored as EBCDIC characters; if requested, data is stored as Kanji characters.
  DASDL definition: PROJECT-TITLE ALPHA (20);
  Data example: Earthworm

Numeric
  Defines: Integers and decimals, with or without signs. Can include precision (number of places to the right of the decimal point), a scale factor (number of digits in an item), and a sign (+ or -).
  DASDL definition: PROJECT-NO NUMBER (12);
  Data example: 121221

Real
  Defines: Single-precision floating-point numbers that occupy one word (24 bits, up to 12 digits). Stored right-justified with leading zeros filling the gaps.
  DASDL definition: EMP-SALARY REAL (08,02);
  Data examples: 1567324.45, 129.25, 4800.99

Boolean
  Defines: TRUE and FALSE values. Data takes up one 4-bit digit of storage. Only the rightmost bit contains information.
  DASDL definition: RETIRED BOOLEAN;
  Data examples: 0 (FALSE), 1 (TRUE)

Field
  Defines: TRUE and FALSE values in 1 bit of space each (saves space). A field item can store either up to 48 Boolean values or an unsigned, nonnegative integer that is up to 48 bits long.
  DASDL definition:
    WORKS-ON FIELD
    ( DISK-PROJECT;
      PRINTER-PROJECT;
      TAPE-PROJECT;
      MEMORY-PROJECT;
    );
  Data examples: 0 (FALSE), 1 (TRUE)

Group
  Defines: Alphabetic or numeric items that can be viewed as a single item. Used to keep related data together, for example, name, street address, city, state, and postal code.
  DASDL definition:
    EMP-NAME GROUP
    ( FIRST-NAME ALPHA (15);
      MIDDLE-NAME ALPHA (01);
      LAST-NAME ALPHA (20);
    );
  Data example:
    Garcia, Mary
    432 Pollard Street
    Buncombe, IL 60645

Count item
  Defines: A binary integer value indicating the number of counted links that refer to the record. The system automatically adjusts this value.
  DASDL definition: WORKERS COUNT(03);

Filler
  Defines: Space in a record for future information. Filler items enable you to add new data items without reorganizing the database, saving time later in the life of the database.
  DASDL definition: FILLER SIZE 12;

Single Data Item Definition
The data item definition appears in the DASDL declaration that defines the data set. For example, the following partial sample declaration for the PERSON data set of the EMPLOYEEDB database defines some data items for each person's record in the data set:

PERSON DATA SET
( SOC-SEC-NO NUMBER (12);
  EMPLOYEE-ID NUMBER (12);
  ...
  US-CITIZEN BOOLEAN;
  GENDER NUMBER (1);
  SPOUSE-SSN NUMBER (12);
  EMPLOYED NUMBER (1);
  HIRE-DATE ALPHA (8);

Group Data Item Definition
Some data items contain parts that are most useful when kept together. The idea is that the system keeps the parts together and treats them as a unit. The PERSON data set contains several group data items: NAME, CURRENT-RESIDENCE, and NEXT-OF-KIN. The definition for the NAME group data item follows:

NAME GROUP
( FIRST-NAME ALPHA (15);
  MID-INITIAL ALPHA (1);
  LAST-NAME ALPHA (20);
);

As you can see, the data item is identified as a group by the word GROUP following the data item name in the definition. The components of the group item are enclosed in parentheses.
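The "one form of data security" role described under How Data Items Work can be sketched as a validator that accepts an update only when the value matches the declared type and size. This is illustrative Python, not detailed DMSII behavior; the two declarations are taken from the type table earlier in this section.

```python
# Hedged sketch: accept an update only if it conforms to the data item
# definition, in the spirit of the data-security check described above.
# Values are passed as strings for simplicity.

ITEMS = {
    "PROJECT-TITLE": ("ALPHA", 20),    # PROJECT-TITLE ALPHA (20);
    "PROJECT-NO":    ("NUMBER", 12),   # PROJECT-NO NUMBER (12);
}

def accepts(item, value):
    """Return True if 'value' conforms to the item's declared type and size."""
    kind, size = ITEMS[item]
    if len(value) > size:
        return False                   # too large for the declared item
    if kind == "NUMBER":
        return value.isdigit()         # NUMBER items hold digits only
    return True                        # ALPHA items accept any characters
```

An update such as "12A4" for PROJECT-NO would be rejected and, in the real system, reported as an exception.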

Global Data Items

Definition
A global data item is a data item, group item, or population item that is not a part of any data set, but that pertains to the database as a whole. Global data items are stored in one special record called the global record, declared in the DASDL outside the structure definitions. Often, the global record is placed just before the structure definitions in the DASDL file.

Purpose
A global data item
- Holds permanent information about the database as a whole or about a data set
- Acts as a placeholder for information that can be derived from the database

How the Global Data Item Works
Enterprise Database Server treats global data items according to their purpose and type. Consider the following examples and the contents of the sample global record that shows how they are defined in DASDL:
- The item TOT-EMP contains a count of the number of records in the EMPLOYEES data set.
- An aggregate item counts or sums data items in a data set. An item called TOT-SALARY could total all salaries in the EMPLOYEES data set. The item definition specifies that the salary total can have 12 digits to the left of the decimal point and 2 digits to the right of the decimal point.
- An application can use a numeric global data item called HIGHEST to retrieve the highest salary of an employee. The value of HIGHEST can contain a number with 8 digits to the left of the decimal point and 2 digits to the right of the decimal point.

Sample Global Record
The following global record shows how the preceding examples would be defined in DASDL:

TOT-EMP POPULATION (10000) OF EMPLOYEES;
TOT-SALARY AGGREGATE (12,02) SUM (SALARY) OF EMPLOYEES;
HIGHEST REAL (08,02);
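The derivable global items above can be modeled in Python as a recomputation over the data set. The salary values are invented; the item names and meanings follow the sample global record.

```python
# Conceptual model of the sample global record: TOT-EMP counts records in
# EMPLOYEES, TOT-SALARY sums SALARY, and HIGHEST tracks the maximum salary.
# The three employee records are invented for illustration.

employees = [{"SALARY": 4800.99}, {"SALARY": 129.25}, {"SALARY": 1567324.45}]

def derive_globals(recs):
    """Recompute the derivable global items from the data set contents."""
    return {
        "TOT-EMP": len(recs),                                    # POPULATION OF EMPLOYEES
        "TOT-SALARY": round(sum(r["SALARY"] for r in recs), 2),  # AGGREGATE SUM (SALARY)
        "HIGHEST": max(r["SALARY"] for r in recs),               # REAL (08,02)
    }
```

In the real system the server maintains POPULATION and AGGREGATE items automatically as records change; this sketch simply shows what values they hold.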

Keeping Global Data Item Definitions Up-to-Date
The DBA adds, deletes, or changes global data item definitions when necessary. Some global items can be changed dynamically. Others require an edit of the source DASDL definition file and its recompilation.

Accesses
Accesses retrieve records from direct, ordered, or random data sets. An Access functions like a set, but no physical file is associated with it. Instead, the Access defines the actual physical ordering of the records in the data set. Any record in the data set can be retrieved using an Access. Only one Access can be declared for each direct, ordered, or random data set. Accesses must not be declared for other types of data sets.

Sectioning for an Access
The SECTIONS option for an Access divides the corresponding data set into multiple physical files. A sectioned structure reduces internal Enterprise Database Server lock contention when these structures are accessed. The SECTIONS option requires that the EXTENDED attribute be specified for the related data set. You can specify the SECTIONS option for Accesses to disjoint direct and disjoint random data sets only. Accesses for ordered data sets cannot be sectioned.

Key Bound Values
You section an Access by specifying the SECTIONS option with key bound values. For each key item in the Access, you declare the key range by specifying the
- Upper bound value for ascending keys
- Lower bound value for descending keys
You separate each sectioning specification with a semicolon. The maximum number of sections you can declare with multiple bounds is 255. The result is that each section contains records whose key values lie within a specified key range.
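The following Python sketch shows how key bound values might partition an Access, using the two-bound group-key example from this section (G1 = 50; G1 = 75, G2 = 500). Whether a key exactly equal to a bound falls in the lower section is an assumption here; composite keys compare subitem by subitem.

```python
# Hedged model of ascending-key section bounds for a group key (G1, G2).
# Each tuple is one section's upper bound; None means "G2 unbounded".
SECTION_BOUNDS = [(50, None), (75, 500)]   # SECTIONS (G1 = 50; G1 = 75, G2 = 500)

def access_section(g1, g2):
    """Return the section number holding the record with key (g1, g2)."""
    for n, (b1, b2) in enumerate(SECTION_BOUNDS):
        if g1 < b1 or (g1 == b1 and (b2 is None or g2 <= b2)):
            return n
    return len(SECTION_BOUNDS)            # final section: everything above the bounds
```

Two bound specifications thus yield three sections: keys through G1 = 50, keys through (G1 = 75, G2 = 500), and everything above.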

Section Bounds Specification
For direct data sets, an Access can specify a group item. However, the section bounds specification must be specified using the subitems of the group key. It is recommended that the specification for the last section include specifications of all subitems of a group key. For example:

D DIRECT DATA SET
( G GROUP
  ( G1 NUMBER(2);
    G2 NUMBER(4);
  );
  INFO ALPHA(300);
);
A ACCESS TO D KEY IS G
  SECTIONS (G1 = 50;
            G1 = 75, G2 = 500
  );

Related Information Topics
For information about the following topics, refer to the sources listed:
- Adding, deleting, and modifying database structures: DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide
- DASDL language and options: DASDL Reference Manual
- Data structure types and their purpose: DASDL Reference Manual
- Defining data structures: DASDL Reference Manual
- Enterprise Database Server software overview: Section 1
- Monitoring the database: Enterprise Database Server Utilities Operations Guide
- Types of structures and their purpose: DASDL Reference Manual


Section 3
Defining Database Options in DASDL

In This Section
This section describes basic information about
- The EMPLOYEEDB database
- Auditing the database
- Creating the database
- Optional Enterprise Database Server database functions
- Managing structures
- Controlling access to data
- Naming system and tailored files
- Audit trail options
- Control file location and usercode
- Defining the restart data set

Facts About the EMPLOYEEDB Database
The sample EMPLOYEEDB database used in this guide exists for a fictional company called Creative Samples, Inc. Becoming acquainted with the pack names used by this company and other facts about the database (refer to the following table) can help you understand the sample code and commands that follow later in this section.

Database Facts
- Name: EMPLOYEEDB. Overall database name (includes data files and tailored software).
- Usercode: SYSDBA. Usercode under which the database is run.
- Family statement: HR otherwise HUBPACK. Primary family pack of SYSDBA and location for some database files. (HR is an abbreviation for human resources.)
- Secondary pack name: HUBPACK. Secondary family pack of SYSDBA and location of system and Enterprise Database Server software and data files.
- Primary audit pack name: HRAUDIT. Location for the primary audit trail.
- Secondary audit pack name: HR1AUDIT. Location for the secondary audit trail.
- Tape: EMPLOYEEDB/AUDIT# and EMPLOYEEDB/2AUDIT#. Final location for audit trails.

Auditing the Database Auditing the Database Introduction The most significant DASDL option that you set for a database defines whether the database is audited. Many other DASDL settings are appropriate for audited, but not unaudited, databases. Therefore, before considering any other DASDL options, you need to understand what Enterprise Database Server database auditing means. Definition Enterprise Database Server supports both logging changes to a database (auditing the database) and not logging changes (maintaining an unaudited database). If you specify the AUDIT option in the DASDL definition, the Accessroutines maintains a log of database changes called the audit trail. If you do not specify the AUDIT option in the DASDL definition, the system does not log data changes. The sample database in this guide is audited (see Appendix A). The AUDIT option appears under the OPTIONS statement, and the AUDIT TRAILS parameter appears under the PARAMETERS statement in the DASDL definition. Advantages of Auditing a Database Auditing a database assures you that if a database failure occurs, you have a record of database changes with which you can restore the database to a complete integral state. You potentially avoid Loss of information Corruption of information For a database that is updated in real time by interactive application programs, an audited database is the most secure option for your information resources. 3850 8198 001 3 3

Advantages and Disadvantages of Not Auditing a Database
In certain situations, avoiding the overhead of auditing might outweigh the benefits of auditing the database. An example is a database that is updated mainly in batch mode and whose application programs are easily restartable. In most situations, however, not auditing the database places your information resources at risk. After any interruption, all changes since the last offline database dump are lost, and you must reload the database from an offline dump.

Audit Trail

Introduction
The audit trail is a log of changes to the database. The audit trail is somewhat similar to the host system SUMLOG (a history of all system activity) except that the audit trail
- Records database update activity only
- Consists of separate, numbered files

Characteristics
An audit trail is
- A feature of an audited database
- A history of all events that physically change the database
- Generated by each database that has the DASDL AUDIT option set
- Automatically maintained by Enterprise Database Server as a single trail or as duplicate trails
- Written by the Accessroutines in numbered segments called audit files
- Stored on disk or tape

Purposes
Enterprise Database Server software uses an audit trail to
- Recover the database from an unusable state
- Provide restart information to user programs
- Reconstruct portions of the database that have been lost because of hardware errors
- Back out aborted transactions
- Roll back the entire database to a user-specified point
- Rebuild the entire database to a user-specified point

Single or Duplicate Audit Trail
Enterprise Database Server can generate a single audit trail or duplicate audit trails. If you do not specify DUPLICATE in the AUDIT TRAIL statement of DASDL, Enterprise Database Server generates a single copy of the audit trail. Having a copy of the audit trail in reserve, in case the original audit trail is lost or corrupted, makes good business sense: loss or corruption of the audit trail might mean that a corrupted or otherwise unavailable database cannot be recovered.

Audit Trail Contents
An audit trail consists of audit files containing various control records, a sequence of before-images and after-images, and index tables that Enterprise Database Server can integrate and use to rebuild the database should the need arise. The audit file provides a chronological history of all database update transactions.

Audit Files
An audit file is a numbered segment of the database audit trail. Enterprise Database Server assigns each audit file an audit file number (AFN) in the range 1 to 9999. After an audit file is numbered 9999, the sequence of audit file numbering starts again at 1. The name of the primary audit file is made up of the database name and the AFN in the following format:

EMPLOYEEDB/AUDIT347

If you use DUPLICATE, the name of the secondary audit file is made up of the database name and the AFN in the following format:

EMPLOYEEDB/2AUDIT347

You copy audit files with the COPYAUDIT utility. Because of the special nature of the audit file, always use the COPYAUDIT utility to copy an audit file; do not use a CANDE COPY command. The current audit file is the one being written to by the Accessroutines. Enterprise Database Server software keeps a record of the current audit file number in the database control file.
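The naming and numbering rules above are simple enough to sketch directly; the helper names here are hypothetical.

```python
# Sketch of the audit file naming scheme: database name plus AFN, with the
# AFN wrapping from 9999 back to 1 (never 0).

def audit_file_name(db_name, afn, duplicate=False):
    """Build the primary (AUDITnnn) or secondary (2AUDITnnn) file name."""
    prefix = "2AUDIT" if duplicate else "AUDIT"
    return f"{db_name}/{prefix}{afn}"

def next_afn(afn):
    """Advance the audit file number, wrapping after 9999."""
    return 1 if afn == 9999 else afn + 1
```

For example, `audit_file_name("EMPLOYEEDB", 347)` yields EMPLOYEEDB/AUDIT347, matching the format shown above.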

Sectioned Audit Files
A sectioned audit file is a single logical audit file divided into multiple physical files called sections. Together, the sections make up the audit file. The primary advantage of dividing audit files into sections is that multiple audit file I/Os can occur concurrently.

Single and Multiple I/Os
The MCP environment allows multiple I/Os to a single file, including database structure files. However, Enterprise Database Server restricts audit trail I/Os to one I/O at a time to enforce internal integrity constraints that enable a positive identification of the end of the audit trail. By using multiple physical files for audit file sections, Enterprise Database Server Extended Edition can initiate one write at a time to each section, even though prior writes to other sections have not yet completed. The result is an improvement in audit throughput, which in turn can result in higher database application throughput. For example, if Enterprise Database Server Standard Edition takes 50 milliseconds to perform one audit write, five audit writes require 250 milliseconds because all writes occur serially. However, using two audit sections, and assuming that an audit block can be filled and ready to write every 25 milliseconds, the system can write the same five blocks in 150 milliseconds, even though each write still takes 50 milliseconds.

Round-Robin Algorithm
Enterprise Database Server Extended Edition uses a round-robin algorithm for writes to audit file sections. Naming conventions for individual section files identify the audit sections by number. Writes to section 1 always precede writes to section 2, and so on until a write has been made to all sections; then the write algorithm begins again at section 1.
The Accessroutines monitors the status of audit writes with respect to program transaction states and ensures that all auditing prerequisites have been fulfilled before a program receives confirmation that a transaction is complete. Therefore, although writes to the audit trail occur in parallel and can even complete out of order, the serial requirements of the audit trail are still maintained. Although up to 63 individual audit sections can be defined, the combination of all the sections makes up the logical audit file. Consequently, all sections of an audit file must be present for Enterprise Database Server Extended Edition to recognize that the audit file is present.
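The 50-millisecond example above can be checked with a small simulation: blocks become ready every fill interval, writes go round-robin to the sections with at most one outstanding write per section, and the total time is when the last write completes. The function and its parameters are illustrative, not part of any Unisys interface.

```python
# Simulation of round-robin audit writes to sectioned audit files.
# blocks: number of audit blocks to write; sections: number of sections;
# fill: ms between blocks becoming ready; write: ms per physical write.

def total_time(blocks, sections, fill=25, write=50):
    free_at = [0] * sections      # time at which each section can start a write
    finish = 0
    for i in range(blocks):
        ready = i * fill          # block i is filled and ready at this time
        s = i % sections          # round-robin section selection
        start = max(ready, free_at[s])
        free_at[s] = start + write
        finish = max(finish, free_at[s])
    return finish
```

With one section the five writes serialize to 250 ms; with two sections they overlap and finish in 150 ms, reproducing the figures in the text.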

Determining an Optimal Number of Sections
Determining the optimal number of audit sections for a database requires some experimentation. In addition, because of the following factors, the optimal number of audit sections can change several times a day:
- The number of databases using a particular audit pack
- The number of tasks using those databases
- The work being done by those tasks

A recommended starting point for the number of audit sections depends on the number of physical packs available on the audit family. In general, the number of sections for an audit file should not exceed the number of packs in the audit family. Selecting more audit sections than there are packs in the audit family guarantees that at least two sections reside on a single pack. Although Enterprise Database Server can initiate writes in parallel, a physical disk unit performs its writes serially; therefore, having more than one section on a single pack provides no additional gain in audit throughput. In addition, because of the way the MCP allocates files, multiple file sections can reside on the same physical disk unit even though the number of sections does not exceed the number of packs. Therefore, while the number of sections can equal the number of physical disks, this number is not always optimal. As a starting point, however, set the number of sections equal to the number of physical disks until you gain more experience with the performance of sectioned audits.

PRINTAUDIT and COPYAUDIT Utilities
The PRINTAUDIT and COPYAUDIT utilities have been enhanced to accommodate audit sections. Interfaces to these utilities keep all audit references at the logical audit file level. Given the name of an audit file, these utilities automatically find the individual audit sections and perform their usual operations.
With these utilities, there is no need to refer to specific audit sections.

Variable Audit File Buffers
The maximum number of audit buffers is 11. Additional audit buffers enable
- The deployment of a new buffer queuing algorithm for sectioned audit files
- The system to absorb short bursts of additional audit activity without affecting overall database throughput

Advantages of Audit Buffer Flexibility
Varying the number of audit buffers when audit files are sectioned enables the database to absorb short periods of intense audit activity. The number of audit buffers changes automatically as the number of audit file sections increases or decreases, with approximately 10 audit buffers allocated for each section. Although the system adjusts the number of audit buffers automatically, you can assign the number of audit buffers manually by using the Visible DBS command AUDIT BUFFERS. The AUDIT BUFFERS command overrides the automatic calculation of the number of audit buffers.

Audit Buffer Saturation
Audit activity on a database can increase until all audit buffers are constantly filled. This situation is usually a sign that the disk subsystem is inadequate for the amount of audit activity. Adding audit buffers in this situation provides only temporary relief because the additional buffers also soon become filled. Eventually, the management of additional buffers can become more of a hindrance than a help. The recommended solution to constantly filled audit buffers is an examination of the disk subsystem to determine whether one or both of the following changes would relieve the situation:
- Additional audit sections
- Additional disk units or faster disk hardware
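A minimal sketch of the buffer-count behavior described above, assuming a flat 10 buffers per section; the text says "approximately 10," so the exact rule is an assumption, as is the function form of the AUDIT BUFFERS override.

```python
# Hedged model: audit buffers scale with the number of sections unless the
# operator fixes the count with the Visible DBS command AUDIT BUFFERS.

def audit_buffers(sections, manual=None):
    """Return the effective audit buffer count for a database."""
    if manual is not None:
        return manual             # AUDIT BUFFERS overrides the calculation
    return 10 * sections          # approximately 10 buffers per section
```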

Creating the Database Creating the Database Specifying Dollar ($) Options With the dollar ($) options, you specify whether to run (SET) the Enterprise Database Server programs that create the database control file, the DMSUPPORT library, and the RECONSTRUCT program as part of the DASDL compilation run, or to run the Enterprise Database Server programs separately (RESET) after the compilation. Making Database Structures Ready for Access On a New Database In the following statements from the EMPLOYEEDB DASDL definition, the UPDATE option is commented out because the database is new and about to be initialized: %UPDATE; INITIALIZE; The INITIALIZE option causes DMUTILITY to initialize (establish the start-up state for) the EMPLOYEEDB database structures. On an Established Database The UPDATE option enables the DASDL compiler to make changes in the database definition. Setting the INITIALIZE option has one of two functions, depending on whether the UPDATE option is set: When the UPDATE option is set, DMUTILITY initializes new data sets only. When the UPDATE option is not set, DMUTILITY initializes all database structures; that is, DMUTILITY deletes all data from all database structures. Caution Do not set the INITIALIZE option when you make changes to the DASDL definition of an established database. You can too easily forget to include the UPDATE option and mistakenly wipe out the data in the database. 3850 8198 001 3 9

Optional Enterprise Database Server Database Functions Optional Enterprise Database Server Database Functions Overview In the Enterprise Database Server environment, you can choose various functions for running and managing a database. The sample EMPLOYEEDB database definition illustrates some of these functions. Briefly, the functions have to do with Workspace size Storage locations Operations that ensure data integrity and security Processes that optimize data access Enterprise Database Server maintains each function in one of two default states: turned on (SET) or turned off (RESET). You can accept or change the default state. For some options, you can specify a value. Choosing Functions Each function potentially provides a benefit for your database. Each function also requires computer resources, such as Processing time Processing power Storage space Memory After you understand a function and weigh the benefits against the costs of turning on the function, you also take into consideration your preferences and the realities at your site. Setting and Resetting Options You must answer the questions: If the option is set, do I want to reset it (turn it off)? If the option is reset, do I want to set it (turn it on)? In other words, you decide to Accept the default setting for the option (do nothing). Change the default setting for the option. That is, include the appropriate syntax under the appropriate statement in the database definition. 3 10 3850 8198 001

Optional Enterprise Database Server Database Functions The word SET is used only in the DASDL statements that begin with a dollar sign ($). Consider the following statements from the sample EMPLOYEEDB database DASDL: $SET DMCONTROL $SET ZIP Now consider subsequent statements from the sample database: OPTIONS ( ADDRESSCHECK, AUDIT, KEYCOMPARE INDEPENDENTTRANS, %REAPPLYCOMPLETED, STATISTICS ); Default values for each option are either set or reset. You can use the default values to avoid using the actual words SET or RESET: If the default for the option is reset, naming the feature is sufficient to turn it on. For example, ADDRESSCHECK is reset (off) by default. You set it by including the word ADDRESSCHECK in DASDL. If the default for the option is set, you can still name the feature either for the sake of clarity or to specify a value for the feature that is different from the default value. Doing nothing (omitting a feature name), of course, accepts its default. For example, the sample DASDL accepts the LOCKEDFILE default of reset by simply not including LOCKEDFILE in its list of OPTIONS options. Introducing a Second Meaning for the Term Default Default most commonly means a value automatically assigned by a program or system when another value has not been specified by the user. The term has this common meaning in the preceding paragraphs about option defaults. Enterprise Database Server also uses the term as the name of a category of DASDL options relating to managing data structures while a database is active. The name of this category is simply DEFAULTS. 3850 8198 001 3 11

Optional Function Categories
In the beginning of the sample database definition (see Appendix A), some options appear by themselves, and others appear under one or more statements. The following table lists the individual options and option statements and their purpose.

- $ options: Compile DASDL.
- INITIALIZE: Instructs the system to initialize new data sets.
- UPDATE: Instructs the system to accept changes in an existing database definition.
- DEFAULTS: Sets location and verification processes for data structures.
- OPTIONS: Sets database audit and restart functions and Accessroutines transaction handling.
- PARAMETERS: Sets Accessroutines use of memory buffers and the frequency of syncpoints and controlpoints during transactions.
- AUDIT TRAIL: Sets location and duplication of the audit trail, the size of audit files, and audit file verification functions.
- CONTROL FILE: Sets the location of the control file and the usercode for the database.
- RESTART DATA SET: Sets the location of restart information in an audited database.

Managing Structures

DEFAULTS Options
The DEFAULTS options apply to all structures in the database while the database is running, unless an option is overridden later in the DASDL definition. The options that apply to data sets apply to all data sets and any structures. Here is the DEFAULTS statement for the sample database:

DEFAULTS
( CHECKSUM = TRUE,
  REBLOCK = TRUE,
  REBLOCKFACTOR = 1,
  BUFFERS = 0 + 0 PER RANDOM USER OR 2 PER SERIAL USER,
  PACK = HUBPACK,
  DATA SET
  ( DIGITCHECK = TRUE,
    LOCK TO MODIFY DETAILS = TRUE,
    PACK = HUBPACK
  ),
  % SET
  % (
  % ),
  ALPHA (INITIALVALUE IS BLANKS),
  NUMBER (INITIALVALUE IS 0),
  BOOLEAN (INITIALVALUE IS 0),
  REAL (INITIALVALUE IS 0)
);

Identifying Parts of the DEFAULTS Statement
In the preceding DEFAULTS statement, four types of information appear within the outer set of parentheses:
1. Global options that apply to every file in the database
2. Options that apply to data sets only (within an inner set of parentheses)
3. Options that apply to sets only (within an inner set of parentheses, and rarely needed)
4. Initial values for the data types that occur in the database

Overriding a DEFAULTS Option for an Individual Structure
When you set a global or structure DEFAULTS option, you can override it for an individual structure later in the DASDL definition. For example, to turn off CHECKSUM for the PERSON-SET of the PERSON data set, you would add the final line to the following PERSON-SET definition:

PERSON-SET SET OF PERSON
  KEY IS SOC-SEC-NO,
  CHECKSUM = FALSE;

The following table explains the options in the DEFAULTS statement for the sample database and the options for data sets only.

- CHECKSUM: Detects input/output (I/O) errors by storing the checksum when writing a block and comparing the checksum when reading the block. Benefit: Prevents file corruption. Cost: Extra word of storage per block.
- REBLOCK: Enables two block sizes for the same structure simultaneously: a smaller block for random access and a larger block for serial access. Enables readahead. Benefit: Reduces elapsed serial access time. Cost: None.
- REBLOCKFACTOR: Designates the size of a large block in relation to the size of a small block. Benefit: Reduces elapsed serial access time. Cost: None.

- BUFFERS: Controls the number of buffers that the system allocates for a structure, in addition to the number of buffers allocated to each program that calls the structure. Benefit: Works with the REBLOCK and REBLOCKFACTOR options to optimize random and serial access. Cost: None.
- DIGITCHECK: Examines all required data items of the type NUMBER during a STORE operation. Benefit: Ensures that data item values remain unchanged when ported to other enterprise servers. Cost: None.
- LOCK TO MODIFY DETAILS: Ensures that programs lock the master record before they add, delete, or modify records belonging to the master. Benefit: Guarantees that two programs cannot concurrently modify records subordinate to the same master. Cost: None.
- PACK: Assigns the location of database files. Benefit: Promotes efficient pack space management. Cost: None.
- INITIALVALUE: Controls the value assigned to data items when a new record is created. Benefit: Promotes value consistency in unpopulated fields. Cost: None.

Controlling Access to Data

Accessroutines Program

Enterprise Database Server controls access to database data with a software program called the Accessroutines, which is a collection of specialized routines that

- Enables many users to access the database at the same time. When several programs access the database concurrently, the Accessroutines ensures that the access is controlled and synchronous (that is, access requests take turns and do not conflict with one another).
- Performs standard unchangeable tasks, such as checking with the control file for the
  - Database continuity timestamp
  - DASDL update level (the Enterprise Database Server software version)
  - Structure format timestamp for each structure it opens
  - Structure version timestamps
  - File and software locations
- Provides optional services that you specify in the DASDL definition

Accessroutines Service Categories

You select optional Accessroutines services by listing options under two DASDL statements:

- OPTIONS: Option specifications deal with the characteristics of transactions and the auditing and restarting of the database.
- PARAMETERS: Parameter specifications deal with memory buffers that the Accessroutines uses and with the frequency of syncpoints and controlpoints during update transactions.

OPTIONS Definition

The specifications in the OPTIONS statement pertain to the Accessroutines. Here is the DASDL definition for the OPTIONS statement in the sample EMPLOYEEDB database.

OPTIONS (
    ADDRESSCHECK,
    AUDIT,
    KEYCOMPARE,
    INDEPENDENTTRANS,
    %REAPPLYCOMPLETED,
    STATISTICS
);

The following list explains the options in the OPTIONS statement that define Accessroutines tasks for the sample database.

ADDRESSCHECK
    Task: Detects input/output (I/O) errors by storing an addresscheck word when writing a block and comparing the word when reading the block. Consequence of a mismatch: an error report.
    Benefit: Eliminates corruption of the database by I/O errors.
    Cost: One word of storage per block.

AUDIT
    Task: Keeps a record of changes to the database in an audit trail.
    Benefit: Automatic and manual database recovery from software or hardware failures; minimizes the loss of data from a database interruption.
    Cost: Slightly slower performance; requires disk or tape space to store the audit trail.

KEYCOMPARE
    Task: Verifies that the key in the data set matches the key entry in the set or the subset through which the data set is accessed. Consequence of a mismatch: an error report.
    Benefit: Termination of database access when key mismatches are detected; prevention of database corruption.
    Cost: None.

INDEPENDENTTRANS
    Task: Enforces two-phase locking: an application must keep a record locked until the modification of that record is complete. Enables one program to abort a single transaction.
    Benefit: Ensures that the failure of one program does not affect other programs.
    Cost: Slightly slower performance.
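The KEYCOMPARE check can be illustrated with a small sketch, using in-memory Python dictionaries to stand in for a set and a data set; the structures and the field name are assumptions, not Accessroutines internals. The key held in the set entry is compared with the key in the record it points to, and a mismatch stops the access.

```python
def fetch_via_set(set_entries, data_set, key):
    # The set entry maps a key value to a record address.
    address = set_entries[key]
    record = data_set[address]
    # KEYCOMPARE: the record's own key must match the set entry's key.
    if record["SOC-SEC-NO"] != key:
        raise RuntimeError("key mismatch: set has %r, record has %r"
                           % (key, record["SOC-SEC-NO"]))
    return record

data_set = {100: {"SOC-SEC-NO": "512661112", "LAST-NAME": "Rama"}}
set_entries = {"512661112": 100}
assert fetch_via_set(set_entries, data_set, "512661112")["LAST-NAME"] == "Rama"
```

A set entry that points at the wrong record would raise the mismatch error instead of silently returning corrupt data, which is the benefit the option description claims.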

REAPPLYCOMPLETED
    Task: Writes a transaction to the audit trail at end transaction instead of writing all transactions at a syncpoint.
    Benefit: In case of a problem, only incomplete transactions are lost.
    Cost: Slightly slower performance.

STATISTICS
    Task: Maintains information about the run-time performance of the database.
    Benefit: Accessibility of performance information for tuning purposes; can be reset dynamically.
    Cost: Creates a printout each time the database terminates.

PARAMETERS Definition

The specifications in the PARAMETERS statement pertain to the Accessroutines. Here is the DASDL definition for the PARAMETERS statement in the sample EMPLOYEEDB database.

PARAMETERS (
    ALLOWEDCORE = 200000,  %Set to normal running value.
    CONTROLPOINT = 1,      %Setting depends on the site.
    OVERLAYGOAL = 1,       %Enables Enterprise Database Server to manage buffers.
    SYNCPOINT = 100,       %Setting depends on the site.
    SYNCWAIT = 1           %Valid with INDEPENDENTTRANS.
);

The following list explains the parameters in the PARAMETERS statement that define Accessroutines tasks in the sample database.

ALLOWEDCORE
    Task: Specifies the total number of words of main memory allocated to all database buffers at one time. The default value is 50,000 words; the maximum value is 268,435,455 words. The maximum value can be modified by using the Visible DBS command SM ALLOWEDCORE. Refer to the Enterprise Database Server Utilities Operations Guide for additional information.
    Benefit: Automatic buffer overlay or deallocation of buffers to assist in optimizing data access.
    Cost: None.

CONTROLPOINT
    Task: Specifies the number of syncpoints that occur before a new controlpoint occurs. The default value is 2; the maximum value is 4,095.
    Benefit: Limits the amount of audit information required by halt/load and abort recovery.
    Cost: Frequent controlpoints increase the number of I/O operations; infrequent controlpoints increase the number of audit records that recovery operations must process.

OVERLAYGOAL
    Task: Specifies the rate at which buffers are overlaid to the disk. The default value is 5 percent of the ALLOWEDCORE value for each minute; a value can be an integer or a decimal value between 0 and 100.
    Benefit: Promotes efficiency by enabling the Accessroutines to manage buffers for the database.
    Cost: Too high a number can degrade performance.

SYNCPOINT
    Task: Specifies the maximum number of transactions that can occur between syncpoints. The default is 100; the maximum number is 4,095.
    Benefit: Limits the amount of recovery time that is required if the database terminates abnormally.
    Cost: Frequent syncpoints increase the probability that a program will be suspended waiting for a syncpoint; infrequent syncpoints increase the number of audit records that recovery operations must process.

SYNCWAIT
    Task: Sets the time, in seconds, that a program waits for a syncpoint; forces a syncpoint after the designated time, even if the number of transactions identified by the SYNCPOINT option has not occurred. The minimum value is 1.
    Benefit: When the SYNCPOINT option is set, limits the time a program waits for a syncpoint.
    Cost: None.
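The interplay of SYNCPOINT, SYNCWAIT, and CONTROLPOINT can be summed up in a small sketch. This is assumed decision logic written in Python for illustration only; the real Accessroutines scheduling is more involved. A syncpoint is due after SYNCPOINT transactions or SYNCWAIT seconds, whichever comes first, and a controlpoint is due after CONTROLPOINT syncpoints.

```python
SYNCPOINT = 100    # transactions between syncpoints (sample database value)
SYNCWAIT = 1       # seconds a program waits before a syncpoint is forced
CONTROLPOINT = 1   # syncpoints between controlpoints (sample database value)

def syncpoint_due(transactions_since_last, seconds_since_last):
    # Either the transaction count or the wait time can trigger a syncpoint.
    return (transactions_since_last >= SYNCPOINT
            or seconds_since_last >= SYNCWAIT)

def controlpoint_due(syncpoints_since_last):
    return syncpoints_since_last >= CONTROLPOINT

assert syncpoint_due(100, 0)      # transaction limit reached
assert syncpoint_due(5, 1)        # SYNCWAIT expired first
assert not syncpoint_due(5, 0.2)  # neither trigger reached
```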

Naming System Files and Tailored Files

Introduction

Enterprise Database Server software files arrive with default names, and the generation of the database also creates tailored files with default names. Your site might have reason to change the default names of any of these files, and the DASDL definition gives you an opportunity to do so.

Definition of File Names

The DASDL definition of the EMPLOYEEDB database specifies the names of Enterprise Database Server system code files and tailored database code files. Here is the definition.

ACCESSROUTINES = SYSTEM/ACCESSROUTINES;
DATARECOVERY = SYSTEM/DMDATARECOVERY;
RECOVERY = SYSTEM/DMRECOVERY;
DMSUPPORT = (SYSDBA)DMSUPPORT/EMPLOYEEDB ON HR;
RECONSTRUCT = (SYSDBA)RECONSTRUCT/EMPLOYEEDB ON HR;
REORGANIZATION = (SYSDBA)REORGANIZATION/EMPLOYEEDB ON HR;

Audit Trail Options

The database has an audit trail only when the database is being audited. Therefore, you set audit trail options only when you have already specified AUDIT in the OPTIONS statement. Here is the audit trail definition for the sample EMPLOYEEDB database.

AUDIT TRAIL (
    AREAS = 10,
    AREALENGTH = 100 BLOCKS,
    BLOCKSIZE = 3600 WORDS,
    PACK = HRAUDIT,
    COPY TO TAPE (DENSITY = BPI6250, 2300) 1 TIMES AND REMOVE,
    DUPLICATED ON PACK = HR1AUDIT
        COPY TO TAPE (DENSITY = BPI6250) 1 TIMES AND REMOVE,
    UPDATE EOF = 3000
);

The following list explains the audit trail options for the sample database.

AREAS
    Task: Controls the maximum number of areas to be assigned to a file. The default value is 65; the range is 1 through 1000.
    Benefit: Helps specify the maximum size of the audit trails.
    Cost: Performance can be negatively affected by many areas on an active database.

AREALENGTH
    Task: Controls the number of blocks per area. The default is 100 blocks; the range is 1 through the disk pack size in segments.
    Benefit: Controls the amount of disk space allocated for each area; larger areas are better for performance on very active databases.
    Cost: Sometimes it is hard to find large areas.

BLOCKSIZE
    Task: Controls the size of blocks. The default is 900 words; the range is 900 through 4095 words.
    Benefit: Helps to improve I/O times.
    Cost: Performance can be negatively affected by small block sizes that cause extra record writes.
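The AREAS, AREALENGTH, and BLOCKSIZE values together bound the size of one audit file. For the sample definition the arithmetic works out as below; this is a quick illustration, and the 6-bytes-per-word conversion (reflecting the 48-bit MCP word) is an assumption used only for the byte figure.

```python
areas = 10              # AREAS = 10
blocks_per_area = 100   # AREALENGTH = 100 BLOCKS
words_per_block = 3600  # BLOCKSIZE = 3600 WORDS

max_words = areas * blocks_per_area * words_per_block
print(max_words)      # 3600000 words per audit file
print(max_words * 6)  # 21600000 bytes, at an assumed 6 bytes per word
```

Raising any one of the three options raises this ceiling proportionally, which is why the option descriptions frame them as controls on the maximum size of the audit trail.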

CHECKSUM
    Task: Detects I/O errors by storing the checksum when writing a block and comparing the checksum when reading the block. Set by default.
    Benefit: Prevents inadvertent file corruption.
    Cost: None.

PACK
    Task: Controls the location of the primary audit trail.
    Benefit: The default is the primary pack; security is better if the audit trail is kept on a pack specifically for this task.
    Cost: None.

COPY TO TAPE AND REMOVE
    Task: Controls the copying of the primary and secondary audit trails to tape and their removal from disk.
    Benefit: Second copy of an audit trail; storage of the copy on tape; minimizes disk and tape overhead for auditing.
    Cost: Tapes for storage.

DUPLICATED ON PACK
    Task: Controls the location of the secondary audit trail.
    Benefit: Second copy of an audit trail.
    Cost: Disk space for a secondary audit trail and an extra write of the audit trail.

UPDATE EOF
    Task: When the audit file is on disk, controls how often the end-of-file pointer in block 0 is updated. The default value is 100 blocks; the range is 1 through 10,000.
    Benefit: Can shorten recovery time.
    Cost: Causes an audit file or audit files to close and open. Too low a value slows down performance.

Control File Location and Usercode

Introduction

The control file for the database performs the following functions:

- Checks the date and timestamps to ensure that user programs and tailored database software are compatible with the database files
- Maintains audit control information
- Maintains dynamic database parameters
- Enforces database interlock control to enable functions, such as recovery, to have exclusive use of the database

Control File Definition

Here is the control file definition for the sample EMPLOYEEDB database.

CONTROL FILE (
    PACK = HUBPACK,
    USERCODE = SYSDBA
);

If you omit control file specifications in the DASDL definition, the system looks for the control file on the database default pack. Because the default pack might not be directly visible to all database applications, specifying the location in the definition enables such programs to find the control file. If an application cannot open the database control file, it cannot open the database.

The following list explains the control file specifications for the sample database.

PACK
    Task: Controls the location of the control file.
    Benefit: The control file location is available to all programs.
    Cost: None.

USERCODE
    Task: Identifies the usercode under which the database is run and under which the control file, the data files, and the audit files are stored.
    Benefit: Known usercode under which the database is stored and run.
    Cost: None.

Defining the Restart Data Set

Introduction

In an audited database, the restart data set maintains a count of the number of times the database has been in transaction state, that is, the number of times the database has been updated. While the database is active, the following processes store the last good restart area for a BEGIN-TRANSACTION operation and the last good restart area for an END-TRANSACTION operation:

- RECOVERY
- CLOSE (if the accessing program is discontinued)

A good restart area is a point at which the data is correct and complete. The restart data set information ensures that the database restarts at such a point.

Database Restart Definition

Here is the database restart definition for the sample EMPLOYEEDB database.

RST RESTART DATA SET (
    RDS-ID ALPHA(6) COMS-ID;
    RDS-PROG REAL COMS-PROGRAM;
    RDS-LOCATOR REAL COMS-LOCATOR;
    RDS-PROGRAM ALPHA(48);
    RDS-MIX-NO NUMBER(6);
    RDS-USER-INFO ALPHA(300);
), POPULATION = 100;

RESSET SET OF RST
    KEY IS (RDS-PROGRAM),
    DUPLICATES;

These specifications (explained more fully in the following list) identify the record information stored by the halt/load recovery or abort recovery programs in the restart data set. After a recovery or an abort operation, Transaction Server and application programs find the information in the restart data set about where they can restart. After restarting, an application program should delete its restart records.

RDS-ID
    Task: Enables Transaction Server synchronized recovery.
    Comments: For Transaction Server programs only; the value is ONLINE for online programs and BATCH for batch programs.

RDS-PROG
    Task: Enables Transaction Server synchronized recovery.
    Comments: The user program fills in the Transaction Server program designator.

RDS-LOCATOR
    Task: Enables Transaction Server synchronized recovery.
    Comments: The user program fills in the Transaction Server locator designator.

RDS-PROGRAM
    Task: Enables a program to find its restart records.
    Comments: The user program fills in an identifier for its program.

RDS-MIX-NO
    Task: Enables a program to find its restart records.
    Comments: The user program fills in the mix number for the current program.

RDS-USER-INFO
    Task: Enables a program to restart at a particular point.
    Comments: The user program fills in information needed to restart the program after a problem.

Related Information Topics

For information about...              Refer to...
Accessroutines                        Enterprise Database Server Utilities Operations Guide
Audit trail                           DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide
Control file                          DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide
DASDL language and options            DASDL Reference Manual
Naming files in a DASDL definition    DASDL Reference Manual
Restart data set                      Transaction Server Programming Guide; Enterprise Database Server Utilities Operations Guide

3 26
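How a batch program might use these fields after a recovery can be sketched as follows. This is illustrative Python, with a list of dictionaries standing in for the RST data set; the record contents are invented for the example. The program finds its restart records through RDS-PROGRAM, restarts from RDS-USER-INFO, and then deletes the records, as recommended above.

```python
def find_restart_records(restart_data_set, program_id):
    # RESSET is keyed on RDS-PROGRAM with DUPLICATES, so more than one
    # record may exist for a single program identifier.
    return [r for r in restart_data_set if r["RDS-PROGRAM"] == program_id]

rst = [{"RDS-PROGRAM": "EMPLOYEEDB/UPDATE", "RDS-MIX-NO": 4711,
        "RDS-USER-INFO": "LAST-EMP-ID=00000002"}]

records = find_restart_records(rst, "EMPLOYEEDB/UPDATE")
assert records[0]["RDS-USER-INFO"] == "LAST-EMP-ID=00000002"

# After restarting, the program should delete its restart records.
rst = [r for r in rst if r["RDS-PROGRAM"] != "EMPLOYEEDB/UPDATE"]
assert rst == []
```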

Section 4
Generating a Database

In This Section

This section provides information about using the DASDL source file to generate a new database and associated database files. The following text also provides information on

- Checking the DASDL syntax
- Compiling the DASDL definition
- Identifying tailored database files

Generating a New Database

Introduction

Once you have written the definition for a database in DASDL, you need to compile the definition file. Compiling the EMPLOYEEDB DASDL definition results in the creation of the following files:

- The database description file, control file, DMSUPPORT library, and RECONSTRUCT program. The purpose of these files is to communicate details about the database to application programs and standard Enterprise Database Server software during data access and maintenance operations.
- The empty physical data files and the audit trail. The purpose of the data files is to hold the database data. The audit trail logs changes to data in the database.

Steps in Generating a Database

The steps in generating a new database are

1. Check and correct the DASDL syntax (this step also happens when you try to compile DASDL).
2. Compile the DASDL file.

When the DASDL file compiles successfully, the database is generated and ready to be populated with data.

Before You Generate the Database

Before you compile DASDL, be sure that the code is free of definition errors. A successful DASDL compilation tells you only that the code is free of syntax errors. Carefully read your definitions and compare them with the design to ensure that errors such as an omitted data item or a wrong data type or field size are eliminated. If you generate a database containing a definition error, you might have to update or reorganize the database (see Section 11), depending on the error.

Checking the DASDL Syntax and Compiling the DASDL Source File

Perform the following separate syntax-checking and compilation operations:

1. Check the DASDL syntax.
2. When the DASDL source file is complete, accurate, and free from syntax errors, compile the source file.

DMUTILITY overwrites the database description file every time you attempt a compilation, whether or not syntax errors occur. If you do not perform the syntax-checking and compilation operations in sequence, you risk losing the version of the description file that you need to initialize or update the database.

Checking DASDL Syntax

The following command instructs the system to check the DASDL syntax only for the EMPLOYEEDB database definition:

GET EMPLOYEEDB; COMPILE AS $EMPLOYEEDB WITH DASDL SYNTAX

When the compiler finds syntax errors, the compiler displays the line numbers on which the errors occur and sufficient text for you to figure out how to fix the errors (see Figures 4-1 and 4-2, and Table 4-1).

Compiling the DASDL Source File

The following command compiles the EMPLOYEEDB DASDL source file and produces several files and an empty database:

GET EMPLOYEEDB; COMPILE AS $EMPLOYEEDB WITH DASDL

Correcting DASDL Syntax Errors

Syntax errors commonly occur in a code file of several pages. The compiler reports the first errors it discovers, up to a maximum number of errors. Perform the following steps repeatedly until the compiler does not report any syntax errors and compiles the file.

1. Take note of the first error reported by the compiler. Frequently the first error is also the cause of subsequent errors.
2. Open the DASDL file and make the first correction.
3. Save the file.
4. Transmit the command to compile the file.

G EMPLOYEEDB; COMPILE AS $EMPLOYEEDB WITH DASDL
#WORKFILE EMPLOYEEDB: DASDL, 342 RECORDS, SAVED
#COMPILING 9936
PACK HUBPACK,
00000532 ERROR:HUBPACK - RIGHT PAREN EXPECTED
00000542 ERROR:(DATA SET - SEMI COLON EXPECTED
RST RESTART DATA SET
00001300 WARNING:EMPLOYEEDB - AS OF SSR 52.1 POPULATIONINCR WILL DEFAULT TO 10%
#SNTX
#ET=0.8 PT=0.4 IO=0.5

Figure 4-1. Sample DASDL Syntax Error Messages

[00000428] 00000725 00428
_00500 DEFAULTS
_00520 (
_00522     CHECKSUM = TRUE,
_00524     REBLOCK = TRUE,
_00526     REBLOCKFACTOR = 1,
_00528     BUFFERS = 0 + 0 PER RANDOM USER OR
_00530         2 PER SERIAL USER,
_00532     PACK HUBPACK,
_00534
_00540 DATA SET ...
_01300 RST RESTART DATA SET
_01400 (
_01450     RDS-ID ALPHA(6) COMS-ID;           [REGAN8]
_01500     RDS-PROG REAL COMS-PROGRAM;
_01600     RDS-LOCATOR REAL COMS-LOCATOR;
_01650     RDS-PROGRAM ALPHA(48);
_01700     RDS-MIX-NO NUMBER(6);
_01800     RDS-USER-INFO ALPHA(300);
_01900 ), POPULATION = 100;
_01902
_

Figure 4-2. DASDL Code Showing Syntax Errors

Table 4-1. Explanations of Syntax Error Messages

Line 00000532
    Error: An equal sign is missing between PACK and HUBPACK. A right parenthesis is missing after HUBPACK.
    Comment: None.

Line 00000542
    Error: A semicolon is missing at the end of the previous DEFAULTS statement.
    Comment: Without the semicolon to end the statement, the compiler cannot recognize the beginning of the DATA SET statement.

Line 00001300
    Error: Warning about how the software will behave as of release 52.1.
    Comment: The future behavior of the software will override the behavior of the current population value.

Operations Included in the Sample Database DASDL Compilation

When the DASDL compiler successfully compiles the DASDL definition, operations occur in the sequence of the options set in the DASDL definition. In the sample, the name of the option causing the operation is identified in parentheses.

1. The DASDL compiler produces the description file from the DASDL source (see Figure 4-3).
2. SYSTEM/DMCONTROL produces the control file from the description file ($SET DMCONTROL) (see Figure 4-3).
3. Several Enterprise Database Server programs produce the DMSUPPORT library and the RECONSTRUCT program ($SET ZIP) (see Figure 4-3).
4. SYSTEM/DMUTILITY produces the empty database from the DMSUPPORT library and the control file (INITIALIZE) (see Figure 4-4).

Manually Initializing the Control File

You can manually initialize the database control file as an alternative to including the $SET DMCONTROL option in the DASDL definition. The following command initializes the database control file for the EMPLOYEEDB database:

RUN $SYSTEM/DMCONTROL ("DB = (SYSDBA)EMPLOYEEDB ON HR INITIALIZE")

Manually Creating the DMSUPPORT Library and RECONSTRUCT Program

You can manually compile the DMSUPPORT library and the RECONSTRUCT program as an alternative to including the $SET ZIP option in the DASDL definition. The following command compiles the database DMSUPPORT library and RECONSTRUCT program for the EMPLOYEEDB database:

START DATABASE/WFL/COMPILEDB ("DB = EMPLOYEEDB OBJECT = HR
    COMPILE = DMSUPPORT, RECONSTRUCT AUDIT = SET", "","","","","")

Tailored Database Files

Definition

During database generation, standard Enterprise Database Server programs tailor several files for use with the specified database only. These files are known as tailored files or tailored software. For the sample EMPLOYEEDB database, the following list gives the default names of the tailored files and some of their characteristics.

DESCRIPTION/EMPLOYEEDB (description file)
    Static; changed only by DASDL compilation. Serves as a basis for other tailored files. Used by application compilers.

EMPLOYEEDB/CONTROL (control file)
    Dynamic; continually accessed and changed during database access. Monitors and controls database access; considered the run-time extension of the description file.

DMSUPPORT/EMPLOYEEDB (DMSUPPORT library)
    Static; recompiled after a new description file is compiled. Source of database information at run time.

RECONSTRUCT/EMPLOYEEDB (RECONSTRUCT program)
    Static; recompiled after a new description file is compiled. Starts DMDATARECOVERY for row reconstruction in an audited database.

Figure 4-3. Flowchart of a DASDL Compilation

Figure 4-4. Creation of an Empty Database

After the Database Is Generated

After the database is initially generated, edit the DASDL source file to

- Add the UPDATE option and save the file. The reason to perform this task now is that you could forget to insert the option later, before you perform an update to this now existing database. If you forget, and the INITIALIZE option remains in the DASDL source file, the INITIALIZE option would generate the database from scratch again, deleting all data from the database in the process.
- Remove the INITIALIZE option, or at least comment it out. On an existing database, the INITIALIZE option initializes new structures during a database update. However, you can easily initialize new structures manually as well, making it unnecessary to retain the INITIALIZE option in the DASDL source file.

Related Information Topics

For information about...       Refer to...
Compiling tailored software    Enterprise Database Server Utilities Operations Guide; DASDL Reference Manual
Control file                   DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide
Generating a database          DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide
Tailored database files        "Tailored Database Files" in this section; DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide


Section 5
Populating a Database

In This Section

This section provides information about

- Methods of populating the database
- Running a batch application program
- Using the DMSUPPORT library during a software update with the DMUPDATE utility

Populating the Database

Definition

Populating the database means inserting data into the records and fields of a newly generated database.

Methods

The methods of populating a database are the same as the methods of updating a database:

- Online. The online application program accepts input from a terminal and updates the database interactively as data is entered into fields on the computer screen. Example: an online application that enters change-of-address information for an employee database.
- Batch. The batch application program reads data from a prepared data file and updates the database all at once, without computer operator interaction. Example: an application that summarizes the daily sales of a national discount chain from a data file containing the online sales, returns, and inventory transactions created during store hours.

This guide shows examples of the batch method.

Running a Batch Application Program

Introduction

The batch method of populating the database requires

- A data file that contains the data
- An application program whose purpose is to read the data file and instruct the Accessroutines to write data to, or remove data from, the appropriate database fields

The following sample data file, DATA/EMPLOYEE, is for the EMPLOYEEDB database:

00000001 512661112 111150 926910000 Raji Rama
00000002 677234423 102274 926910000 Leslie Hall-Steger
00000003 559456677 015066 926910000 Harlow Kasselbaum
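Each DATA/EMPLOYEE record carries, in order, an employee ID, a social security number, a birth date, a ZIP code, and the first and last names. Reading one record can be sketched as follows; this is plain Python for illustration, not the manual's COBOL program, and splitting on whitespace is an assumption about the sample layout.

```python
def parse_employee(line):
    # Split into six fields: ID, SSN, birth date, ZIP, first name, last name.
    emp_id, ssn, birth_dt, zip_code, first, last = line.split(maxsplit=5)
    return {"EMPLOYEE-ID": emp_id, "SOC-SEC-NO": ssn, "BIRTH-DATE": birth_dt,
            "ZIPCODE": zip_code, "FIRST-NAME": first, "LAST-NAME": last.strip()}

rec = parse_employee("00000001 512661112 111150 926910000 Raji Rama")
assert rec["SOC-SEC-NO"] == "512661112"
assert rec["LAST-NAME"] == "Rama"
```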

Compiling the Batch Application Program

The application program needs to be compiled by the COBOL74 compiler to produce an object code file that can be understood by the MCP and Enterprise Database Server software and run against the database. The following command makes the source file EMPLOYEEDB/UPDATE the current workfile of a CANDE session and compiles the source file to produce the object code file, OBJECT/EMPLOYEEDB/UPDATE:

GET EMPLOYEEDB/UPDATE; COMPILE EMPLOYEEDB/UPDATE

Running the Application Program

From CANDE directly:

RUN EMPLOYEEDB/UPDATE

From WFL in CANDE:

WFL RUN OBJECT/EMPLOYEEDB/UPDATE

Both commands start OBJECT/EMPLOYEEDB/UPDATE, the application program that reads the data file, DATA/EMPLOYEE. Data from the data file then populates the sample EMPLOYEEDB database.

Sample COBOL Program

The following example shows the COBOL source file for the batch application program, EMPLOYEEDB/UPDATE. This program reads the DATA/EMPLOYEE data file to populate the EMPLOYEEDB database.

*Identify the name and purpose of the program.
$SET LIST STACK OFFSET MAP LINEINFO ERRORLIMIT = 999
IDENTIFICATION DIVISION.
PROGRAM-ID. ABCDTEST.
AUTHOR.
DATE-COMPILED.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT EMPDATA ASSIGN TO DISK.
*Specify the format for an employee record and the data fields within the record.
DATA DIVISION.
FILE SECTION.
$PAGE
FD  EMPDATA
    VALUE OF FILENAME IS W-EMPLOYEE-ID.
01  EMPLOYEE-REC PIC X(80).

Running a Batch Application Program 01 EMP-REC. 03 EMP-ID PIC 9(09). 03 FILLER PIC X(01). 03 EMP-SSN PIC 9(09). 03 FILLER PIC X(01). 03 EMP-BIRTH-DT PIC 999999. 03 FILLER PIC X(01). 03 EMP-ZIP-CODE PIC 9(9). 03 FILLER PIC X(01). 03 EMP-FIRST-NAME PIC X(15). 03 FILLER PIC X(01). 03 EMP-LAST-NAME PIC X(15). DATA-BASE SECTION. DB EMPLOYEEDB ALL. *Specify error-handling information and identify file titles. WORKING-STORAGE SECTION. 77 ERROR-FLAG PIC 9 COMP. 88 NOT-FOUND VALUE 1. 88 STORE-EXCEPTION VALUE 2. 88 DELETE-EXCEPTION VALUE 3. 88 CREATE-ERROR VALUE 4. 88 STORE-ERROR VALUE 5. 88 ERROR-FOUND VALUE 1 THRU 9. 77 EOF-FLAG PIC 9(01) COMP VALUE O. 77 TRAN-COUNT PIC 9(04) COMP VALUE 0. 01 MSG PIC X(100). 01 RESULT PIC X(6) 01 MISC-DATA. 03 W-EMPLOYEE-ID. 05 FILLER PIC X(15) VALUE "DATA/EMPLOYEE.". 03 W-DMSUPPORT-NAME. 05 FILLER PIC X(22) VALUE "DMSUPPORT/EMPLOYEEDB". 05 FILLER PIC X(16) VALUE " ON HR.". 5 4 3850 8198 001

*Open the data file, open the database and update it, count changed records,
*then close the database, and close and save the data file.
$PAGE
PROCEDURE DIVISION.
MAIN-PARA.
    CHANGE ATTRIBUTE DEPENDENTSPECS OF EMPDATA TO VALUE TRUE.
    CHANGE ATTRIBUTE TITLE OF EMPDATA TO W-EMPLOYEE-ID.
    CHANGE ATTRIBUTE TITLE OF "DMSUPPORT" TO W-DMSUPPORT-NAME.
    OPEN INPUT EMPDATA.
    DISPLAY "** OPENING EMPLOYEE DATABASE UPDATE **".
    OPEN UPDATE EMPLOYEEDB.
    DISPLAY "** DATABASE OPENED **".
    PERFORM CREATE-EMPLOYEE THRU CREATE-EMPLOYEE-EXIT
        UNTIL EOF-FLAG = 1.
    DISPLAY "** NUMBER OF RECORDS CREATED = " TRAN-COUNT.
    CLOSE EMPLOYEEDB.
    CLOSE EMPDATA WITH SAVE.
    STOP RUN.
*State the procedure, CREATE-EMPLOYEE.
CREATE-EMPLOYEE.
    READ EMPDATA
        AT END MOVE 1 TO EOF-FLAG
        GO TO CREATE-EMPLOYEE-EXIT.
CREATE-EMPLOYEE-EXIT.
    EXIT.
*State the procedure, CREATE-REC.
CREATE-REC.
    CREATE PERSON
        ON EXCEPTION MOVE 4 TO ERROR-FLAG
        PERFORM ERROR-IT.
    PERFORM BUILD-RECORD THRU BUILD-RECORD-EXIT.
    BEGIN-TRANSACTION NO-AUDIT RST
        ON EXCEPTION PERFORM ABORT-IT.
    STORE PERSON
        ON EXCEPTION MOVE 5 TO ERROR-FLAG
        PERFORM ERROR-IT.

    END-TRANSACTION AUDIT RST
        ON EXCEPTION PERFORM ABORT-IT.
    ADD 1 TO TRAN-COUNT.
CREATE-REC-EXIT.
    EXIT.
*State the procedure, ERROR-IT.
ERROR-IT.
    IF NOT-FOUND DISPLAY "RECORD NOT FOUND".
    IF STORE-EXCEPTION DISPLAY "RECORD NOT STORED".
    IF DELETE-EXCEPTION DISPLAY "RECORD NOT DELETED".
    PERFORM ABORT-IT.
*State the procedure, BUILD-RECORD.
BUILD-RECORD.
    MOVE EMP-FIRST-NAME TO FIRST-NAME.
    MOVE EMP-LAST-NAME TO LAST-NAME.
    MOVE EMP-ID TO EMPLOYEE-ID OF PERSON.
    MOVE EMP-BIRTH-DT TO BIRTH-DATE.
    MOVE EMP-SSN TO SOC-SEC-NO OF PERSON.
    MOVE EMP-ZIP-CODE TO ZIPCODE.
BUILD-RECORD-EXIT.
    EXIT.
*State the procedure, ABORT-IT.
ABORT-IT.
*Specify result messages:
    CHANGE ATTRIBUTE TITLE OF "DMSUPPORT" TO W-DMSUPPORT-NAME.
    MOVE DMSTATUS(DMRESULT) TO RESULT.
    CALL "DMEXCEPTIONNAME OF DMSUPPORT" USING RESULT, MSG.
    DISPLAY "EXCEPTION CTGY: " MSG.
    CALL "DMSTRUCTURENAME OF DMSUPPORT" USING RESULT, MSG.
    DISPLAY "STRUCTURE NAME: " MSG.
    CALL "DMEXCEPTIONTEXT OF DMSUPPORT" USING RESULT, MSG.
    DISPLAY "EXCEPTION TEXT: " MSG.
    CHANGE ATTRIBUTE STATUS OF MYSELF TO -1.
END-OF-JOB.

Using the DMSUPPORT Library During a Software Update with the DMUPDATE Utility

The following is a sample of program declarations and application text for an ALGOL program:

BOOLEAN PROCEDURE GET_DMSUPPORT_UPDATED();
    LIBRARY DMSUPPORT;

BOOLEAN PROCEDURE ALGOLEXCEPTIONTEXT (RSLT, MSGTEXT);
    BOOLEAN RSLT;
    STRING MSGTEXT;
    LIBRARY DMSUPPORT;

IF ALGOLEXCEPTIONTEXT(DBRSLT, MSG) THEN
    BEGIN
    DELINKLIBRARY(DMSUPPORT);
    % call again in case the message is new
    ALGOLEXCEPTIONTEXT(DBRSLT, MSG);
    END;

IF GET_DMSUPPORT_UPDATED THEN
    DELINKLIBRARY(DMSUPPORT);

The following is a sample of program declarations and application text for a COBOL85 program:

77  UPDATE-RESULT PIC 9(11) BINARY VALUE ZERO.

CALL "GET-DMSUPPORT-UPDATED OF DMSUPPORT" GIVING UPDATE-RESULT.
IF UPDATE-RESULT IS NOT EQUAL TO ZERO
    CANCEL "DMSUPPORT".

Software Interaction for an Open Database

Figure 5-1 shows the interaction of Enterprise Database Server software when an audited database is open.

Figure 5-1. Software Interaction When the Database Is Open

Related Information Topics

For information about...              Refer to...
COBOL language interface              COBOL ANSI-74 Reference Manual, Vol. 2; COBOL ANSI-85 Reference Manual, Vol. 2
Sample EMPLOYEEDB DASDL definition    Appendix A
Writing application programs          Enterprise Database Server Application Programming Guide

Section 6
Managing an Active Database

In This Section

This section includes information on

- Database administration
- Database maintenance
- Enterprise Database Server database services
- Common maintenance tasks
- Host system tasks
- Initializing database files
- Maintaining the database control file
- Keeping track of a processing job
- Troubleshooting
- Discontinuing a program
- I/O errors

Administering a Database

What the DBA Needs to Know

To make sound decisions about what options work best for day-to-day database operations, for unusual situations, and for long-range planning, the DBA needs to understand

• Employer and application program information management goals
• How the host system works
• How Enterprise Database Server works
• How Enterprise Database Server software and host system software interact

DBA Administrative Tasks

A DBA is primarily responsible to his or her employer to keep business information complete and accessible. In carrying out this primary responsibility, the DBA

• Informs the information systems manager about computer resource requirements for databases and their application programs
• Helps formulate, test, and make known disaster recovery plans for the company information systems
• Works with application programmers and new database designers to
  - Learn their needs for access to the database
  - Communicate to them the characteristics that the DBMS and database require of their programs so that the programs can run efficiently
  - Negotiate issues about use of computer resources
• Changes database structures when they must be added, deleted, or modified
• Establishes the maintenance schedules for databases
• Executes, or supervises the execution of, database maintenance tasks

Maintaining a Database

Introduction

The updating of the database by application programs is, in some respects, the main event for a database. But besides the main event, the database needs regular maintenance to ensure that it continues to meet business requirements.

Benefits of Good Database Maintenance

Recommended database maintenance practices enable you to

• Use computer resources efficiently.
• Serve your customers well by preventing downtime (time when the database cannot be accessed).
• Keep surprises in running the database to a minimum.
• Be prepared for the unexpected events that do happen. (Nobody schedules power failures.)
• Be informed about how the database is performing, the demands being made on it, and how it is using system resources.

Scheduling Database Maintenance

The DBA designs and schedules the database maintenance routine. The routine needs to be coordinated with other demands on the database. Usually, the best time for maintenance is when few or no users are active in the database. For example, the fewer changes made to the database during the daily copy (dump) of the database, the quicker database replacement can be done with that dump if you perform a rebuild, restore, or reconstruct operation shortly after the dump. When possible, it can be more efficient to schedule maintenance tasks separately from the running of database update and inquiry programs. Some slowing can occur when user and maintenance tasks compete for computer resources.

Enterprise Database Server Provisions for Maintenance Tasks

Enterprise Database Server provides software geared to perform many standard services for the database. You perform maintenance tasks by running the provided software with whatever options and parameters benefit the database situation at your site.

Enterprise Database Server Database Services

Services at a Glance

Enterprise Database Server database services control, monitor, back up, recover, access, and guard the database files and accessing applications. The following list presents each Enterprise Database Server database service, the Enterprise Database Server program that performs it, and the tasks that program performs. The tailored software programs listed are the control file, the description file, the DMSUPPORT library, the RMSUPPORT library, the RECONSTRUCT program, and the REORGANIZATION program. All other files listed are standard Enterprise Database Server software files.

• Control all access to the database, including Enterprise Database Server program access; provide the database definition to other programs: control file and DASDL compiler.
  Control file tasks:
  - Admits or denies access to the database.
  - Holds dynamic database parameters (version, timestamp, and so on).
  - Matches versions and timestamps of software and database files before running the database.
  - Controls special processing such as halt/load recoveries and reorganizations.
  - Checks integrity and consistency of data files.
  - Provides interpretive information.
  - Maintains the status of database files and of audit file parameters.
  DASDL compiler tasks:
  - Checks DASDL syntax and creates a database description file that other software programs can read.

• Describe the database to other software programs: description file.
  - Provides database format for tailored software compilation and for application programs.
  - Provides access to database format by nontailored software.
  - Helps create the database control file and the DMSUPPORT library.
• Control both physical and logical access to the database: Accessroutines.
  - Synchronizes and controls data access for programs accessing the database simultaneously.
• Provide the database configuration: DMSUPPORT library.
  - Helps the Accessroutines to locate data.
• Enable global transactions: RMSUPPORT library.
  - Creates two data sets for use by Open Distributed Transaction Processing, and adds them to the description file.
• Recover the database: DMRECOVERY program.
  - Automates recovery of an audited database.
• Reconstruct parts of the database: RECONSTRUCT program.
  - Reconstructs specific rows of a database.
• Create backup database files, reload database files, initialize database files, print data and control information, and start various forms of recovery: DMUTILITY program.
  - Dumps and loads database files.
  - Prints data and control information for database files.
  - Initializes all or specific database structures.
  - Initiates nonautomatic recovery for audited databases.

• Change database structures: REORGANIZATION program.
  - Defines the changes to be made to database structures after an update to the database DASDL.
• Specify reorganization parameters: BUILDREORG program.
  - Enables reorganization specifications to be entered into the compilation of the REORGANIZATION program.
  - Starts the compilation of the REORGANIZATION program.

Common Maintenance Tasks

Like many other maintenance tasks, database maintenance tasks can be categorized by the overall service they help to achieve. For example, the next five sections of this guide focus on the five primary categories of service and the tasks connected to them:

• Backing up the database
• Recovering the database
• Monitoring the database
• Using audit files as a diagnostic tool
• Updating and reorganizing the database

The remainder of this section provides explanations and examples of other database maintenance tasks and ways to do generic tasks that you can use at any time, for example, file-equating a file.

Host System Tasks

Introduction

Frequently, Enterprise Database Server requires you to perform host system tasks because Enterprise Database Server software works so closely with host system software. Brief explanations and examples for some of these tasks follow.

Designating a Library for a Function

Some Enterprise Database Server operations require you to designate which library to use for an operation function.

Turning On the Designation

   SL RDBSUPPORT = *SYSTEM/RDBSUPPORT ON DISK

Instructs the system to use the nonusercoded SYSTEM/RDBSUPPORT code file on DISK when the RDBSUPPORT function is needed. SL is the Support Library system command.

Turning Off the Designation

   SL - RDBSUPPORT

Removes the designation of the code file that was most recently established for the RDBSUPPORT function.

File-Equating a File

Some software needs a body of information to perform its job, but any one of several files could provide the information. When you perform the task, you need to tell the system which file to access. The mechanism you use is a file-equation statement. The following statement specifies the database description file as the source of DASDL information needed by the DBCERTIFICATION program:

   FILE DASDL(TITLE = DESCRIPTION/EMPLOYEEDB ON HR);

Initializing Database Files

Introduction

Before any Enterprise Database Server program can access a new database, or a newly created structure within an existing database, the DMUTILITY program must initialize the database files. The DMUTILITY INITIALIZE statement can initialize either all database files or specific structures only. The initialization explained here is the same operation that can be performed during the compilation of the DASDL source file when the INITIALIZE option is set.

Effects of Initialization

The effects of initializing new database files vary depending on the type of database structure involved. For a disjoint data set (the type of data set for the sample EMPLOYEEDB database in this guide), the following structures are also initialized:

• Sets that refer to the data set
• Embedded structures within the data set

Caution

Initializing existing populated database structures deletes all the data from the structures if you do not also set the DASDL UPDATE option. In addition, initializing an existing data set can invalidate the values of the COUNT, AGGREGATE, and POPULATION data items.

Initialization Task Examples

Initializing a New Database

   RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB INITIALIZE ALL")

Initializes all database structures.

Initializing a New Database Structure

   RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB INITIALIZE DEPARTMENT")

Initializes the data set called DEPARTMENT and its set DEPARTMENT-SET.

Maintaining the Database Control File

Introduction

A control file controls each active Enterprise Database Server database. The control file, by default called <database name>/control, must be present whenever the database is open. All Enterprise Database Server software programs involved in making the database available to users depend on the control file for information, direction, and other services. A DBA should understand control file structure and functions.

Control File Functions

The control file

• Contains the timestamps for the database software and files. The Accessroutines uses timestamps to check the validity of data.
• Contains the update levels of the database and the structures. The Accessroutines uses update levels to check the validity of data.
• Stores audit control information, dynamic database parameters, and other information.
• Guards the database from interruption while a process that needs exclusive access to the database (for example, a halt/load recovery or a reorganization) completes its tasks successfully.
• Ensures that a database that has been interrupted (discontinued) for any reason is not accessed until the integrity of the database is guaranteed by the successful completion of a recovery process.

How the Control File Performs Its Services

The control file contains information that programs need to continue on their way, or data that prevents them from continuing. A program cannot continue when it encounters either the name of the exclusive process or the in-use bit that is inserted into the control file when the database has been interrupted.

Tasks for Managing the Database Control File

The following list pairs each goal for initializing and maintaining the database control file with the corresponding task and the Enterprise Database Server software that facilitates it.

• Goal: Initialize the database control file, <database name>/control.
  Task: Compile the DASDL source file with the DMCONTROL option set, or run the DMCONTROL program with the INITIALIZE parameter.
  Software: DASDL compiler; DMCONTROL program

• Goal: Recover a control file that has become lost or corrupt.
  Task: Run the DMCONTROL program with the RECOVER INITIALIZE or RECOVER UPDATE parameter.
  Software: DMCONTROL program

• Goal: In the control file, change the family designations for the control file, the audit file, the database structures, and the Enterprise Database Server code files without performing a DASDL update.
  Task: Run the DMCONTROL program with the data file family change and the code file family change parameters.
  Software: DMCONTROL program

• Goal: Unlock the control file if any of the following operations fail: offline dump, offline copy, offline certification, initialization request.
  Task: Run the DMUTILITY CANCEL statement.
  Software: DMUTILITY CANCEL statement

Control File Task Examples

Creating a Temporary Control File

   RUN $SYSTEM/DMCONTROL ("DB = (SYSDBA)EMPLOYEEDB ON HR INITIALIZE")

Creates a temporary control file for use with the DMUTILITY RECOVER REBUILD command when the RECOVER REBUILD command copies a backup dump. The RECOVER REBUILD command overwrites the temporary control file with the control file from the backup dump.

Updating the Control File

   RUN $SYSTEM/DMCONTROL ("DB = (SYSDBA)EMPLOYEEDB ON HR UPDATE")

Causes a new control file to be created from the existing control file and the new DASDL description file after a DASDL update run.

Recovering a Control File

   RUN $SYSTEM/DMCONTROL ("DB = (SYSDBA)EMPLOYEEDB ON HR RECOVER UPDATE")

Creates a good control file from an old copy of the control file, the current description file, and your input when the current control file is lost or corrupted.

Caution

The audit file number that you furnish to the DMCONTROL software must be the number of the current audit file. Furnishing some other number can have disastrous results on the recovery of the database.

Unlocking the Control File

Initiating an offline dump, offline copy, offline certification, or initialization operation automatically sets the in-use bit, that is, locks the control file to other programs. Concluding these operations automatically resets the in-use bit (unlocks the control file). After a failure, however, you must reset the in-use bit manually. The following command resets the in-use bit in the control file after the failure of an offline dump, an offline copy, an offline certification, or an initialization operation:

   RUN $SYSTEM/DMUTILITY ("DB = (SYSDBA)EMPLOYEEDB ON HUBPACK CANCEL")

Changing the Location of Database Files

Occasionally, you need to change the location of database files, including structures or primary or secondary audit trails. Designate the new location in the control file so that programs accessing the database know where to find the relocated files. Use the following procedure:

1. Run DMCONTROL to specify the file name and pack location. DMCONTROL sets the family change flag. Family and code file name changes made in the DASDL source file cannot go into effect until the family change flag is reset.
2. Run DMCONTROL again using the OVERRIDE FAMILY option.
3. Run DMCONTROL again using the UPDATE option.

Primary Audit Trail

   RUN $SYSTEM/DMCONTROL ("DB = (SYSDBA)EMPLOYEEDB ON HUBPACK AUDITFAMILY = HR3")

In the control file, this command specifies HR3 as the disk location for the primary audit trail.

Secondary Audit Trail

   RUN $SYSTEM/DMCONTROL ("DB = (SYSDBA)EMPLOYEEDB ON HUBPACK SECAUDITFAMILY = TRANSPACK")

In the control file, this command specifies TRANSPACK as the disk location for the secondary audit trail.

Changing the Location of Code Files

When you change the location of database code files or the COPYAUDIT WFL job, designate the new location in the control file so that programs accessing the database know where to find the relocated files.

DMSUPPORT Library

   RUN $SYSTEM/DMCONTROL ("DB = (SYSDBA)EMPLOYEEDB ON HUBPACK DMSUPPORT FAMILY = SYSORG")

In the control file, this command specifies SYSORG as the DMSUPPORT library location.

COPYAUDIT WFL Job

   RUN $SYSTEM/DMCONTROL ("DB = (SYSDBA)EMPLOYEEDB ON HUBPACK COPYAUDITPRIWFL FAMILY = SYSORG")

In the control file, this command specifies SYSORG as the COPYAUDIT WFL job location for the primary audit trail.

Keeping Track of a Processing Job

Overview

The host system keeps close track of every task or job that it is processing. One tracking mechanism is the identification by number of every job and task that the system processes. The mix number of a job is a 4- or 5-digit number that identifies the process while it is executing. This number is stored in a task attribute called MIXNUMBER.

Finding a Mix Number

On the MARC screen, transmit A in the Action field. The system displays a list of active jobs on the system (see Figure 6-1).

   OUTPUT - MARC COMMAND OUTPUT                                   12:19:22
   Action: A
   HOme  GO  REturn  COmnd  STore  + -            (Press SPCFY for Help)
   Response returned at 12:19:07
   ---Mix-Pri--CPU Time------------ 294 ACTIVE ENTRIES ---------------------
   1383 50   :52 (C85TEST) *SYSTEM/TESTDRIVER/HELPER ON QUAL *
   2219 50   :00 (C85TEST) (C85TEST)OBJECT/NUL/SEQ/WRITE/READ_INTO/SQ108A ON QUAL
   1949 50  1:34 JOB (JEAN) *SYSTEM/XREFANALYZER ON SYS00
   2384 50   :00 JOB (HU) RUN
   1133 50  1:37 (C85TEST) *SYSTEM/TESTDRIVER/HELPER ON QUAL
   2966 50   :00 (MINH) (MINH)T2DB/RDB/AGENT
   2949 50   :00 JOB (MINH) (MINH)T2DB/RDB/SERVER
   2960 50   :01 (MINH) (MINH)T2DB/ACR/SERVER
   1084 50   :49 (PMSLICE) *SYSTEM/TESTDRIVER/HELPER ON QUAL
   2215 45   :02 (PMSLICE) (PMSLICE)OBJECT/MV441CC/BM/BNC29/C ON CTEST012
   4639 45   :02 (SLICETADS) (SLICETADS)SYSTEM/TESTDRIVER/HELPER ON USERMAST
   4877 45   :01 (SLICETADS) (SLICETADS)OBJECT/STADS/CC/ON/ARRAY/ERROR ON

   Figure 6-1. Jobs Displayed on MARC Screen

Troubleshooting

Overview

When the host system cannot complete a task you request, the system displays a message to let you know what the problem is so that you can fix it.

Example 1

Problem: You request the system to list (display) a file on your terminal, and a message similar to the following appears:

   #NO FILE ON <family name>

Cause: Either the file is not resident on your family or you made a mistake when typing the name of the file.

Solution: Either copy the file to your family or type the correct file name.

Example 2

Problem: You transmit the following family statement:

   FAMILY DISK PRODB OTHERWISE PRODALL

The system responds with the following message:

   #EQUAL SIGN EXPECTED. SCANNING PRODB

Cause: The statement requires an equal sign (=) between DISK and the family name PRODB.

Solution: Re-enter the statement with the equal sign in the appropriate place.

Where Messages Originate

Messages, often referred to as error or exception messages, can originate from any program that is running. Message formats vary because the software programs were developed by different programmers at various times.

Enterprise Database Server Messages to Application Programs

While an application program runs, Enterprise Database Server monitors all database processing. The Accessroutines returns any database processing exceptions and errors to the application program.

Exceptions

Enterprise Database Server divides exceptions and errors into 21 major categories. Each category deals with a general and specific cause for the condition. Each application program can decide how to handle an exception or error for each data management operation. The program can either

• Check the processing messages for exceptions or errors, and decide on an appropriate course of action.
• Ignore processing messages and have Enterprise Database Server handle the condition by discontinuing the program. This approach is referred to as masking the message.

An exception is a categorized notification to an application program by Enterprise Database Server software that a requested database operation was not performed. The exception is included in a category and subcategory exception listing and is returned to the program as part of the ON EXCEPTION syntax.

Errors

An error is a notification to an application program by the Enterprise Database Server software that a requested database operation could not be performed. An error is caused by a problem within the database, such as a corrupted data structure, or within the Enterprise Database Server software, such as corrupted code. An error is usually fatal to the database and produces a message on the terminal and in the system SUMLOG file. In most cases, the message includes a sequence number indicating where the error occurred.

Results of Enterprise Database Server Exceptions and Errors

Enterprise Database Server exceptions and errors

• Require the application program to stop (a fatal condition) and return a SYSTEMERROR. For example, if there is a hardware problem, Enterprise Database Server terminates the program until you correct the problem.
• Allow the program to continue as long as the program does not access corrupted control information (a nonfatal condition) and return an exception result. For example, if a CHECKSUM error occurs, Enterprise Database Server returns an exception to the program. If the program handles the exception, it can continue processing.
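As a sketch of the first approach, a data management statement can name a handling paragraph through the ON EXCEPTION phrase, in the style of the batch sample in Section 5; the PERSON data set and the ERROR-IT paragraph are the names used in that sample:

```cobol
    STORE PERSON
        ON EXCEPTION
            PERFORM ERROR-IT.
```

If the ON EXCEPTION phrase is omitted, the condition is masked, and Enterprise Database Server handles it by discontinuing the program.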

Where to Find Current Exceptions

You can find the current Enterprise Database Server exceptions in the file DATABASE/PROPERTIES. When you receive a new system software release or a new Enterprise Database Server IC (Interim Correction), issue a CANDE WRITE command to print sequence range 32000000 to 32999999 in the DATABASE/PROPERTIES file. This action provides you with a hard-copy list of current exceptions.

To check on errors and exceptions, you can use the PRINTAUDIT utility to print specific audit records. The records show the cause and disposition of errors and exceptions.

Discontinuing a Program

Introduction

When a program is stopped in the midst of its processing, it is said to be discontinued, or terminated abnormally. A process can be discontinued by operator commands, by statements in related processes, or by the system software.

How to Discontinue a Program

To stop a program or one of its tasks once it has begun processing, use the DS (Discontinue) system command. Identify the job or task by providing the program mix number in front of the command. For example, to discontinue the task with mix number 57312, type and transmit the following command in CANDE:

   ?57312 DS

What happens to the items in the program that were processed before the DS command depends on the type of program that is discontinued.

When the System Discontinues a Program

Sometimes an event occurs that makes it impossible for the system to continue processing a program. In that case, the system discontinues the program and displays the message (P-DS) along with any other explanation of the problem.

I/O Errors

Definition

An I/O operation is one in which the system reads data from or writes data to a file on a peripheral device such as a disk drive. An I/O error amounts to the failure of a read or write operation.

Handling I/O Errors

If errors occur in reading from or writing to tape or disk while DMUTILITY is creating or restoring a database dump, DMUTILITY takes an automatic action or an operator takes a manual action, as follows:

• Tape, write error: DMUTILITY relabels the tape as BADTAPE. The operator mounts an alternate tape.
• Tape, read error: The operator performs one of the following actions: retries the operation, skips the row, or quits the tape.
• Disk, read or write error: DMUTILITY displays the I/O error.

How the Accessroutines Handles Read Errors

A read error occurs when the Accessroutines tries to read a portion of the database and fails to receive proper data. The Accessroutines automatically retries the I/O operation once. If the retry fails, an operator

• Retries the operation again
• Locks the row
• Discontinues the worker by issuing a DS system command

Uncorrectable disk errors cause the worker task to fail.

When a read error occurs, the Accessroutines performs the following actions.

Note: Under some circumstances, you can force additional retries. If the retries are not successful, you have the option to lock the row.

1. Displays a message about the read error. For example:

   <job#> DISPLAY: (<usercode>)<database name>: *** READ ERROR ON STR #2,
   FILE: (<usercode>)<database name>/<file name> ON <pack name>
   <job#> DISPLAY: (<usercode>)<database name>: *** RSLT=XXXX...

2. If the error is a software error, automatically retries the read operation and displays the results, as follows:

   <job#> DISPLAY: (<usercode>)<database name>: *** READ FOR STR #2 ON...
   HAS BEEN RETRIED 3 TIMES WITHOUT SUCCESS.
   <job#> ACCEPT : R TO RETRY OR L TO LOCK THE ROW.

   Or

   <job#> DISPLAY: (<usercode>)<database name>: *** READ FOR STR #2 ON...
   WAS RETRIED 1 TIMES BEFORE SUCCEEDING.

3. If the retries fail, marks the row as having a read error and displays the following message:

   <job#> DISPLAY: (<usercode>)<database name>: *** READ ERROR BIT FOR ROW #5
   OF STR #2 ON <pack name> HAS BEEN SET.

4. Sends an I/O error to the application program requesting the data.

How the Accessroutines Handles Write Errors

A write error occurs when the Accessroutines tries to write a portion of data to disk and fails to do so. If retries of the write operation are unsuccessful, the Accessroutines writes the data to the audit file. However, the data in the physical disk file is no longer current and must not be accessed until it is made current. When a write error occurs, the Accessroutines performs the following actions:

1. Displays a message about the write error. For example:

   <job#> DISPLAY: (<usercode>)<database name>: *** WRITE ERROR ON STR #2,
   FILE: (<usercode>)<database name>/<file name> ON <pack name>
   <job#> DISPLAY: (<usercode>)<database name>: *** RSLT=XXXX...

2. Automatically retries the write operation and displays the results, as follows:

   <job#> DISPLAY: (<usercode>)<database name>: *** WRITE FOR STR #2 ON...
   HAS BEEN RETRIED 2 TIMES WITHOUT SUCCESS.

   Or

   <job#> DISPLAY: (<usercode>)<database name>: *** WRITE FOR STR #2 ON...
   WAS RETRIED 1 TIMES BEFORE SUCCEEDING.

3. If the retries fail, the operator can choose to try again or to lock out the row and continue by responding to the following message:

   <job#> ACCEPT : R TO RETRY WRITE OF STR #2 ON <pack name> OR L TO LOCK
   ROW #5 AND PROCEED

4. If the operator responds with L, the row is locked from further access and the following message is displayed:

   <job#> DISPLAY: (<usercode>)<database name>: *** ROW 5 OF STR #2 ON...
   HAS BEEN LOCKED OUT.

Related Information Topics

For information about the topics listed below, refer to the manuals shown:

• Accessroutines: DASDL Reference Manual
• Audit file and audit block numbers: Enterprise Database Server Utilities Operations Guide
• BUILDREORG program: Enterprise Database Server Utilities Operations Guide
• Control file: DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide
• COPYAUDIT program: Enterprise Database Server Utilities Operations Guide
• DASDL compiler: DASDL Reference Manual
• Description file: DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide
• DMRECOVERY program: Enterprise Database Server Utilities Operations Guide
• DMSUPPORT library: DASDL Reference Manual; System Operations Guide
• DMUTILITY program: Enterprise Database Server Utilities Operations Guide
• Effect of initialization on structure types: DASDL Reference Manual
• Exceptions and errors: Enterprise Database Server Utilities Operations Guide; Enterprise Database Server Application Programming Guide
• File equations: File Attributes Reference Manual
• File types: File Attributes Reference Manual
• Halt/load recovery: Enterprise Database Server Utilities Operations Guide
• Host system concepts and procedures: System Operations Guide

• INITIALIZE option as part of the DASDL compilation: Section 3; DASDL Reference Manual
• Mix numbers: System Operations Guide; System Commands Reference Manual
• RECONSTRUCT program: Enterprise Database Server Utilities Operations Guide
• Reorganization: Enterprise Database Server Utilities Operations Guide
• REORGANIZATION program: DASDL Reference Manual
• RMSUPPORT library: DASDL Reference Manual
• System commands: System Commands Reference Manual

Section 7
Backing Up a Database

In This Section

This section includes information about

• Making and keeping backup copies of database files
• Summary and order of backup tasks
• Backing up all or part of a database
• Backing up the database by increments
• Storing a dump
• Database activity during the dump
• Performing online dumps
• Performing offline dumps
• Tasks to be performed on an existing dump
• Verifying that the dump is complete and correct
• Copying or duplicating a dump
• Backing up audit files
• Backing up related database files

Making and Keeping a Recent Backup of Database Files

Introduction

Perhaps the single most important preventive maintenance task you can perform is to back up the database frequently and to keep the backups for a period of time.

Definition

To back up the database means to use the DMUTILITY program to make a copy of all or part of the database. The backup includes a check of the physical integrity of all database structures being backed up.

Frequency

The recommended frequency is daily, or more often if special circumstances exist. For example, making changes to the database structures is a special operation during which backups are recommended both before and after the changes.

Backing Up Related Database Files

A complete database backup includes a reserve copy of all the files pertaining to the database. These files include not only the database files and the control file (which change frequently), but also the DASDL source file, description file, tailored files, application programs, and audit files.

Purpose of the Backup: Database Continuity

Reserve copies of all the files necessary to the database enable you to put the database back in operation quickly in case the current database files become unavailable or damaged.

Definition of Dump

A dump is either

• A copy of stored data in which a change has been made since the previous dump of that data
• The transfer of all or part of the contents of one section of computer storage to another section or to some output device
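A dump is started with a DMUTILITY RUN statement of the same general form as the INITIALIZE examples in Section 6. The following sketch is illustrative only: the dump name EMPDUMP is an assumption, and the complete DUMP syntax appears in the Enterprise Database Server Utilities Operations Guide.

```
RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB DUMP ALL TO EMPDUMP")
```

A statement of this form backs up all database structures, together with the control file, to the named dump.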

Terms for Backing Up the Database

This guide uses the following terms when referring to database backups that you perform with the DMUTILITY program:

• The processes used to make a database backup are called backing up and dumping.
• The backed-up database is called a backup or a dump.
• A backup to tape is called a tape dump.
• A backup to disk is called a disk dump.

Terms for Backing Up the Dump

Once you have backed up the database, you are strongly encouraged to back up the database dump as well, because files can be deleted or made unusable. You can use either of the following commands to back up the database dump:

• The COPYDUMP command copies the backup to the same or a different type of media.
• The DUPLICATEDUMP command duplicates the backup to the same type of media.

In informal conversations, people frequently use the terms duplicates and copies interchangeably.

A Dump as Part of the Database Backup

The files that the DUMP command backs up are some or all of the database files and the control file. As suggested earlier in this section, a broad view of a database backup also includes having in reserve all the other files that the database needs. When you plan the backing up of these files, keep in mind that you

• Back up these files only when they change (although there is no harm in backing up the files more frequently).
• Use utilities other than the DUMP command to copy the files.
• Usually do not need these files for an ordinary manual recovery. These files assure you that you are ready for the unusual or disastrous situation.

Backing Up Database Files Other Than Dumps

To reestablish the current database from scratch, you must have in reserve the following files:

- DASDL source file
- Audit files (copies of both the primary and secondary audit trails)
- Tailored software: DMSUPPORT library, RECONSTRUCT program, and, if your site uses the Open Distributed Transaction Processing product, RMSUPPORT library
- Database description file

You back up a database application program after a database reorganization that causes the application program to change. Otherwise, you back up an application program when you back up the pack on which it resides.

Summary and Order of Backup Tasks

Overview

The following table provides an overview of the goals you want to achieve in backing up the database, the tasks you perform to achieve those goals, and the Enterprise Database Server software that facilitates each task. The table lists the tasks chronologically, beginning with creating the backup.

Goal: Keep a recent backup of the database in reserve in case automatic recovery of the current database becomes impossible.
  Task: Back up (dump) the database; back up frequently (daily or more often) and regularly.
  Utility: DMUTILITY DUMP command

Goal: Make sure the backup is usable.
  Task: Verify that the backup is an integral (error-free) duplicate of the database.
  Utility: DMUTILITY VERIFYDUMP command

Goal: Keep a copy of the backup in reserve in case the original backup becomes unusable.
  Task: Copy the database backup on the same or another medium, or duplicate the database backup on the same medium.
  Utility: DMUTILITY COPYDUMP command; DUPLICATEDUMP command

Goal: Remove outdated database files.
  Task: Delete the reserve database backup files. Include the dump if no dump tape directory exists.
  Utility: CANDE REMOVE command

Goal: Make a reserve copy of the audit file if duplicate audit trails are not used.
  Task: Copy the audit files.
  Utility: COPYAUDIT program

Goal: Keep reserve copies of application programs, database description file, and tailored software.
  Task: Copy the files by using the library maintenance ADD or COPY command with the VERIFY and COMPARE options.
  Utility: CANDE COPY command

Goal: Catalog nondump database backup files.
  Task: Set up a system of file names and directories to keep track of backup files, their dates, and locations.
  Utility: None

Enterprise Database Server dump files can be encrypted using DMUTILITY encryption. Data files can be encrypted when they are copied from disk to tape or from disk to disk as part of a DMUTILITY DUMP operation.
Before you can perform DMUTILITY encryption, your security administrator must establish machine encryption keys by using the Security Center MMC snap-in that runs on the security administrator's workstation. This workstation could be a separate Windows-based computer or, if you are using MCPvm systems, the Windows side of a ClearPath system. Security Center refers to these keys as tape encryption keys. See the Security Administration Guide and the Security Center Help for additional information about configuring, exporting, and importing tape encryption keys.

Software encryption and decryption affect the total time to transfer data to tape and result in increased processor usage. Performance is affected by tape drive throughput, the performance level of the MCP system, the performance level of the hardware performing the encryption, and competing workloads. It is recommended that you introduce encryption to your database environment incrementally to ensure that adequate resources are available and to prevent encryption and decryption from having adverse effects on existing processes.

Encrypted files are automatically decrypted when copied to disk as part of the DMUTILITY RECOVER, RESTORE, COPY, CLONE, and STRUCTURECLONE operations.

Caution

Do not copy the database and audit files with library maintenance commands. These commands do not provide the database integrity checking that the DMUTILITY DUMP commands provide.

Backing Up All or Part of the Database

Introduction

You can back up the entire database or one or more of the following:

- Specific files on a named pack
- Specific files within a family index range
- Specific rows in a named file

When to Perform a Partial Dump

A partial dump consumes less time and fewer resources than a full database dump. In general, a partial dump of specific database files is beneficial when only defined parts of the database have changed, or when you are considering changes to a particular structure. For example, a partial dump makes sense when

- You need to back up the results of a reorganization process for three changed database structures. You dump the three structures only.
- A program has just populated a newly created database structure. You dump that new structure only.
- A program encounters nonfatal read errors. You dump only the rows on which the errors occurred.
- You want to test and assess planned changes to a structure. To test on another system, you dump the structure with the DMUTILITY partial dump operation. To test on the same system, you copy the structure with library maintenance commands.
- You have a very large database that cannot be dumped within a given time frame. You dump the database in increments over several days.

Backing Up the Database by Increments

Definition of an Increment

An increment is one of a series of regular consecutive additions. For example, if you have a database that is too large to back up daily, you could create a schedule that backs up a certain number of database files (an increment) each day until the entire database has been backed up. You would then repeat the schedule.

Large High-Use Databases and the Daily Dump Recommendation

Consider a large database that is online 12 to 24 hours a day, 5 to 7 days a week. Performing a daily dump of such a database might be virtually impossible because

- An offline dump is out of the question. You cannot exclude online update users from the database at the precise time they need to access it.
- An online dump process cannot complete because it is competing for processor time with the database's main activity: update and inquiry by primary users.

Examples of Solutions

Examples of solutions to the preceding situation of insufficient time to perform daily database dumps might be to

- Perform the backup at a time of day when database application use is lowest.
- Set up a schedule to back up in increments of approximately 20 percent of the database each weekday, and start with the first increment again on the following Monday.
- Make a business case for requisitioning
  - Additional equipment to reduce dump times
  - Extended business hours and additional personnel to perform dumps

Handling Incremental Dumps

With the DUMP command you can specify a series of database files for a dump, with each item in the series having its own

- Family index range
- Row range
- Family name
- Destination (with specific tape or disk file options)

Specifying a Family Index Order on the Backup

You can cause the database information to be backed up to tape or disk in the order in which it resides within a disk family. This option causes the system to back up files by structure according to the index within a disk family.
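The incremental schedule described earlier (roughly 20 percent of the database each weekday, repeating the following Monday) can be sketched as a simple partitioning problem. The following Python sketch is illustrative only, not Unisys software; the file names are hypothetical.

```python
# Illustrative sketch: partition a list of database files into five
# near-equal increments, one per weekday, repeating weekly.

def weekly_increments(files, days=5):
    """Split the file list into `days` consecutive groups of near-equal size."""
    base, extra = divmod(len(files), days)
    groups, start = [], 0
    for day in range(days):
        size = base + (1 if day < extra else 0)
        groups.append(files[start:start + size])
        start += size
    return groups

# Hypothetical file names for a 23-file database
files = [f"EMPLOYEEDB/FILE{i}" for i in range(1, 24)]
schedule = weekly_increments(files)
for day, group in zip(["MON", "TUE", "WED", "THU", "FRI"], schedule):
    print(day, len(group), "files")
```

Each weekday's group would then be named in that day's DMUTILITY DUMP command, so the whole database is covered once per week.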

The advantage of specifying a family index order for the dump is that database recovery from a bad index in the family can be more efficient. In addition, having all the structure-related information from a single disk family or family index together

- Minimizes pack contention during the reloading of the dump for any recovery purposes
- Specifically speeds up a recovery from
  - A pack lost in a multipack family
  - Read and write errors

INCREMENTAL and ACCUMULATED Options

The INCREMENTAL and ACCUMULATED options in the DMUTILITY DUMP command enable you to back up all data sets, sets, and subsets that have been modified since the last full, incremental, or accumulated dump. All structures that have the DUMPSTAMP option set in DASDL contain an extra word for storing dumpstamp information. The DMUTILITY program uses this dumpstamp information to determine which data blocks to include in the incremental dump. If a structure does not have the DUMPSTAMP option enabled, the whole structure is dumped.

A full database dump is required before an incremental dump can begin. The first dump must be a full dump after any of the following operations:

- Any REORGANIZATION run
- Initialization of the control file
- RECOVER UPDATE of the control file
- Any online garbage collection
- Initialization of a database structure

When incremental or accumulated dumps are directed to tape, an additional tape is needed to store the updated tape directory information. You need an additional tape because the tape directory cannot be overwritten after it has been written out to tape, and information pertaining to modified blocks is not available until the end of the incremental and accumulated dumps.
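The dumpstamp mechanism described above can be pictured with a short sketch. This is not DMUTILITY code; the structure representation and field names are assumptions made for illustration. It shows the two rules from the text: with DUMPSTAMP set, only blocks modified since the last dump are selected, and without it, the whole structure is dumped.

```python
# Illustrative sketch: select blocks for an incremental dump using a
# per-block "dumpstamp", as described above. Structures without the
# DUMPSTAMP option must be dumped whole.

def blocks_to_dump(structure, last_dump_time):
    """Return the block ids an incremental dump would copy."""
    if not structure["dumpstamp_enabled"]:
        # No dumpstamp word: the whole structure is dumped
        return [b["id"] for b in structure["blocks"]]
    # Only blocks modified since the last full/incremental/accumulated dump
    return [b["id"] for b in structure["blocks"]
            if b["dumpstamp"] > last_dump_time]

person = {
    "dumpstamp_enabled": True,
    "blocks": [{"id": 1, "dumpstamp": 100},
               {"id": 2, "dumpstamp": 250},
               {"id": 3, "dumpstamp": 90}],
}
print(blocks_to_dump(person, last_dump_time=120))  # only block 2 changed since
```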

NOCOMPARE Option

The NOCOMPARE option in the DMUTILITY DUMP command enables you to skip automatic checking of a newly created tape. This option delays verification of the tape until a more convenient time or allows verification of the tape on a different machine to make better use of resources. Use of the NOCOMPARE option is recorded on the dump tape. This option is not recommended without a subsequent DMUTILITY run to verify the newly created tape.

Exclude List Clause

The exclude list clause in the DMUTILITY DUMP command enables you to exclude one or more structures from a database dump. The exclude list can consist of one or more database files. This clause adds flexibility to the DUMP command and is especially useful when you want to exclude a small percentage of structures from a DUMP operation. When performing a full database dump with the intent of excluding one or more database files, you must exclude all structures related to the file or files. Related structures include data sets, sets, subsets, and embedded structures.

Serial Number Reporting

Serial number reporting includes information about the last tape dump used in the dump output listing. This feature enables administrators to display serial number information without having to use the DUMPDIR option.

ALL Option

The ALL option in the Visible DBS command STATUS MIX displays information about all the tasks related to the database. This information includes tasks and libraries that declare the database as well as tasks that do not declare the database but are currently attached to the database stack through a library or other mechanism.

Support for DLT and CTS9840 Tape Drives

The DLT and CTS9840 tape drives are enhanced tape device peripherals supported by Enterprise Database Server software. This feature reduces the number of tapes needed for backup and recovery operations.
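The exclude list rule above (every structure related to an excluded file must also be excluded) is easy to get wrong by hand. The following Python sketch is purely illustrative: the relationship map and structure names are hypothetical, standing in for the data set/set/subset relationships your DASDL description actually defines.

```python
# Illustrative sketch: expand an exclude list so that every structure
# related to an excluded file (sets, subsets, embedded structures) is
# also excluded, as the DUMP exclude list clause requires.
# The relationship map below is hypothetical.

RELATED = {
    "PERSON":  ["PERSON-SET", "PERSON-SUBSET"],
    "PROJECT": ["PROJECT-SET"],
}

def expand_excludes(excludes):
    """Return the exclude list with all related structures added."""
    full = set(excludes)
    for name in excludes:
        full.update(RELATED.get(name, []))
    return sorted(full)

print(expand_excludes(["PERSON"]))
```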
DMUTILITY QUIESCE and RESUME Commands

The DMUTILITY commands QUIESCE and RESUME enable you to create coherent online copies of a running database by temporarily suspending active users, flushing all data to disk, and waiting for a response to continue. It is during the final wait period that a mirrored copy of the database disks can be detached.

By using the QUIESCE and RESUME commands you can replicate one or more copies of a complete database, with each copy containing physically consistent data.

QUIESCE Command

The QUIESCE command performs the following tasks:

- Waits for current transactions to complete, while preventing any new transactions.
- Changes all access to database files to read-only. The database remains in a read-only state until a DMUTILITY RESUME command is executed.
- Flushes all modified data and audit buffers to disk.

RESUME Command

The RESUME command is the user's response to resume normal database access.

Storing the Dump

Introduction

You can dump the database to tape or disk, depending on the storage resources available at your site. In the database industry, tapes are most frequently used because they are a less expensive resource than the disk medium.

Dumping to Tape

When you dump to tape, you furnish information common to any disk-to-tape process. The information includes

- Tape name (1 node, up to 17 characters long)
- Cycle number
- Version number
- Workers
- Serial number
- Compression and noncompression
- Density
- SCRATCHPOOL option

Dumping to Disk

When you specify a dump to disk, you specify the

- File title for the entire dump
- Number of dump files into which the system should place the dump
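These dump options are ultimately combined into a single DMUTILITY parameter string, which is the approach the WFL jobs in the next topic take with their DUMPPARM variable. The following Python sketch of such string assembly is illustrative only; the helper name, defaults, and option values are assumptions, not a Unisys interface.

```python
# Illustrative sketch: assemble a DMUTILITY DUMP parameter string in the
# style of the DUMPPARM variable used by the WFL jobs in this section.
# Helper name and default option values are hypothetical.

def build_dumpparm(db, pack, tape, workers=2, tapes=2,
                   density="BPI38000", offline=False):
    """Compose a DMUTILITY parameter string for a tape dump."""
    kind = "OFFLINE DUMP" if offline else "DUMP"
    return (f"DB = *{db} ON {pack} "
            f"OPTIONS (WORKERS = {workers}) {kind} = TO {tape} "
            f"(TAPES = {tapes}, DENSITY = {density})")

parm = build_dumpparm("EMPLOYEEDB", "HR1", "EMPLOYEEDB250101B")
print(parm)
```

The resulting string would be passed to the DMUTILITY run, just as the WFL jobs pass DUMPPARM to RUN *SYSTEM/DMUTILITY.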

Backing Up with a WFL Job

You can place commands to back up a database in a WFL job and then simply run the WFL job when you need to take a dump. Following are two WFL jobs that initiate a backup of the EMPLOYEEDB database from disk to tape. One WFL job is for an online database dump, and the other is for an offline database dump.

WFL Job for Online Dump Backup

% EMPLOYEEDBBACKUP/JOB: Performs online backup of EMPLOYEEDB database
%
BEGIN JOB EMPLOYEEDBBACKUP/JOB;
JOBSUMMARY = UNCONDITIONAL;
STRING DUMPPARM, TAPENAME;
TASK T1, T2;
TAPENAME := "EMPLOYEEDB" & DATETIME(YYMMDD) & "A";

% BACKUP OTHER DATABASE RELATED FILES
DO BEGIN
  COPY DESCRIPTION/EMPLOYEEDB,
       DMSUPPORT/EMPLOYEEDB,
       RECONSTRUCT/EMPLOYEEDB,
       SOURCE/DASDL/EMPLOYEEDB
    FROM HR(PACK)
    TO #TAPENAME; [T1];

% DUMP DATABASE
DUMPPARM := "DB = *EMPLOYEEDB ON HR1 " &
            "OPTIONS (WORKERS = 2) DUMP = TO " &
            "EMPLOYEEDB" & DATETIME(YYMMDD) & "B " &
            "(TAPES = 2, DENSITY = BPI38000)";
DO BEGIN
  INITIALIZE (T2);
  RUN *SYSTEM/DMUTILITY (DUMPPARM); [T2];
  IF T2(TASKVALUE) = 0 THEN
    DISPLAY "** ERROR IN EMPLOYEEDB DATABASE BACKUP, " &
            "WILL RETRY **";
END UNTIL T2 IS COMPLETEDOK AND T2(TASKVALUE) NEQ 0;
IF T2(TASKVALUE) = 2 THEN
  BEGIN
    DISPLAY "** WARNINGS OCCURRED DURING EMPLOYEEDB " &
            "DATABASE BACKUP **";

    DISPLAY "** CHECK JOB SUMMARY FOR WARNING " &
            "MESSAGES **";
  END;
END JOB.

WFL Job for Offline Dump Backup

% EMPLOYEEDBBACKUP/JOB: Performs offline backup of EMPLOYEEDB database
%
BEGIN JOB EMPLOYEEDBBACKUP/JOB;
JOBSUMMARY = UNCONDITIONAL;
STRING DUMPPARM, TAPENAME;
TASK T1, T2;
TAPENAME := "EMPLOYEEDB" & DATETIME(YYMMDD) & "A";

% BACKUP OTHER DATABASE RELATED FILES
DO BEGIN
  COPY DESCRIPTION/EMPLOYEEDB,
       DMSUPPORT/EMPLOYEEDB,
       RECONSTRUCT/EMPLOYEEDB,
       SOURCE/DASDL/EMPLOYEEDB FROM HR(PACK),
       EMPLOYEEDB/= FROM HRAUDIT(PACK),   % BACKUP AUDITS
       EMPLOYEEDB/= FROM HR1AUDIT(PACK)   % BACKUP AUDITS
    TO #TAPENAME; [T1];

% DUMP DATABASE
DUMPPARM := "DB = *EMPLOYEEDB ON HR1 " &
            "OPTIONS (WORKERS = 2) OFFLINE DUMP = TO " &
            "EMPLOYEEDB" & DATETIME(YYMMDD) & "B " &
            "(TAPES = 2, DENSITY = BPI38000)";
DO BEGIN
  INITIALIZE (T2);
  RUN *SYSTEM/DMUTILITY (DUMPPARM); [T2];
  IF T2(TASKVALUE) = 0 THEN
    DISPLAY "** ERROR IN EMPLOYEEDB DATABASE BACKUP, " &
            "WILL RETRY **";
END UNTIL T2 IS COMPLETEDOK AND T2(TASKVALUE) NEQ 0;

IF T2(TASKVALUE) = 2 THEN
  BEGIN
    DISPLAY "** WARNINGS OCCURRED DURING EMPLOYEEDB " &
            "DATABASE BACKUP **";
    DISPLAY "** CHECK JOB SUMMARY FOR WARNING " &
            "MESSAGES **";
  END;
END JOB.

Database Activity During the Backup

The Deciding Factor

Activity on the database during a backup operation is determined primarily by whether the database is audited or unaudited.

Audited Database Activity

During the backup of an audited database, you can choose either of the following levels of database activity:

- Update and inquiry users can access the database (the default): an online dump. Only an audited database backup is a candidate for the online dump because the recovery of database updates requires an audit trail. An advantage of the online dump is that full access to the database can continue during the dump process. However, if you have a choice, do an online dump when user activity is low or sporadic. At that time, updates are few, and the dump completes more quickly. Subsequently, read-error row reconstructions using rows from the dump can also complete more quickly.
- Inquiry users only can access the database: an offline dump. Specifying the OFFLINE option of the DUMP command prevents update users from accessing the database during the backup.

Note: The usual meaning of the term offline is that the database or program is unavailable to all users. The use of the term offline for an offline dump is different; it means only that update users cannot access the database.

Advantages of an offline dump are that

- The database is in a consistent state throughout the backup process.
- The backup process is quicker than the online backup because the system is not processing updates.
- Reloading an offline dump can be quicker than reloading an online dump because audits do not need to be applied during the recovery.

Unaudited Database Activity

The backup of an unaudited database requires that no update users access the database. Therefore, when you enter the name of an unaudited database in the DUMP command, the system automatically processes an offline backup.

In general, if an unaudited database is interrupted while it is being updated, only one safe way exists to recover it: reload the database files and the control file from a dump, and reprocess the updates made since the time of the dump.

Performing Online Dumps

Your Actions Before an Online Dump

Before an online dump, ensure that

- A reorganization is not in progress. Starting an online dump when a reorganization is in progress results in a fatal error. The following message is displayed:

  ONLINE DUMP IS ILLEGAL WHEN REORGANIZATION IS IN PROGRESS

- You do not perform a disk stream dump if the volume of transactions performed against the database is unknown, volatile, or can grow beyond bounds. Online disk stream dumps can run into disk size limitations or disk resource contention problems.

DMUTILITY Actions That Begin an Online Dump

To begin an online dump, the DMUTILITY program opens the database for inquiry only, as though it were an inquiry-only application program. DMUTILITY bypasses the database guard file. This way of accessing the database prevents security errors from occurring when the online dump worker processes attempt to open the database.

DMUTILITY Actions During an Online Dump

DMUTILITY automatically copies the control file to the beginning of each tape reel or disk file of the dump.

Examples of Syntax for Online Dumps

Dumping the Entire Database to Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR OPTIONS (WORKERS = 3)
    DUMP = TO EMPLOYEEDB043097 (SCRATCHPOOL = SP4637,
    DENSITY = FMT36TRK, TAPES = 3)")

Starts a dump of the EMPLOYEEDB database to tapes chosen from scratchpool SP4637 with a density of FMT36TRK.

Dumping the Entire Database to Disk

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    DUMP = TO EMPLOYEEDBDUMP/043097 ON HR3")

Creates a disk stream dump, backing up the EMPLOYEEDB database to pack HR3.

Dumping Specific Files of a Data Set to Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR OPTIONS (FORWARD COMPARE)
    DUMP EMPLOYEEDB/PERSON/= TO TAPEA (DENSITY = BPI38000)")

Starts a dump of the EMPLOYEEDB PERSON data set and its associated sets to TAPEA with a BPI38000 density.

Dumping Rows to Disk

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    DUMP EMPLOYEEDB/PROJECT/DATA (ROW = 14-36) TO PARTIAL/DUMP ON HR2")

Backs up rows 14 through 36 of the PROJECT data set file of the EMPLOYEEDB database to pack HR2.

Dumping a Pack to Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    DUMP = (PACKNAME = HR1) TO TAPEPRIME")

Backs up the EMPLOYEEDB files on the HR1 pack to a tape called TAPEPRIME with default worker and density values.

Performing Offline Dumps

DMUTILITY Actions Before an Offline Dump

Before initiating an offline dump for an audited database, the DMUTILITY program

1. Waits for the programs that have the database open for update to complete processing.
2. Locks the control file.
3. Allows only inquiry users to access the database.

DMUTILITY Actions After an Offline Dump

After the successful completion of an offline dump, the DMUTILITY program unlocks the control file to allow update users to access the database.

Your Actions After an Offline Dump

After an offline dump, perform the following actions:

- Store a copy of the last audit file with the offline dump. A recovery with the dump might not succeed without the system having access to the audit file. In this instance, you can copy the audit file with library maintenance.
- If the dump does not complete successfully, unlock the control file by entering the following command:

  RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR CANCEL")

Examples of Syntax for Offline Dumps

Note: Offline dump syntax differs from online dump syntax in one way: the option OFFLINE precedes the term DUMP, as shown in the following examples.

Dumping the Entire Database to Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR OPTIONS (WORKERS = 3)
    OFFLINE DUMP = TO EMPLOYEEDB043097 (SCRATCHPOOL = SP4637,
    DENSITY = FMT36TRK, TAPES = 3)")

Starts a dump of the EMPLOYEEDB database to tapes chosen from scratchpool SP4637 with a density of FMT36TRK.

Dumping the Entire Database to Disk

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    OFFLINE DUMP = TO EMPLOYEEDBDUMP/043097 ON HR3")

Creates a disk stream dump, backing up the EMPLOYEEDB database to pack HR3.

Dumping Specific Files of a Data Set to Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR OPTIONS (FORWARD COMPARE)
    OFFLINE DUMP EMPLOYEEDB/PERSON/= TO TAPEA (DENSITY = BPI38000)")

Starts a dump of the EMPLOYEEDB PERSON data set and its associated sets to TAPEA with a BPI38000 density.

Dumping a Pack to Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    OFFLINE DUMP = (PACKNAME = HR1) TO TAPEPRIME")

Backs up the EMPLOYEEDB files on the HR1 pack to a tape called TAPEPRIME with default worker and density values.

Tasks to Be Performed on an Existing Dump

Verifying a Dump

Before using a dump on tape or disk for recovery purposes, you should use the VERIFYDUMP command to verify that the dump is free of

- Block CHECKSUM errors
- Block sequencing errors
- I/O errors

Verifying a Disk Dump

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    VERIFYDUMP EMPLOYEEDBDUMP/PERSON ON HR1")

Verifies a dump of the PERSON data set on pack HR1.

Verifying a Dump on a Quarter-Inch Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    VERIFYDUMP TAPEPRIME (DENSITY = BPI1250)")

Verifies EMPLOYEEDB dump files on TAPEPRIME. For a quarter-inch tape, you must specify a density of BPI1250 or FMTQIC1000. Other tapes do not require a tape density.

Copying or Duplicating a Dump

The Distinction

The COPYDUMP and DUPLICATEDUMP commands enable you to make a backup copy of a dump. Both commands include integrity checking in their processes.

- The COPYDUMP command copies the backup to the same or a different type of media.
- The DUPLICATEDUMP command duplicates the backup to the same type of media.

Examples of Syntax for Copying and Duplicating Dumps

Copying a Dump from Tape to Disk

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    OPTIONS (WORKERS = 4, DUMPDIR)
    COPYDUMP FROM TAPEPRIME (DENSITY = BPI1250)
    TO TAPEPRIMECPY ON HR2")

Copies the dump on TAPEPRIME to a file called TAPEPRIMECPY on pack HR2, and includes information about the copy of the dump in the dump tape directory. Because TAPEPRIME is a quarter-inch tape, a density of BPI1250 or FMTQIC1000 is required. Other tapes do not require a tape density.

Copying a Dump from Disk to Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    OPTIONS (FORWARD COMPARE, DUMPDIR)
    COPYDUMP FROM EMPLOYEEDB/PERSON/DUMP/3 ON HR2
    TO PERSONDUMP (SCRATCHPOOL = SP44667, COMPRESSED, DENSITY = FMT36TRK)")

Copies the dump EMPLOYEEDB/PERSON/DUMP/3 on the HR2 pack to a tape called PERSONDUMP, using a scratchpool called SP44667, and includes information about the copy of the dump in the dump directory on HR2.

Duplicating a Dump on Tape

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    OPTIONS (WORKERS = 10, FORWARD COMPARE, DUMPDIR)
    DUPLICATEDUMP FROM TAPEPRIME (DENSITY = BPI1250)
    TO TAPEPRIME (DENSITY = BPI1250)")

Duplicates the TAPEPRIME dump to another set of tapes called TAPEPRIME, and includes information about the copy of the dump in the dump directory on HR.

Duplicating a Dump on Disk

RUN $SYSTEM/DMUTILITY("DB = EMPLOYEEDB ON HR
    OPTIONS (DUMPDIR)
    DUPLICATEDUMP FROM EMPLOYEEDB043097 ON HR
    TO EMPLOYEEDB043097CPY ON HR1")

Duplicates the database dump EMPLOYEEDB043097 on pack HR to a dump called EMPLOYEEDB043097CPY on pack HR1, and includes information about the copy of the dump in the dump directory on HR.

Backing Up Audit Files

Reasons to Back Up Audit Files

The main reasons to back up audit files are to

- Archive the files for database recovery processes.
- Keep sufficient space for audit files on your system.

DASDL Options That Ensure Automatic Audit Backup

To back up audit files automatically, you can set either or both of the following DASDL options in the AUDIT TRAIL section of the DASDL source:

- The DUPLICATED option instructs DMUTILITY to produce a duplicate audit trail as audits are generated.
- The COPY TO, VERIFY, or QUICKCOPY TO option instructs the Accessroutines to initiate the COPYAUDIT program by way of a WFL job each time an audit file switch occurs. The COPYAUDIT program not only copies but also verifies the audit file and stores the copy to tape or disk.

Note: Do not use library maintenance commands to copy audit files, because these commands do not perform verification checks. In addition, any tape files generated by these commands are not directly usable by the database recovery processes. An exception: after an offline dump, you can use library maintenance to copy the audit file that might need to be reloaded with the dump. A rollback or rebuild recovery can then occur immediately.

Manual COPYAUDIT Operations

You can manually perform the following operations:

- Copy audit files between media and to the same media.
- Copy the secondary audit trail as the primary audit trail, or the primary audit trail as the secondary audit trail.
- Using the QUICKCOPY command with the APPEND option and with MAXAUDITS set, append audit files to QUICKCOPY tapes that already contain audit files.
- Verify the contents of audit files.
- Print or display an audit tape directory.

Encrypting Audit Files

Audit files can be encrypted when they are copied from disk to tape as part of a COPYAUDIT QUICKCOPY operation. Before you can perform COPYAUDIT tape encryption, your security administrator must establish machine encryption keys by using the Security Center MMC snap-in that runs on the security administrator's workstation. This workstation could be a separate Windows-based computer or, if you are using MCPvm systems, the Windows side of a ClearPath system. Security Center refers to these keys as tape encryption keys. See the Security Administration Guide and the Security Center Help for additional information about configuring, exporting, and importing tape encryption keys.

Software encryption and decryption affect the total time to transfer data to tape and result in increased processor usage. Performance is affected by tape drive throughput, the performance level of the MCP system, the performance level of the hardware performing the encryption, and the competing workloads. It is recommended that you introduce encryption to your database environment incrementally to ensure that adequate resources are available and to prevent encryption and decryption from having adverse effects on existing processes.

Encrypted audit files are automatically decrypted when copied to disk as part of COPYAUDIT QUICKCOPY operations.

Results of a COPYAUDIT Run

The COPYAUDIT program returns one of the following task values when it completes.

Task Value   Meaning and Action Required
0            The COPYAUDIT run ended abnormally. Investigate and correct the
             cause of the failure, and then rerun the COPYAUDIT program.
1            The COPYAUDIT run was successful.
2            The COPYAUDIT run completed, but a warning was issued. Check the
             warning and, if appropriate, correct the problem and rerun the
             COPYAUDIT program.
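A job that automates COPYAUDIT runs typically branches on these task values, much as the WFL backup jobs earlier in this section branch on the DMUTILITY task value. The following Python sketch is illustrative only (the function name and return messages are assumptions); it simply encodes the table above.

```python
# Illustrative sketch: act on the COPYAUDIT task values listed above
# (0 = abnormal end, 1 = success, 2 = completed with a warning).

def handle_copyaudit(task_value):
    """Return the operator action implied by a COPYAUDIT task value."""
    if task_value == 0:
        return "failed: investigate, correct the cause, and rerun COPYAUDIT"
    if task_value == 1:
        return "success"
    if task_value == 2:
        return "completed with warning: check it and rerun if appropriate"
    raise ValueError(f"unexpected COPYAUDIT task value {task_value}")

print(handle_copyaudit(1))
```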

When a COPYAUDIT Operation Cannot Complete

During a COPYAUDIT operation, an I/O error can occur while the Accessroutines is writing the audit file, or the program can detect corruption in an audit file. In either case, the system terminates the program, displays a diagnostic message, and lists the job DATABASE/WFL/COPYAUDIT in the waiting entries in the mix. You should dump the database as soon as possible.

Note: When you run COPYAUDIT, if you are not completely sure of what is happening and what you need to do about it, contact your Unisys Support Center immediately.

How Long to Keep Audit Files

Audit files should be kept until they are no longer a candidate for a database recovery. The length of time can vary with circumstances at each site. A good plan is to keep dumps and matching audit files for at least a month.

Managing Sectors on the Audit Pack

An ideal arrangement is to store database audit trails only on the audit pack. When it is not possible to dedicate an entire pack to audit trails, then for performance reasons, plan to make audit storage areas as large and as few as possible.

Differences Between QUICKCOPY and COPY Commands

The COPY and QUICKCOPY commands are alternative methods of copying an audit file. The syntax for both commands is similar. The QUICKCOPY command syntax differs from the COPY command syntax in the following ways:

- The QUICKCOPY keyword replaces the COPY keyword.
- The APPEND keyword is available to support the appending of audit files to existing audit tapes.
- The MAXFILESPERTAPE phrase is available to control the number of audit files that can be stored on any one tape.
- The audit file range phrase is available to enable more than one audit file to be copied or appended in a single COPYAUDIT run.
- The FROM and TO clauses are limited to allow only disk-to-tape or tape-to-disk copies; that is, the QUICKCOPY command cannot be used to copy audit files from tape to tape or from disk to disk.

Examples of Copying Audit Files

Initiating COPYAUDIT

RUN $SYSTEM/COPYAUDIT ("<COPYAUDIT statement>")

START DATABASE/WFL/COPYAUDIT ("<COPYAUDIT statement>")

Both of these commands start the COPYAUDIT program. The COPYAUDIT statement contains a version of the QUICKCOPY, COPY, DIRECTORY, or VERIFY command of the COPYAUDIT program.

QUICKCOPY Examples

Appending Files to an Existing Single- or Multiple-Reel Tape

RUN $SYSTEM/COPYAUDIT ("QUICKCOPY APPEND
    EMPLOYEEDB/AUDIT4 - EMPLOYEEDB/AUDIT7 ALL
    FROM PACK = HR1 TO TAPE (DENSITY = BPI38000)")

Copies audit file numbers 4 through 7 for the EMPLOYEEDB database to a tape named EMPLOYEEDB/AUDIT1 (the name of the first audit file on the tape), which already contains audit files 1 through 3.

Appending Files to an Existing Tape and Additional Tapes

RUN $SYSTEM/COPYAUDIT ("QUICKCOPY APPEND MAXFILESPERTAPE = 6
    EMPLOYEEDB/AUDIT4 - EMPLOYEEDB/AUDIT17 ALL
    FROM PACK = HR1 TO TAPE (DENSITY = BPI38000)")

Copies audit file numbers 4 through 17 for the EMPLOYEEDB database to a tape named EMPLOYEEDB/AUDIT1 (the name of the first audit file on the existing tape). Stores only six audit files on each tape.

Copying Files from Tape to Disk

RUN $SYSTEM/COPYAUDIT ("QUICKCOPY
    EMPLOYEEDB/AUDIT7 - EMPLOYEEDB/AUDIT9 ALL
    FROM TAPE TO PACK = HR2 CHECK")

Copies audit files 7 through 9 from tape to pack HR2, and checks the internal integrity of the audit file copy.

COPY Examples

Copying a File from Disk to Tape

RUN $SYSTEM/COPYAUDIT ("COPY EMPLOYEEDB/AUDIT4 ALL
    FROM PACK = HR1 TO TAPE (DENSITY = BPI38000)
    CHECK FORWARD COMPARE")

Copies audit file number 4 for the EMPLOYEEDB database to a tape named EMPLOYEEDB/AUDIT4 and uses the forward compare method to check the integrity of the audit file copy.

Copying a File from Tape to Disk

RUN $SYSTEM/COPYAUDIT ("COPY EMPLOYEEDB/AUDIT4 ALL
    FROM TAPE TO PACK = HR1 CHECK")

Copies audit file number 4 for the EMPLOYEEDB database from the tape named EMPLOYEEDB/AUDIT4 to a disk file of the same name on pack HR1, and checks the integrity of the audit file copy.

Backing Up Database-Related Files

To back up database-related files other than files backed up by the DUMP command, run the CANDE COPY command with the COMPARE or VERIFY option. For example:

COPY & COMPARE DMSUPPORT/EMPLOYEEDB,
    RECONSTRUCT/EMPLOYEEDB FROM <CODE FILE FAMILY>(PACK),
    DESCRIPTION/EMPLOYEEDB,
    <DATABASE DASDL SOURCE> FROM <SOURCE FAMILY>(PACK)
    TO EMPLOYEEDB043097B;
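The MAXFILESPERTAPE arithmetic in the QUICKCOPY APPEND example (files 4 through 17, six files per tape) can be sketched as simple chunking. This Python sketch is illustrative only and ignores any audit files already present on the first tape.

```python
# Illustrative sketch: distribute a range of audit files across tapes
# honoring a QUICKCOPY MAXFILESPERTAPE limit, as in the APPEND example
# above (files 4-17, six files per tape).

def tapes_needed(first, last, max_per_tape):
    """Return the audit file numbers placed on each tape, in order."""
    files = list(range(first, last + 1))
    return [files[i:i + max_per_tape]
            for i in range(0, len(files), max_per_tape)]

layout = tapes_needed(4, 17, 6)
print(len(layout))               # number of tapes used
print([len(t) for t in layout])  # files per tape
```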

Related Information Topics

For information about...       Refer to...
CANDE commands                 CANDE Operations Reference Manual
COPYAUDIT program              Enterprise Database Server Utilities Operations Guide
COPY command                   Enterprise Database Server Utilities Operations Guide
DMUTILITY dump commands        Enterprise Database Server Utilities Operations Guide
DUMP command and examples      Enterprise Database Server Utilities Operations Guide
Offline and online dumps       Performing Online Dumps and Performing Offline Dumps in this section; Enterprise Database Server Utilities Operations Guide
QUICKCOPY command              Enterprise Database Server Utilities Operations Guide
Tailored software              Enterprise Database Server Utilities Operations Guide; DASDL Reference Manual
WFL jobs                       WFL Made Simple; WFL Reference Manual


Section 8
Recovering the Database

In This Section

This section includes an overview of database recovery and information about

- Automatic recovery for audited databases
- Single transaction abort recovery
- Abort recovery
- Halt/load recovery
- Manual audited database recovery
- Reconstructing parts of a database
- Rebuilding a database
- Rolling back a database
- Recovering an unaudited database

Overview

Meaning of Recovering

Recovering a database means bringing it back up to date, ready for access, with complete and correct data. Enterprise Database Server has a strong database recovery capability. In some situations, Enterprise Database Server can recover the database automatically. At other times, you recover the database manually using Enterprise Database Server utilities and commands.

Recovering Database Files Using a Recent Backup

The recovery of databases under Enterprise Database Server differs for audited and unaudited databases. For an audited database, the audit trail enables recovery that can be

- Automatic or manual
- Partial or full

Because no audit trail exists for an unaudited database, the only safe action is manual recovery by

1. Reloading the database files and control file from an offline dump
2. Reprocessing the updates made since the time of the dump

Automatic Recovery for Audited Databases

The following table lists the recovery situations that the Accessroutines detects and the types of automatic recovery with which it responds.

Situation: The INDEPENDENTTRANS option is set for the database, and an application program cannot complete a transaction.
Recovery Type: Single transaction abort

Situation: The INDEPENDENTTRANS option is not set for the database, and an application program closes the database before completing a database update.
Recovery Type: Abort

Situation: A user makes the first database open request after the system halt/loads and memory has been lost.
Recovery Type: Halt/load

Automatic Single Transaction Abort Recovery

Introduction

Purpose

An automatic single transaction abort recovery occurs only when

- The INDEPENDENTTRANS option is set in the DASDL source file.
- An application program cannot complete a transaction.

A deadlock situation (two or more programs have locked records and are also attempting to lock records held by each other) is the most common reason that a program is not able to complete a transaction. The single transaction abort recovery process limits the impact of a problem in updating the database to a single transaction of a single application program.

Benefits

Setting the INDEPENDENTTRANS option to enable single transaction abort recovery

- Avoids the expense of time and resources of a database abort recovery
- Leaves the responsibility for rectifying a backed-out transaction with the application program and the end user
- Minimizes the involvement of the database administrator and operations personnel in an abort recovery process

Actions Within a Single Transaction Abort Recovery Process

A single transaction abort recovery process

1. Encounters a deadlock situation or other obstacle between the begin-transaction point and the end-transaction point.
2. Stops processing the transaction.
3. Backs out the transaction by reversing the updates of the transaction from the audit trail back to the begin-transaction point. If the transaction spans two or more audit files, the process
   a. Displays a task in the mix.
   b. Requires operator intervention for the location and identification of the previous audit file.
4. Begins processing the transaction that follows the aborted transaction.
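Single transaction abort recovery is enabled in the DASDL source file. The fragment below is a hedged sketch only: AUDIT and INDEPENDENTTRANS are the DASDL options named in this guide, but the surrounding declarations (the EMPLOYEE data set and its items) are illustrative assumptions, and the exact placement of the OPTIONS clause can vary; consult the DASDL Reference Manual for the authoritative syntax.

```
OPTIONS (AUDIT, INDEPENDENTTRANS);   % auditing plus per-transaction abort recovery

EMPLOYEE DATA SET                    % illustrative data set, not from this guide
(
    EMP-ID    NUMBER (6);
    EMP-NAME  ALPHA (30);
);
```

After compiling the DASDL source, a program that cannot complete a transaction causes only that one transaction to be backed out, rather than triggering a full abort recovery.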

Tracking Single Transaction Abort Recoveries

When the abort recovery process does not require steps 3a and 3b (see the preceding topic), the DBA and operations staff can track single transaction abort recoveries

- At run time, by using the Visible DBS STATUS command to mark that a program is in the process of a SINGLEABORT operation
- After the program has run, by looking for reversed audit images in the audit trail

The way that the application user keeps track of single transaction abort recoveries in an application program depends on how the program is set up. The program can automatically retry the transaction. If the program cannot solve a problem automatically, the program can prompt the end user for further input.

Automatic Abort Recovery

When an Abort Recovery Occurs

The automatic abort recovery occurs only when the INDEPENDENTTRANS option has not been set for the database in the DASDL source file. When an application program terminates or closes a database before it has completed an update transaction, the Accessroutines begin the abort recovery process when the last program leaves transaction state. The last program to leave transaction state processes a separate stack called ABORT/<database name>, which is used to perform the abort recovery.

Purpose

The abort recovery process returns the database to a logically consistent state. When a program fails to complete a transaction according to Enterprise Database Server constructs and rules, the transaction is interrupted between its BEGIN TRANSACTION statement and its END TRANSACTION statement, and the database is left in a logically inconsistent state.

Meaning of Logically Consistent

Logically consistent means that all of the expected information is present and available. If some information is missing or corrupt in one record, then when that record is accessed again, more inconsistency and corruption can result.
For example, suppose only part of your college application was entered into the college database because the system went down during the entry of your data. The incomplete information stored about you shows you as someone who inquired about the college, not as an applicant. Your application is not processed, and you cannot start classes.

Actions Performed During an Abort Recovery

The primary actions of an automatic abort recovery are as follows:

1. Notify programs accessing the database that they need to wait for the recovery process to complete.
2. Back out any partially completed transactions by applying audit-trail images to the database to restore it to a consistent state.
3. Pass restart information to the programs accessing the database so that those programs know the point at which they can resume processing.

The Accessroutines program performs the abort recovery process. The name of the process in the mix is ABORT/<database name>. The abort recovery process rolls back the transaction to a quiet point, a point at which no programs are in transaction state.

Figure 8-1 shows a flowchart of Accessroutines actions during a successful abort recovery.

Figure 8-1. Flowchart of Accessroutines Actions in the Abort Recovery Process

Monitoring an Abort Recovery

Introduction

Even though the abort recovery is an automatic process begun by the Accessroutines, events can occur during the process that might require your attention. Understanding these events can help you respond to them.

How to Interpret the Results of an Abort Recovery

At the completion of an abort recovery, the DMUTILITY program returns one of the task values listed in the following table. Depending on the result, you might have to perform further operations or recover the database manually.

Task Value   Meaning and Action Required
0            The abort recovery process ended abnormally. Investigate and correct the cause of the failure, and then recover the database manually.
1            The abort recovery process was successful.
2            The abort recovery process completed, but a warning was issued. Check the warning and, if appropriate, correct the problem and recover the database manually.

Abort Recovery Cannot Be Restarted

Because the abort recovery process starts automatically only in response to a given set of circumstances, it cannot restart itself or be restarted.

Successful Abort Recovery Messages

The job summary shows the following beginning and ending messages:

BOT <job#> (<usercode>)abort/<database name>
...
<job#> (<usercode>)<database name>/rowlockoutaudit REMOVED
EOT <job#> (<usercode>)abort/<database name>

ROWLOCKOUTAUDIT REMOVED means that the recovery process has removed the <database name>/rowlockoutaudit temporary file that held any I/O error information until that information was written to the audit file.

Events That Cause Abort Recoveries to Fail

The following table lists the causes of an unsuccessful abort recovery and the task or manual recovery operation you need to perform to bring the recovery to a successful conclusion.

Cause: The abort recovery process is discontinued.
Remedy: Perform a rebuild recovery.

Cause: The system halt/loads during the abort recovery process.
Remedy: Halt/load recovery reruns automatically.

Cause: The audit file necessary to the abort recovery process is corrupt or unavailable.
Remedy: Retry on a duplicate audit file. If the retry fails, perform a rebuild recovery.

Cause: The ROWLOCKOUTAUDIT file becomes corrupted or unavailable.
Remedy: Perform a rebuild recovery.

I/O Errors During an Abort Recovery

An abort recovery process can receive one or more I/O errors during the read and write tasks that the process performs. When an I/O error occurs, the ABORT/<database name> procedure performs these tasks in order:

1. Writes appropriate information about the error to the temporary work file, ROWLOCKOUTAUDIT.
2. Locks the record where the error occurred.
3. Writes any error information from the ROWLOCKOUTAUDIT work file to the audit file.
4. Removes the ROWLOCKOUTAUDIT work file.

After the abort recovery completes, use the information on locked rows in the audit file to reconstruct those rows manually.

Enterprise Database Server Errors During an Abort Recovery

When abort recovery updates the restart data set for each active application, the Enterprise Database Server software occasionally issues an error. Common errors fall into the following categories:

DUPLICATES: A program attempts to store a duplicate key item in a structure where duplicates are not allowed.

LIMITERROR: A program attempts an action that exceeds limits set in DASDL. An example is an attempt to store the 101st record in a structure whose POPULATION value is 100.

Enterprise Database Server Error Messages

The message the abort recovery process displays at the ODT is formulated as

<job#> DISPLAY: RESTART DS ERR : CAT nn SUBCAT nnn

CAT nn is the category number of the error, and SUBCAT nnn is the subcategory number of the error. The category and subcategory values enable you to identify the error in a list of Enterprise Database Server exceptions and errors.

Automatic Halt/Load Recovery

When a Halt/Load Recovery Occurs

Enterprise Database Server initiates a halt/load recovery upon the first database open request after one of the following events occurs:

- The host system fails.
- The operator discontinues the database.
- A fatal error occurs in Enterprise Database Server software.
- An abort recovery fails to complete successfully.

Purpose

The halt/load recovery returns the database to a logically and physically consistent state. Enterprise Database Server has strong internal guards against allowing any logical or physical database inconsistency. Logical consistency means that the data relationships within the database continue to reflect the logical model of the database. Physical consistency means that all the information that should be in the physical data files is represented correctly in those files.

Actions Performed During a Halt/Load Recovery

The primary actions of an automatic halt/load recovery are as follows:

1. Notify programs accessing the database that they need to wait for the recovery process to complete.
2. Back out any partially completed transactions by applying audit-trail images to the database to restore it to a consistent state.
3. Pass restart information to the programs accessing the database so that those programs know the point at which they can resume processing.

The DMRECOVERY program performs the halt/load recovery process. The name of the process in the mix is SYSTEM/DMRECOVERY.

Figure 8-2 shows a flowchart of the order of actions by Enterprise Database Server software and application programs during a successful halt/load recovery.

Figure 8-2. Flowchart of the Halt/Load Recovery Process

Monitoring a Halt/Load Recovery

Introduction

Even though the halt/load recovery is an automatic process begun by the Accessroutines, errors can occur during the process that require your attention. Therefore, you need to be aware of the errors that are possible and how to respond if they occur.

Halt/Load Recovery Cannot Be Restarted

Because the halt/load recovery process starts automatically in response to a given set of circumstances, it cannot restart itself or be restarted at the point where it is interrupted.

Manually Starting a Halt/Load Recovery

You can, however, manually start the halt/load recovery process from the beginning by running SYSTEM/DMRECOVERY with the following command:

RUN $SYSTEM/DMRECOVERY ("DB = <database name> ON <control file family>")

Successful Halt/Load Recovery Messages

When the halt/load recovery completes successfully, the job mix shows the following beginning and ending messages:

BOT XXXX (<usercode>)system/dmrecovery
...
XXXX (<usercode>)<database name>/recoveryinfo REMOVED
EOT XXXX (<usercode>)system/dmrecovery

RECOVERYINFO REMOVED means that the recovery process has deleted the <database name>/recoveryinfo temporary work file and any other temporary files necessary for the recovery.

Events That Cause Halt/Load Recoveries to Fail

When a halt/load recovery fails, the Accessroutines sends diagnostic information to the printer. The following table lists the causes of an unsuccessful halt/load recovery and the task or manual recovery operation you need to perform to bring the recovery to a successful conclusion.

Cause: The halt/load recovery process is discontinued.
Remedy: Perform a manual halt/load recovery.

Cause: The system halt/loads during the halt/load recovery process.
Remedy: Perform a manual halt/load recovery.

Cause: The audit file necessary to the halt/load process is corrupt or unavailable.
Remedy: Retry on a duplicate audit file. If the retry fails, perform a rebuild recovery.

Cause: The RECOVERYINFO file becomes corrupted or unavailable.
Remedy: Perform a manual halt/load recovery.

Cause: The ROWLOCKOUTAUDIT file becomes corrupted or unavailable.
Remedy: Perform a manual halt/load recovery.

I/O Errors During a Halt/Load Recovery

A halt/load recovery can receive an I/O error during any of the read and write tasks that the process performs. When an I/O error occurs, SYSTEM/DMRECOVERY writes appropriate information about the error to the temporary work file, ROWLOCKOUTAUDIT, and locks the area where the error occurred.

If the REAPPLYCOMPLETED option is not set, SYSTEM/DMRECOVERY does not remove the ROWLOCKOUTAUDIT file after the recovery completes. You can use row reconstruction to repair the locked area.

If the REAPPLYCOMPLETED option is set and SYSTEM/DMRECOVERY encounters an I/O error while completed transactions are being processed, SYSTEM/DMRECOVERY fails with a fatal database error. If an I/O error occurs while the REAPPLYCOMPLETED option is set in a Remote Database Backup environment, Tracker fails with a fatal database error. In both cases, the ROWLOCKOUTAUDIT file is removed. You can use row reconstruction to repair the locked area, and then run halt/load recovery. The database is locked until both recovery processes are complete.
If the database cannot be successfully recovered, you must rebuild the database.

Enterprise Database Server Errors During a Halt/Load Recovery

When a halt/load recovery updates the restart data set for each active application, the Enterprise Database Server software occasionally issues an error. Frequent errors are DUPLICATES or LIMITERROR. The message the halt/load recovery displays at the ODT is

<job#> DISPLAY: RESTART DS ERR : CAT nn SUBCAT nnn

The category and subcategory values enable you to identify the error in a list of Enterprise Database Server exceptions and errors.

Manual Recovery for Audited Databases

Introduction

For an audited database, problems can arise that do not fit the circumstances of automatic abort or halt/load recovery operations. Enterprise Database Server tries to notify you of problems, but it is up to you to notice the messages and reports and to start the appropriate form of recovery to correct the problem.

Note: If you have the slightest doubt about the best course of action to take to recover a database, call the Unisys Support Center first.

When to Recover a Database Manually

The following table explains the circumstances under which you recover a database manually.

Situation: An automatic recovery process fails.
Explanation: An abort or a halt/load recovery ended its task with a task value of 0 (zero).
Type of Recovery: Reconstruct the damaged rows or files, or rebuild the database.

Situation: I/O read or write software errors occur.
Explanation: The Accessroutines first handles I/O read and write errors and locks the records where the errors were found.
Type of Recovery: Reconstruct the damaged rows or files.

Situation: I/O read or write errors occur because of malfunctioning hardware.
Explanation: A disk crashes, a controller malfunctions, or errors occur in memory.
Type of Recovery: Repair the hardware. Reconstruct the damaged rows or files.

Situation: A pack is lost.
Explanation: A pack crashes or several packs crash, which makes database files on those packs unavailable.
Type of Recovery: Issue the RC (Reconfigure Disk) system command using the same family index number and the KEEP option. Reconstruct the unavailable files.

Situation: A pack directory is lost.
Explanation: The directory of a pack becomes lost or corrupted.
Type of Recovery: Rebuild the database. If you use duplicate directories, remove the old base pack with the DD (Directory Duplicate minus) system command, and reconstruct the family index.

Situation: An audit file becomes corrupted.
Explanation: The process that uses the audit has failed, and the state of the database is in question.
Type of Recovery: Retry the process with the duplicate audit file. If the retry fails, rebuild the database.

Situation: Enterprise Database Server software or an application program processes incorrectly.
Explanation: Through incorrect processing, the database has lost its integrity.
Type of Recovery: Rebuild the database. Roll back the database to a time when it was consistent.

Recovering All or Part of a Database

As the previous table shows, sometimes the entire database requires recovery (rebuild and rollback operations), and at other times you can target recovery to specific rows or files within the database (reconstruct operation).

Database Availability During Manual Recovery Operations

During a reconstruction, the database can be available to both updating and inquiring application programs. Because of the nature of the rebuild and rollback operations, the database is not available to any application programs for any purpose until those operations have completed successfully.

Controlling Manual Recovery

The NOZIP Option

Within each type of recovery operation, the DMUTILITY software performs two primary operations, as explained in the following list:

1. Recovery preparation, which accomplishes the following tasks:
   a. Prepares a list of needed recovery changes and generates a report.
   b. Identifies the files necessary to do the recovery.
   c. Identifies checkpoints during the recovery from which the recovery can be restarted if necessary.
   d. Incorporates all recovery information into a file that is used by the recovery program.

2. Actual recovery, which accomplishes the following tasks:
   a. Runs the recovery program.
   b. Makes the actual database changes.
   c. Prints a report of the recovery accomplishments.
   d. Performs other tasks, depending on the type of recovery.

You can control whether step 2 follows step 1 automatically or whether step 2 occurs some time after the conclusion of step 1. By default, step 2 follows step 1 automatically: DMUTILITY immediately runs (zips) the recovery program. To obtain this behavior, omit the NOZIP option from the recovery command syntax.

When you include the NOZIP option in the recovery syntax, you instruct DMUTILITY to create the file in step 1 and stop processing until you enter a command that runs the recovery program to make the actual recovery changes.

Reasons to Delay Recovery Changes

Occasions on which you might want to delay the actual recovery changes include the following:

- The list of required changes that DMUTILITY prepares in step 1 needs to be evaluated more thoroughly.
- More system resources are available to handle the recovery later in the day.

You can include the syntax for running the recovery program in a WFL job, or you can issue the command directly from CANDE or an ODT when you are ready. A few examples in this section use the NOZIP option and provide the syntax for running the recovery program.
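As a sketch of this two-step flow, the following pair of commands separates preparation from the actual recovery. Both command forms appear elsewhere in this section; EMPLOYEEDB is the sample database used throughout this guide, and the particular RECOVER clause shown is only one example of a recovery that NOZIP can delay.

```
Step 1 (prepare only; NOZIP stops DMUTILITY before the recovery program runs):

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB OPTIONS (NOZIP) RECOVER (ROWS USING BACKUP) = (ROWLOCK = READERROR, LOCKEDROW) FROM CURRENTDUMP")

Step 2 (later, when you are ready, apply the actual database changes):

RUN RECONSTRUCT/EMPLOYEEDB
```

Between the two steps you can review the report that DMUTILITY prints in step 1 before committing to any database changes.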

Restarting Manual Recovery Operations

At times, a system or software error can interrupt the processing of the tasks in step 2. When such an error occurs, you correct the problem that caused the error and then rerun the recovery program used in step 2. The recovery specifications that DMUTILITY creates in step 1 include periodic checkpoints from which the program can be restarted if necessary. Therefore, when you rerun the recovery program, the program begins from the first checkpoint prior to the occurrence of the error.

Examining the Final Report from a Manual Recovery

At the completion of every manual recovery operation, the utility involved prints a report about the recovery accomplishments. The report identifies

- Rows and areas where reconstruction succeeded or failed
- Update tasks in the mix at the stopping point of the rebuild and rollback operations
- The stopping point in the audit file, including the date and time
- Any fatal error encountered (if the recovery is interrupted)

You have not completed a recovery operation until you have checked the report. The report is the only way the system communicates exactly what the recovery did to your database files, and it is critical for future database processing that you understand what was done or not done.

Error Handling During a Manual Recovery Operation

Your handling of errors during a manual recovery operation depends on the nature and severity of the error.

I/O errors: Handle I/O errors in much the same way as you handle them during the halt/load recovery described earlier in this section.

Audit errors: If the operation reports an audit error, retry the recovery operation with the secondary audit copy or another good copy of the primary audit. If you do not have a good duplicate copy of the audit trail, you cannot recover the database; some data might be lost. If possible, have application programs reprocess their transactions since the most recent good database dump.
Bad dump tape errors: If a tape does not work when you retry it, try another drive if possible. If the tape is still not usable, load the previous set of dump files.

Reconstructing Parts of a Database

Introduction

When smaller portions of the database are damaged, you can reconstruct just those portions, for example:

- Specific rows (to correct read or write errors)
- Database files on a pack in a multipack family

Two Methods of Reconstruction

To perform a reconstruction, you use one of the following methods, depending on the database components that need reconstruction:

- Reconstruction from a backup dump. Use this method for read or write errors or a pack crash. The files required are those in the most recent DMUTILITY dump of the portions of the database to be recovered.
- Reconstruction using an audit file only. Use this method for write errors only. The required file is the audit file in which the write errors were recorded.

Database Availability and Temporary Reconstruction Files

Application programs can update the database while you run the reconstruction. Therefore, the reconstruct recovery software generates temporary work files during the reconstruction and removes them by the end of the reconstruction. By default, these files have the name

<database name>/reconstruct/<str #>/<structure name>

Note: You can observe these file names in the mix. However, do not attempt to access or manage these files in any way.

Speeding Up a Reconstruction from Tape

When you use a dump that resides on three or more tapes, you can speed up the recovery process by a large factor by including cycle and version syntax. This syntax ensures that the system accesses and reads the minimum amount of information necessary for the recovery. To use the cycle and version syntax, perform the following steps.

Step 1: Determine the cycle and version number of the last reel of the dump from the dump report.

Step 2: Mount the last reel of the dump. DMUTILITY reads the dump directory on the last reel to determine which tapes to read and how much of each tape to read.

Sample Cycle and Version Syntax

FROM CURRENTDUMP (CYCLE = 1, VERSION = 5)

This syntax fragment reflects a dump created by one worker on five tapes.

Reconstructing from a Backup Dump

Backup Dump Initiation

The following examples show syntax for initiating a reconstruction operation.

Initiation Syntax

This syntax identifies for correction all read and write errors for the sample database:

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB RECOVER (ROWS USING BACKUP) = (ROWLOCK = READERROR, LOCKEDROW) FROM CURRENTDUMP(CYCLE = 1, VERSION = 5)")

This syntax identifies for correction corrupted family indexes 2 and 3:

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB RECOVER (ROWS USING BACKUP) = (FAMILYINDEX = 2,3) FROM CURRENTDUMP")

This syntax identifies for restoration the EMPLOYEEDB/FAMILY/DATA file that had been removed by mistake:

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB OPTIONS (NOZIP) RECOVER (ROWS USING BACKUP) = EMPLOYEEDB/FAMILY/DATA (RESTORE) FROM CURRENTDUMP")

Initiation Process

The following list shows the order of DMUTILITY actions during a successful initiation. The name of the process in the mix is SYSTEM/DMUTILITY.

1. Checks the syntax and analyzes the reconstruction initiation request.
2. Identifies database areas (rows) that contain errors to be processed.
3. Copies the database areas needed from the backup dump.

4. Constructs the <database name>/reconstructinfo parameter file for SYSTEM/DMDATARECOVERY.
5. Prints a report of the areas to be reconstructed.
6. Runs the program RECONSTRUCT/<database name> only if the NOZIP option is not specified.

Backup Dump Reconstruction

Use the reconstruction syntax when you

- Set the NOZIP option.
- Restart the reconstruction operation.

Reconstruction Syntax

RUN RECONSTRUCT/EMPLOYEEDB

Runs RECONSTRUCT/EMPLOYEEDB, which initiates SYSTEM/DMDATARECOVERY.

Reconstruction Process

The following list shows the order of DMDATARECOVERY actions during a successful reconstruction. The name of the process in the mix is SYSTEM/DMDATARECOVERY.

1. Opens the audit file that was in use at the time of the earliest dump identified in the initiation process.
2. Reconstructs the database forward until the end of the audit trail is reached.
3. Generates a report of the areas reconstructed in the database.

Reconstruction Using an Audit File Only

Audit File Only Initiation

The following examples show the syntax for initiating a reconstruction operation from an audit file only.

Initiation Syntax

This syntax identifies for correction all write errors in the last hour and a half:

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB RECOVER (ROWS USING AUDIT ONLY, LIMIT = * - 1:30)")

This syntax identifies for correction all write errors from the present back through the time of the creation of two earlier audit files:

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB OPTIONS (NOZIP) RECOVER (ROWS USING AUDIT ONLY, LIMIT = * - 2 AUDIT FILES)")

This syntax identifies for correction all write errors from the present back through 12 control points:

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB RECOVER (ROWS USING AUDIT ONLY, LIMIT = 12 CONTROL POINTS)")

Initiation Process

The following list shows the order of DMUTILITY actions during a successful initiation. The name of the process in the mix is SYSTEM/DMUTILITY.

1. Checks the syntax and analyzes the reconstruction initiation request.
2. Constructs the <database name>/reconstructinfo parameter file for SYSTEM/DMDATARECOVERY.
3. Prints a report of the areas in which reconstruction could be needed.
4. Runs the program RECONSTRUCT/<database name> only if the NOZIP option is not specified.

Audit File Only Reconstruction

Use the reconstruction syntax when you

- Set the NOZIP option.
- Restart the reconstruction operation.

Reconstruction Syntax

RUN RECONSTRUCT/EMPLOYEEDB

Runs RECONSTRUCT/EMPLOYEEDB, which initiates SYSTEM/DMDATARECOVERY.

Reconstruction Process

The following list shows the order of DMDATARECOVERY actions during a successful reconstruction. The name of the process in the mix is SYSTEM/DMDATARECOVERY.

1. Opens the audit file that is in use.
2. Finds the write error records in the audit trail and, from these records, identifies which blocks in the areas need to be recovered.
3. Reconstructs the database forward until the end of the audit trail is reached.
4. Generates a report of the areas reconstructed in the database.

Reconstructing Rows with the Quickfix Process

When you reconstruct rows using the audit file only, you can use a process called Quickfix. During this process, you do not need to load the rows from a backup dump.

Quickfix recovers only locked rows (rows having write errors) and reduces the time required for row recovery. However, Quickfix might not always be able to reconstruct all locked rows.

Quickfix begins by scanning the audit in the reverse direction, starting at the current end of the audit. You specify a limit to this reverse scan by supplying a value for the <limits> construct. The backward scan of the audit stops short of the specified limit if the recoverability status of all locked rows is determined before the limit is reached. At the end of the reverse scan, DMDATARECOVERY determines whether any rows can be reconstructed. If so, it reverses direction in the audit and performs a normal row recovery.

Reconstructing Rows with a WFL Job

You can use a WFL job to reconstruct rows. First, you can run a job that lists areas that are locked out or that have read errors. Then you can run a job to accomplish one of the following tasks:

- Reconstruct rows having both read and write errors (the most commonly used operation).
- Reconstruct rows having read errors only.
- Reconstruct rows having write errors only.
- Reconstruct rows with the Quickfix process.

Sample WFL Reconstruction Jobs Using an Audit File Only

Reconstructing Areas That Are Locked Out or Have Read Errors

BEGIN JOB RECOVERDB;
TASK UTILTASK;
RUN SYSTEM/DMUTILITY ("DB = EMPLOYEEDB " &
    "RECOVER (ROWS USING BACKUP) = " &
    "(ROWLOCK = READERROR) FROM AUG06DUMP") [UTILTASK];
IF UTILTASK(TASKVALUE) = 0 THEN
    ABORT "UTILITY ERROR"
ELSE
    IF UTILTASK(TASKVALUE) = 2 THEN
        DISPLAY "UTILITY WARNING";
END JOB.

Produces a list of all rows containing read and write errors.

Reconstructing Areas with Read and Write Errors

BEGIN JOB RECOVERDB;
TASK UTILTASK;
RUN SYSTEM/DMUTILITY ("DB = EMPLOYEEDB " &
    "RECOVER (ROWS USING BACKUP) = (ROWLOCK = LOCKEDROW) " &
    "(ROWLOCK = READERROR) FROM CURRENTDUMP") [UTILTASK];
IF UTILTASK(TASKVALUE) = 0 THEN
    ABORT "UTILITY ERROR"
ELSE IF UTILTASK(TASKVALUE) = 2 THEN
    DISPLAY "UTILITY WARNING";
END JOB.

Performs a row recovery for all read and write errors with DMUTILITY and RECONSTRUCT running together in the same job. Because the NOZIP option is not specified, the RECONSTRUCT/EMPLOYEEDB program runs automatically after all the rows are loaded.

Reconstructing Areas with Write Errors

BEGIN JOB RECOVERDB;
TASK UTILTASK;
RUN SYSTEM/DMUTILITY ("DB = EMPLOYEEDB " &
    "RECOVER (ROWS USING BACKUP) = (ROWLOCK = LOCKEDROW)" &
    " FROM CURRENTDUMP") [UTILTASK];
IF UTILTASK(TASKVALUE) = 0 THEN
    ABORT "UTILITY ERROR"
ELSE IF UTILTASK(TASKVALUE) = 2 THEN
    DISPLAY "UTILITY WARNING";
END JOB.

Performs a row recovery for all write errors with DMUTILITY and RECONSTRUCT running together in the same job. Because the NOZIP option is not specified, the RECONSTRUCT/EMPLOYEEDB program runs automatically after all the rows are loaded.

Reconstructing Areas with Read Errors

BEGIN JOB RECOVERDB;
TASK UTILTASK;
RUN SYSTEM/DMUTILITY ("DB = EMPLOYEEDB " &
    "RECOVER (ROWS USING BACKUP) = " &
    "(ROWLOCK = READERROR) FROM AUG06DUMP") [UTILTASK];
IF UTILTASK(TASKVALUE) = 0 THEN
    ABORT "UTILITY ERROR"
ELSE IF UTILTASK(TASKVALUE) = 2 THEN
    DISPLAY "UTILITY WARNING";
END JOB.

Performs a row recovery for all read errors with DMUTILITY and RECONSTRUCT running together in the same job. Because the NOZIP option is not specified, the RECONSTRUCT/EMPLOYEEDB program runs automatically after all the rows are loaded.

Quickfix Reconstruction

BEGIN JOB RECOVERDB;
TASK UTILTASK;
RUN SYSTEM/DMUTILITY ("DB = EMPLOYEEDB OPTIONS (NOZIP) " &
    "RECOVER (ROWS USING AUDIT ONLY, LIMIT = * - 1:30)") [UTILTASK];
IF UTILTASK(TASKVALUE) = 0 THEN
    ABORT "UTILITY ERROR"
ELSE IF UTILTASK(TASKVALUE) = 2 THEN
    DISPLAY "UTILITY WARNING";
END JOB.

Performs a Quickfix row recovery.

Rebuilding a Database

Purpose

Rebuilding a database completely replaces the current database with an earlier version of the database using the following files:

- A complete database dump dated earlier than the events that have made the current database unusable
- All audit files created during and since the start time of the dump

When to Rebuild

Use rebuild recovery when events have occurred that result in

- Removal of all or part of the database
- Complete or major destruction of the database
- Corruption of the audit trail needed for a halt/load recovery or a reconstruction
- Incorrect processing by an application program

Before You Begin

Before you perform a rebuild operation, ensure that

- For an online dump, the endpoint is greater than the end of the dump.
- You specify a date and time with the BOJ/EOJ syntax to prevent a runaway rebuild.

Rebuild Initiation

The following examples show the syntax for initiating a rebuild operation.

Initiation Syntax

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB RECOVER (REBUILD THRU AUDIT 326) FROM CURRENTDUMP")

Specifies that the database be rebuilt from the dump through the end of audit file number 326.

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB OPTIONS (NOZIP) RECOVER (REBUILD TO BOJ OF 1135/1136 ON AUG 5 AT 10:00) FROM AUG04DUMP, AUG03DUMP")

Specifies that the database be rebuilt from the two dumps through the beginning of job number 1135/1136 on August 5.

Initiation Process

The following list shows the order of DMUTILITY actions during a successful initiation. The name for the process in the mix is SYSTEM/DMUTILITY.

1. Checks the syntax and analyzes the rebuild initiation request.
2. Loads the database from backup dumps.
3. Constructs the <database name>/rebuildinfo parameter file for SYSTEM/DMRECOVERY.
4. Prints a report of the areas to be rebuilt.
5. Runs the program SYSTEM/DMRECOVERY when the NOZIP option is not specified.

Examining the DMUTILITY Report

If the NOZIP option is specified, examine the report printed in step 4 before running the SYSTEM/DMRECOVERY program to make sure that the proper parameters were specified. The report contains

- The syntax of the rebuild request
- The database files that are to be reloaded from the list of dump tapes you provided

Rebuild Syntax

RUN $SYSTEM/DMRECOVERY ("DB = EMPLOYEEDB")

Runs SYSTEM/DMRECOVERY.
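Putting the initiation and rebuild steps together, a NOZIP rebuild is a two-step sequence. This sketch simply combines the example commands shown above; the audit file number and dump name are the illustrative values from those examples.

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB OPTIONS (NOZIP) RECOVER (REBUILD THRU AUDIT 326) FROM CURRENTDUMP")
    % Examine the printed DMUTILITY report, then start the rebuild:
RUN $SYSTEM/DMRECOVERY ("DB = EMPLOYEEDB")

Because NOZIP is specified, SYSTEM/DMRECOVERY does not start automatically; you control when the rebuild proper begins.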

Rebuild Process

The following list shows the order of DMRECOVERY actions during a successful rebuild. The name for the process in the mix is SYSTEM/DMRECOVERY.

1. Opens the audit file that was in use at the time of the earliest dump identified in the initiation process.
2. Rebuilds the database files by applying the afterimages from all audit files created since the time of the dump up to the stopping condition.
3. Checks the validity of the stopping condition.
4. Prints a report of programs rebuilt into the database and in progress at the stopping point.
5. Initiates a halt/load process to correctly set the location of the audit and to inform all update users of the rebuild.

Rolling Back a Database

Purpose

Rolling back a database undoes the effects of a program or series of programs on all current database files. Beginning with these files, rollback recovery moves the database backward in time to a specific stopping point.

Rollback is the reverse of the rebuild process. Rebuild recovers transactions forward from a certain point in the audit file up to the present. Rollback begins at the present and undoes transactions backward through a designated point in an earlier audit file.

The files required for a rollback operation are

- The current database files
- All audit files created from the present back through the stopping point

When to Roll Back the Database

Rollback recovery works when the database is intact but a program or programs have processed data incorrectly. Rollback recovery can be a speedier alternative to a rebuild.

To be effective, rollback requires undamaged database files, so you cannot perform a rollback to correct database damage.

Database Availability During a Rollback

By the nature of the rollback operation, the database is not available to any application programs for any purpose during the rollback.

Before You Begin

Before you perform a rollback operation, ensure that

- For an online dump, the endpoint is greater than the end of the dump.
- You specify a date and time with the BOJ/EOJ syntax to prevent a runaway rollback.

Rollback Initiation

The following examples show syntax for initiating a rollback operation.

Initiation Syntax

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB RECOVER (ROLLBACK THRU AUDIT 119)")

Specifies that the database be rolled back through the end of audit file 119.

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB OPTIONS (NOZIP) RECOVER (ROLLBACK TO BOJ OF 1135/1136 ON AUG 5 AT 10:00)")

Specifies that the database be rolled back to the beginning of job number 1135/1136 on August 5.

Initiation Process

The following list shows the order of DMUTILITY actions during a successful initiation. The name for the process in the mix is SYSTEM/DMUTILITY.

1. Checks the syntax and analyzes the recover initiation request.
2. Constructs the <database name>/rollbackinfo parameter file for SYSTEM/DMRECOVERY.
3. Runs the program SYSTEM/DMRECOVERY when the NOZIP option is not specified.

Rollback Recovery

Use the rollback syntax when you

- Set the NOZIP option.
- Restart the rollback operation.

Rollback Syntax

RUN $SYSTEM/DMRECOVERY ("DB = EMPLOYEEDB")

Runs SYSTEM/DMRECOVERY.

Rollback Process

The following list shows the order of DMRECOVERY actions during a successful rollback. The name for the process in the mix is SYSTEM/DMRECOVERY.

1. Performs a halt/load recovery to make sure the database is consistent.
2. Rolls back the database files by applying the beforeimages from the audit files backward until the stopping condition is reached or until an error in the audit occurs.
3. Performs an abort recovery to make the database logically consistent and to provide restart information to application programs.
4. Prints a report of fully and partially backed-out jobs.

Recovering an Unaudited Database

Introduction

After an unaudited database has been interrupted, the only safe way to recover it is to replace the current database with the most recent dump and then restart the update program.

Reloading the Database Files

To reload the database files, you must have dumped the database before starting an update program.

Loading Database Files from a Dump

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB COPY = FROM EMPLOYEEDBAUG09")

Loads database files for the EMPLOYEEDB database from the EMPLOYEEDBAUG09 dump.

Reprocessing Updates

RUN $<program name>

START <WFL job name>

Either command reprocesses application updates for the EMPLOYEEDB database.

Related Information Topics

For information about...                                Refer to...

DASDL source file options                               DASDL Reference Manual
DD (Directory Duplicate minus) system command           System Commands Reference Manual
DS (Discontinue) system command                         System Commands Reference Manual
Enterprise Database Server exception and error lists    Enterprise Database Server Application Programming
                                                        Guide; Enterprise Database Server Interpretive
                                                        Interface Programming Manual
I/O error information                                   Enterprise Database Server Utilities Operations Guide
Recovery (all forms)                                    Enterprise Database Server Utilities Operations Guide

Section 9
Monitoring a Database

In This Section

This section includes information about

- The purpose of monitoring a database
- General database monitoring tasks
- Certifying the consistency of database structures
- Analyzing logical and physical structures
- Acquiring database status and performance statistics

Monitoring the Database

Reasons to Monitor the Database

You might be wondering: With all the Enterprise Database Server options that internally check what is going on inside the database, why would I need to perform any extra surveillance to be sure that the database is sound and running all right?

Internal system checks can be only relatively effective, not absolutely effective, because of how database transactions take place and because of the variety of ways to process application programs. The DBA needs to check regularly to be sure no program or process is functioning in a way that corrupts data in the database or degrades system performance.

Specific Monitoring Purposes

The following table lists the purposes for which you can monitor an Enterprise Database Server database and the Enterprise Database Server software programs that perform each monitoring task.

Purpose                                                 Software

Certify the consistency and integrity of files          DBCERTIFICATION
and structures.
Analyze logical and physical structures.                dbatools Analyzer
Produce database status and performance statistics      dbatools Monitor
information as a way to check on, and possibly
change, database options and parameters.

Several DMUTILITY statements enable you to perform general monitoring tasks that help you to achieve your monitoring goals.

General Database Monitoring Tasks

Overview

The following table lists the general monitoring tasks that DMUTILITY facilitates for all three monitoring programs: DBCERTIFICATION, dbatools Analyzer, and dbatools Monitor.

Goal                                    Corresponding Task                      DMUTILITY Statement

Examine the status of some or all       Report on the files and rows in a       DBDIRECTORY statement
of the database files.                  specified list.
Temporarily ensure that the             Disable the database.                   DISABLE statement
database cannot be accessed.
Reestablish the database as             Enable the database.                    ENABLE statement
accessible.
Examine the contents of one or          List file contents on a terminal.       LIST/WRITE statement
more database files.                    Write file contents to a printer.

Sample Occasion to Use General Monitoring Tasks

The general monitoring capabilities of Enterprise Database Server are useful when you back up a database. For example, before performing a database dump, you can be sure that every row in the database will dump by

1. Running the DMUTILITY DBDIRECTORY statement to get a report of any rows that are locked out or have read errors
2. Running a row recovery to correct the rows reported in step 1

After the dump, you can perform steps 1 and 2 again to take care of any errors that might have occurred during the dump.
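As an illustrative sketch of step 1, the DBDIRECTORY statement can be submitted through DMUTILITY like the other statements in this guide. The minimal form shown here is an assumption; the full DBDIRECTORY operand syntax is documented in the Enterprise Database Server Utilities Operations Guide.

RUN $SYSTEM/DMUTILITY ("DB = EMPLOYEEDB DBDIRECTORY")

Reports on the files and rows of the EMPLOYEEDB database, including any rows that are locked out or have read errors, so that a row recovery can be run before the dump.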

Certifying the Consistency of Database Structures

Introduction

While the database is being updated, you can have Enterprise Database Server check aspects of data and file consistency and integrity. Examples are the ADDRESSCHECK, CHECKSUM, DIGITCHECK, and KEYCOMPARE DASDL options that are set in the EMPLOYEEDB sample database.

While the database is not being updated, you can run a special program called DBCERTIFICATION that examines structures from three perspectives. The following table lists the perspectives, the actions that DBCERTIFICATION takes, and the options you submit to request the actions.

Perspective                     Action                                          Option

Soundness of a physical file    Ensures that a file is physically intact        READONLY
                                and accessible, and isolates problems at
                                the data block level.
Consistency within a            Verifies that data in a single structure is     AVAILABLE SPACE, CONTENTS,
structure                       consistent, storage control information is      RECORD PLACEMENT, STRUCTURE
                                valid, and internal control information is      CHECK, and VERIFYSTORE
                                valid.
Consistency between             Verifies that relationships between data        COUNT, LINK, OWNER, and SETS
structures                      structures are correct.

Certification Output

The following analyses result from performing certification tasks:

- Error messages that display at the terminal from which you run DBCERTIFICATION
- Error messages, hexadecimal dumps, and error addresses that print on a printer when DBCERTIFICATION completes successfully

Database Updates and Certification

If the database were open to update during the certification, the fact that the data was changing would compromise the certification.

How You Certify Files and Structures

You run the DBCERTIFICATION program interactively or in a batch job, and in either of the following ways:

- Exclusively (locking the database until it completes)
- Online (allowing inquiry programs to access the database during the run)

Note: You can control the use of disk space by processing structures in groups. Do not attempt to certify all database structures at once unless the database is relatively small. Doing so risks exhausting available disk space because the system retains many of the work files it creates until all cross-checking is complete.

Examples of Syntax for an Interactive DBCERTIFICATION Session

Starting Command

RUN $SYSTEM/DBCERTIFICATION; FILE DASDL(TITLE = DESCRIPTION/EMPLOYEEDB ON HR);

Starts an interactive DBCERTIFICATION session for the EMPLOYEEDB database.

Specifying an Online Session

ONLINE

Specifies that inquiry programs can access the database during the session.

Specifying a Pack for Output Files

INTERNAL FILES (FAMILYNAME = HRCERT);

Directs certification work files to the HRCERT pack.

Selecting Output Options

OPTIONS = (RESET REMOTEOUT);

Selects printed output only.

Certifying One Structure

CERTIFY PERSON

Certifies the PERSON data set, and causes terminal output similar to the following:

#4903 DISPLAY:CERTIFYING PERSON.
#BOT 5342 (SYSDBA)SORT/INTERNAL/FILES
#EOT 5342 (SYSDBA)SORT/INTERNAL/FILES
#4903 PK303 (SYSDBA)DBCERTIFICATION/33637/EXTRACT REMOVED ON HRCERT
#4903 PK303 (SYSDBA)DBCERTIFICATION/33637/EXTRACT/SORTED REMOVED ON HRCERT
***NO PHYSICAL ERRORS DETECTED***
***NO STRUCTURE CHECK ERRORS DETECTED***
***NO AVAILABLE SPACE ERRORS DETECTED***
***NO CONTENTS ERRORS DETECTED***
***NO VERIFYSTORE ERRORS DETECTED***
#4903 DISPLAY:END OF CERTIFYING PERSON.
ANALYSIS TIMES: PROCESS 00:00:00.25 IO 00:00:00.92 ELAPSED 00:00:02.87

Certifying Some Structures

CERTIFY 1-5 (OPTIONS = ALL)

Certifies structure numbers 1 through 5 with all internal structure checks.

Certifying All Structures from All Perspectives

CERTIFY ALL (OPTIONS = ALL)

Certifies all structures (of small databases) from all perspectives.

Analyzing Logical and Physical Structures

Introduction

For database tuning, modification, reorganization, and documentation, a DBA relies on information about the condition of logical and physical database structures and their relations to other structures. The dbatools Analyzer program enables a DBA to assemble static information about database structures and files that can aid in short- and long-range database planning.

Example of How an Analysis Can Help

You might have noticed that the population for the PERSON data set in the EMPLOYEEDB database has the value of 100. This value means that the number of records in the data set cannot exceed 100. If a program tries to add the 101st record to this data set, a LIMITERROR occurs and the database is suspended.

What a DBA looks for in an analysis is how close an actual data set population is to the DASDL population for the data set. If the actual population is approaching the limit, the DBA can raise the population value for the structure in the DASDL; make any adjustments in sets, subsets, or related data sets; and reorganize the database before errors occur.

What an Analysis Provides

An analysis of the database provides varied results depending on your instructions to the dbatools Analyzer program:

- Identifies each structure (name, number, structure type, relation to other structures, and DASDL comments)
- Lists every set of each data set
- Lists the physical file attributes of the structure
- Lists the structure attributes that describe how the Accessroutines interfaces with the file
- Extracts from the description file and prints all DASDL-generated source statements included in the DMSUPPORT library
- Analyzes the file according to options you select (generally includes available space, control information, and data within the structure)
- Summarizes processor, input/output, and elapsed time required to perform the analysis
- Lists population trends of database structures

Where to Find Analysis Output

Analysis output

- Displays at the terminal from which you run dbatools Analyzer. When you specify a terminal page length greater than zero, dbatools Analyzer displays a full page of output and then waits for you to transmit one or more blanks before displaying the next page. If you transmit a nonblank character, the system discards the remaining output and displays the number sign (#).
- Prints or displays at another terminal, depending on your specifications in the OPTIONS command.

How to Analyze Files and Structures

You run the dbatools Analyzer program interactively or in a batch job. When you run the program, the database can be online, but not for updating. Allow inquiry programs to access the database only while the analysis is running. If the database were open to update during the analysis

- The fact that the data was changing would compromise the analysis.
- The program could be discontinued with a variety of errors.

Acquiring Database Status and Performance Statistics

Introduction

What is happening inside the database as programs are running against it? How efficiently does Program A work when Program Z is also in the mix? Why does the daily backup take so long?

Running the dbatools Monitor program gathers information that can help to answer these and other questions about how the database is performing as it is being updated. The DBA analyzes the dbatools Monitor program output, which can include comparing statistics from one time period with another time period, and getting a periodic readout of status during a time period.

Having analyzed the results of the current database options and parameters, the DBA can then experiment with changes to one option at a time. Subsequent monitoring of the database reveals whether the option change had the desired effect on performance.

Where to Find Monitoring Output

The monitoring operation captures status and statistics information for the entire database or for selected structures. The operation provides output on a terminal screen and in a disk file.

How to Monitor the Database

You initiate the dbatools Monitor program, which interacts with the database in the same way as any database inquiry program. Instead of gathering copies of data from the database, however, dbatools Monitor gathers statistics and messages about how the database is currently running.

The database must be online when you monitor it. Update and inquiry programs can be accessing the database.

Related Information

For information about...                        Refer to...

dbatools Analyzer                               Software Product Catalog
dbatools Monitor                                Software Product Catalog
DBCERTIFICATION command and option details      Enterprise Database Server Utilities Operations Guide
DMUTILITY                                       Enterprise Database Server Utilities Operations Guide
File attributes                                 File Attributes Reference Manual
LIMITERROR                                      DASDL Reference Manual
Row recovery                                    Section 8; Enterprise Database Server Utilities Operations Guide
Sample database DASDL                           Appendix A


Section 10
Using Audit Files as a Diagnostic Tool

In This Section

This section includes information about

- Reasons to view an audit file
- Contents of an audit file view
- Types of records an audit file contains
- Requesting an audit file view
- Ordering the contents of a view
- Understanding interval types
- Selection parameters and examples

Information Not Included in This Guide

This section does not teach you

- How to read and interpret the following formats in which Enterprise Database Server presents audit file information for viewing:
  - Hexadecimal format
  - Hexadecimal and alphanumeric format
- How to request an audit file view in batch mode
- How to tailor a PRINTAUDIT program so that you can choose audit information based on criteria you define
- Detailed meanings of the approximately 100 record types that make up audit files

However, the guide presents the abbreviated identifiers of the record types along with the full word meanings of the identifiers and the corresponding record type numbers.

Conventions for This Section

For simplicity, this section uses the term viewing to refer to all the options provided: printing, displaying, and extracting audit file information to a disk file. Whatever medium you choose, you are still viewing the audit file information.

Reasons to View an Audit File

Introduction

An audit file is essential for recovering a database. An audit file can also be useful for diagnostic purposes because it contains detailed, specialized information about database transactions, users, and the sequence of database events.

Basic Information-Gathering Diagnostics

You can obtain information such as

- When a user application signed on and off the database
- What program was updating which database structure at the time of a database failure
- When a particular record was updated and what it looked like before the change
- The point in time to rebuild or roll back the database

The more experienced DBA can use audit files to identify the program that changed a specific record or to determine approximately how long transaction states last.

Where to View the Audit File

Why should you choose one form of viewing an audit file over another? The following table provides general rules to answer this question.

Requirement                                     Viewing Method

Small amount of information                     Terminal
Large amount of information                     Printer
Programmatic control over the                   Batch SELECT
selection criteria

Contents of an Audit File View

Introduction

The Enterprise Database Server PRINTAUDIT program prepares the view of an audit file according to your specifications. PRINTAUDIT works with the physical audit file to present a logical view of the audits.

Determining the View Format

PRINTAUDIT gives you the option of viewing the audit file in one of two formats:

- Hexadecimal (default format)

  Hexadecimal is a base-16 numeric notation system frequently used to specify addresses in computer memory. In hexadecimal notation, the decimal numbers 0 through 15 are represented by the decimal digits 0 through 9 and the alphabetic characters A through F (A = decimal 10, B = decimal 11, and so forth).

- Hexadecimal and alphanumeric (syntax: ALPHA)

  Alphanumeric is a data type that includes the letters of the alphabet (A through Z and a through z), special characters, and the numerals 0 through 9.

In addition, you can limit the number of lines per record in the view.

What the View Contains

The view of the audit file contains

- An introductory summary of your request
- Block 0 (zero) of the file, which contains two words:
  - AUDITEOF (audit end of file) field: the number of the last record
  - AUDITTIMESTAMP field: the time when the audit file was created
- The records you requested
- Any partial audit record: the split portion of a logical audit record that is larger than one physical block or record and that must be split between two audit files, part at the end of one audit file and part at the beginning of the next audit file

The audit file viewing program includes in the view

- Records you request (whole records only)
- Partial records at the beginning or end of the file
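As a worked example of the hexadecimal notation used in the view, consider the last-record number that appears in the sample figures that follow. The decimal value 264 is shown in hexadecimal as 108 because

264 (decimal) = 108 (hexadecimal)
(1 x 16^2) + (0 x 16^1) + (8 x 16^0) = 256 + 0 + 8 = 264

This is why the view displays the field as AUDIT EOF= 264(000000000108): the decimal value first, then the same value in hexadecimal in parentheses.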

Introductory Lines of a Viewed Audit File

Figure 10-1 shows the introductory lines of an audit file view, with comments to explain them. These lines look the same in both view formats.

Audit File

** REPORT ON CONSTRAINTS: **
OUTPUT TO DISK. DISK FILE TITLE: (SYSDBA)PRINTAUDIT/REPORT/7180 ON HRAUDIT.
RANGE: 0 999999999
AUDIT RECORD ABBREVIATIONS: ** ALL **
MAX LINES OF HEX DUMP PER AUDIT RECORD: 99999
STACK NUMBERS: ** ALL **
MIX NUMBERS: ** ALL **
STRUCTURE NUMBERS: ** ALL **
TITLE =(SYSDBA)EMPLOYEEDB/AUDIT1 ON HRAUDIT.
PACKNAME =HRAUDIT.
KIND =PACK
MAXRECSIZE = 30 WORDS
BLOCKSIZE = 30 WORDS
LOGICAL BLOCK = 900 WORDS
AREASIZE = 3000 BLOCKS
AREAS = 100
LASTRECORD = 264

Comments

- Report states the view instructions.
- Output is to be a disk file with a specific title on the HRAUDIT pack.
- Range means that all records of the audit file are wanted. The view is not limited by a time interval, ABSN, relative block, or any other limiting value.
- Title states the name of the audit file being viewed.
- Packname states the location of the file.
- The list states the size of records, blocks, and areas of the audit file, and the number of the last record in the file.

Figure 10-1. Introductory Lines of Audit File View

Figure 10-2 shows how each audit file view format looks for the first two blocks of the view defined in Figure 10-1.

Hexadecimal View

****************** BLOCK 0 OF FILE 1 ******************
AUDIT EOF= 264(000000000108) AUDIT TIME STAMP = 02/22/1996 07:06:05
0(0000)  000000000001 000000000108 00027AEF719E 002B01300006
4(0004)  000000000000 000000000039 000000000000 190000000BB8
8(0008)  65C527342D25 000000000001 65C527AEF71A 000000000000
12(000C) 000000000013 000000000108 000000000384 000000000000
16(0010) 000000000000 FOR 12 WORDS (3 LINES )
28(001C) 000000000000 C0D9070AD2BE
****************** BLOCK 1 OF FILE 1 ******************
**** SER= 2(000000000002) LCW@ 12(000C) LWD@ 12(000C) SPLIT=0 ****
MY TS=07:06:06.2046(00027AF18D14) PR TS=07:06:05.8731(00027AEF719E)
DBSI = 21 STR= 0 SNR=(000) INX= 5(0005) SZ= 8
DATETIMESTAMP = 02/22/1996 07:06:06
0(0000)  000000000815 65C527AF188C 000000000000 000000000000
4(0004)  000000000000 000000000000 000000000000 000000000815

Hexadecimal and Alphanumeric View

****************** BLOCK 0 OF FILE 1 ******************
AUDIT EOF= 264(000000000108) AUDIT TIME STAMP = 02/22/1996 07:06:05
0(0000)  000000000001 000000000108 00027AEF719E  ??????????????:???
3(0003)  002B01300006 000000000000 000000000039  ??????????????????
6(0006)  000000000000 190000000BB8 65C527342D25  ?????????????E????
9(0009)  000000000001 65C527AEF71A 000000000000  ???????E??7???????
12(000C) 000000000013 000000000108 000000000384  ?????????????????D
15(000F) 000000000000 FOR 12 WORDS (4 LINES )    ??????????????????
27(001B) 000000000000 000000000000 C0D9070AD2BE  ????????????{R?K?
****************** BLOCK 1 OF FILE 1 ******************
**** SER= 2(000000000002) LCW@ 12(000C) LWD@ 12(000C) SPLIT=0 ****
MY TS=07:06:06.2046(00027AF18D14) PR TS=07:06:05.8731(00027AEF719E)
DBSI = 21 STR= 0 SNR=(000) INX= 5(0005) SZ= 8
DATETIMESTAMP = 02/22/1996 07:06:06
0(0000)  000000000815 65C527AF188C 000000000000  ???????E??????????
3(0003)  000000000000 FOR 3 WORDS (1 LINES )     ??????????????????
6(0006)  000000000000 000000000815               ????????????

Figure 10-2. Comparison of Audit File View Formats

When and Why to Limit the Lines of an Audit Record

The number of lines in an audit file record can vary greatly. PRINTAUDIT furnishes a default number of lines per record: 99999 lines. The line limit does not include the record heading information that the program always includes in the view.

You can limit the lines of an audit record to a specific number between 0 and 99999. The reasons for doing so range from practical to technical, and include

- Information you need is in the beginning lines of the record. Viewing the beginning of each record can enable you to pinpoint the records you want to see in a subsequent view.
- The file or printout would be too unwieldy without a line limitation.

Record Heading Information

The line count does not include the audit record heading information. The heading information identifies the record by abbreviation and number, stack number of the application program (SNR), associated structure number (STR), index number (INX), and size (SZ). For example, the DBSI record from Figure 10-2 contains record heading information:

DBSI = 21 STR= 0 SNR=(000) INX= 5(0005) SZ= 8
DATETIMESTAMP = 02/22/1996 07:06:06

Types of Records in an Audit File

Overview

An audit file contains records. However, many types of records in an audit file are unlike the types of records in other files. An audit file can be composed of approximately 100 different types of records.

A list of the audit record types is available to you in

- The section on printing, viewing, and extracting audit information in the Enterprise Database Server Utilities Operations Guide
- The DATABASE/PROPERTIES file, sequence range 30000000 to 30999999. The list of valid audit records is always current in this file. You can perform a CANDE write operation to print the list.

How to Recognize an Audit Record Type

When you view an audit file, how can you identify a record type? And how do you request that certain record types be included in a view? You can identify records by a mnemonic and by a number. Figure 10-3 shows the following record types:

- BTR (begin transaction)
- ADSS (allocate data set space)
- SAC (single abort create)
- DSC (data set create)

BTR = 4 STR= 2 SNR=(39C) INX= 17(0011) SZ= 4
0(0000) 39C002000404 000005000001 000000000001 39C002000404
ADSS = 16 STR= 7 SNR=(39C) INX= 21(0015) SZ= 7
0(0000) 39C007000710 000000000000 000000000232 000000000000
4(0004) 000000000119 000000000060 39C007000710
SAC = 81 STR= 7 SNR=(39C) INX= 28(001C) SZ= 7
0(0000) 39C007000751 000000600011 000000000009 000000000000
4(0004) 005007005001 000000000119 39C007000751
DSC = 10 STR= 7 SNR=(39C) INX= 35(0023) SZ= 284
0(0000) 39C007011C0A 000000000119 C2C2C2C2C2C2 C2C2C2C2C2C2
4(0004) C2C2C2404040 404040404040 404040404040 404040404040

Figure 10-3. Examples of Types of Audit File Records

Selecting Record Types

You can select record types by their individual mnemonics, separating each mnemonic with a comma. You can also select two record type groups by substituting a letter in the PRINTAUDIT syntax instead of typing the mnemonics for each record in the group. The groups are

- Control record group (syntax letter = C)

  The group includes the following records: SPT, BCP, ECP, DBSI, DBST, FILEDC, STRDC, and RECOV. These records relate to the control of transactions and database integrity. These records are also useful as rebuild and rollback points because they are quiet points, that is, points in time when

  - No transactions are in progress.
  - Updated buffers have been forced to disk.

- Data change record group (syntax letter = D)

  The group includes the following records: DSC, DSD, DSM, AIO, BIO, and CCD.
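For example, assuming the RECTYPE request syntax shown under "Strategy for Beginning a Request" later in this section, a group can be selected with its letter, and individual record types with their mnemonics (the mnemonic pair chosen here is illustrative):

PRINT RECTYPE = D

Selects the data change record group: DSC, DSD, DSM, AIO, BIO, and CCD.

PRINT RECTYPE = BTR, DSC

Selects only begin transaction and data set create records.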

Requesting an Audit File View

Introduction

Requesting an audit file view is an interactive process between you and the SYSTEM/PRINTAUDIT program.

Designating Where to View the Audit File

Table 10-1 lists the ways to tell PRINTAUDIT how you want to view the audit file.

Table 10-1. Designating Where to Send the Audit File View

This request...   Directs the audit file selection to...
PRINT             The system printer
DISPLAY           The terminal monitor
EXTRACT           A disk file called PRINTAUDIT/REPORT/<task number>. To supply a different file name, refer to the Utilities Operations Guide.
SELECT            A destination determined by a tailored version of the PRINTAUDIT program that you develop in the ALGOL programming language

Beginning a SYSTEM/PRINTAUDIT Session

Your Action

RUN $SYSTEM/PRINTAUDIT;
  FILE AUDIT (TITLE=(SYSDBA)EMPLOYEEDB/AUDIT1 ON HRAUDIT);

Starts a session to view an audit file of the EMPLOYEEDB database.

RUN $SYSTEM/PRINTAUDIT;
  FILE AUDIT (TITLE=(SYSDBA)EMPLOYEEDB/AUDIT1 ON HRAUDIT);
  FILE DASDL (TITLE=(SYSDBA)DESCRIPTION/EMPLOYEEDB ON HR);

Starts a session to view an audit file of the EMPLOYEEDB database and file-equates the database description file to enable the use of structure names as selection criteria.

System Response

PLEASE ENTER REQUEST

Strategy for Beginning a Request

When you begin a view of audit information, requesting a limited number of lines for a key type of audit record can be useful to help you find your way in the audit file. For example, consider the following request syntax, and the meaning of its parts in the subsequent table:

PRINT STACK = 459-461, 499, RECTYPE = C ALPHA LINES = 1

This syntax segment...   Limits the view to...
STACK = 459-461, 499     Audit records containing the specified stack numbers
RECTYPE = C              Control type audit records
ALPHA                    An alpha representation of the data on the right side of the printed output
LINES = 1                The first line of data for each audit record

After you view the first lines of several records, you are likely to have sufficient information to decide how to narrow your subsequent requests to view the audit file.

Entering a Request

Your Action

PRINT TIME 10/04/97 @ 13:24:00 TO 10/04/97 @ 13:30:00 ALPHA LINES = 1

Prints the first lines of records created between 1:24 and 1:30 p.m. on 10/4/97.

System Response

REQUEST COMPLETE
PLEASE ENTER REQUEST

Additional Sample Requests

This request displays all control records in the audit trail (useful when looking for a point of termination for a rebuild or rollback operation):

DISPLAY RECTYPE = C ALPHA LINES = 1

This request displays all control records in the audit trail within a given time frame (useful when looking for a point of termination for a rebuild or rollback operation):

DISPLAY TIME 10/04/97 @ 12:00:00 TO 10/04/97 @ 13:30 RECTYPE = C ALPHA LINES = 1

This request prints three types of records from the second block through the ninth-from-last block in the audit file:

PRINT 2 *-9, RECTYPE = BTR, ETR, FGTBLK

This request extracts, to a file named PRINTAUDIT/REPORT/<request task number>, SAC records from records containing the mix number 3721:

EXTRACT MIX = 3721, RECTYPE = SAC

This request prints control type records containing the specified stack numbers:

PRINT STACK = 459-461, 499, RECTYPE = C

Multiple Line Request

Your Action

DISPLAY TIME 11/16/97 @ 17:45:00 TO 11/17/97 @ 08:00:00%

Press the Transmit key.

System Response

#%

Your Action

PROGRAM (PROD)DAILY/SALES/TOTALS ON HUBPACK, STRUCTURE = 14-18

Displays records for structures 14 through 18 created by the specified program from 5:45 p.m. on 11/16 through 8 a.m. on 11/17.

Ending a SYSTEM/PRINTAUDIT Session

QUIT

Ends the session.

Ordering the Contents of a View

Introduction

PRINTAUDIT needs to receive parameters in an order that logically narrows down the records selected for the view. Figure 10-4 shows how selections narrow the focus of the view.

(1) Time Interval
(2) Program Name
(3) Structure 21, Block 5
(4) Field

Figure 10-4. Narrowing the Focus of an Audit File View Request

In Figure 10-4, the numbers in parentheses show the order of the parameters for the request, from the parameter that includes the most records to the parameter that includes the fewest records. PRINTAUDIT initially considers only the records that the program created during the time interval. Then PRINTAUDIT produces the data change records for the field in block 5 of structure 21.

Order and Purpose of Request Parameters

Table 10-2 shows the order in which you state the parameters for your request, whether each parameter is required or optional, and why you use each parameter.

Table 10-2. Order and Purpose of Request Parameters

Order  Parameter                        Required or Optional             Why You Use
1      Viewing destination              Required                         Direct the view to the printer, terminal, or a file.
2      Time, ABSN, or block interval    Optional                         Limit the view to a portion of the file; no entry selects the whole file.
3      Stack number                     Optional                         Stack numbers of programs that caused database changes
4      Program mix number               Optional                         Mix numbers of programs that caused database changes
5      Program name                     Optional                         Name of the program that caused database changes
6      Structure ID                     Optional; required for a field   Number of the data structure
7      Field offset in database record  Optional                         Data change records for one field in one data structure (for meaningful results)
8      Record type                      Optional; not used with a field  Specific record type mnemonics

Understanding Interval Types

Introduction

An interval focuses the view on a portion of the specified audit file between a starting point and an ending point. The types of intervals you can use are listed in Table 10-3.

Table 10-3. Types of Intervals for Audit File Views

Interval Type   Definition
Time            Records between an earlier timestamp and a later timestamp
ABSN            Records between an audit block of one serial number and a later audit block of another serial number (An ABSN is an increasing reference across audit files.)
Block           Records between an earlier physical file block and a later physical file block (A block is an increasing reference within a single audit file.)
None            All records in the file

Time and ABSN interval types can have an ending point in the specified audit file or in a subsequent audit file. PRINTAUDIT automatically makes the switch to the next audit file in the series.

Table 10-4 lists the interval types you can designate in a PRINTAUDIT request and provides information about how to express each. Use this table when you perform the interactive process of requesting an audit file view.

Table 10-4. Interval Types and Examples

Entire File
  How to express: No interval entry
  Example: (no interval entry)

Time Interval
  Order: Earlier to later
  Form: Decimal integers (Table 10-5)
  Conditions: See Table 10-6. The later time can be in a subsequent audit file; the audit file switch is automatically performed.
  Examples:
    TIME 10/04/97 @ 13:24:00 TO 10/04/97 @ 13:30:00
    TIME 10:55:35 TO 11:55:59

ABSN Range Interval
  Order: Smaller to equal or larger
  Form: Unsigned decimal or hexadecimal integers (1 through 9999)
  Conditions: The larger ABSN can be in a subsequent audit file; the audit file switch is automatically performed.
  Examples:
    SERIAL 6785 6843
    SERIAL 0421 0421
    SERIAL 0006 1231

Block Range Interval
  Order: Earlier to later within one file
  Form: Unsigned decimal or hexadecimal integers for actual block numbers; for the last block, an asterisk (*); for a block relative to the last block, an asterisk with a minus sign (*-). No audit file switching.
  Examples:
    14
    2 *-4    (Block 2 to the fourth-from-last block)
    10 *     (Block 10 to the last block)
    6 *-15   (Block 6 to the 15th-from-last block)

Using Timestamps

You can use timestamp values to specify time intervals in the audit file. A timestamp records the date and time of the creation of the audit record. The format of time and date stamps in audit records is as follows:

AUDIT TIME STAMP = 02/22/1997 07:06:05
DATETIMESTAMP = 02/22/1996 07:06:06

You specify date and time stamps by using the format shown in Table 10-5.

Table 10-5. Values for Date and Time in Time Interval

Option   Value
Month    Integer between 1 and 12
Day      Integer between 1 and 31
Year     Integer between 0 and 99
Hour     Integer between 0 and 23 (the value 0 represents midnight)
Minute   Integer between 0 and 60 (represents the number of minutes past the hour; 60 equals the next hour exactly)
Second   Integer between 0 and 60 (represents the number of seconds past the minute and hour; 60 equals the next minute exactly)
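The component ranges in Table 10-5 can be checked with a short sketch. The following Python fragment is illustrative only; PRINTAUDIT itself performs its own parsing on the MCP, and the function name here is an assumption:

```python
def valid_timestamp(month, day, year, hour, minute, second):
    """Check a date/time against the Table 10-5 component ranges.
    Note the unusual bounds: minute and second may be 60, meaning
    'exactly the next hour/minute'; year is a two-digit value."""
    return (1 <= month <= 12 and 1 <= day <= 31 and 0 <= year <= 99
            and 0 <= hour <= 23 and 0 <= minute <= 60 and 0 <= second <= 60)

print(valid_timestamp(10, 4, 97, 13, 24, 0))   # True
print(valid_timestamp(10, 4, 97, 24, 0, 0))    # False - hour must be 0 through 23
print(valid_timestamp(12, 31, 99, 23, 60, 60)) # True - 60 is allowed for minute/second
```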

PRINTAUDIT verifies the timestamps you specify for an interval, with the results shown in Table 10-6.

Table 10-6. Results of PRINTAUDIT Verification of Timestamps

If PRINTAUDIT...
- Finds the beginning and ending timestamps:
  Targets all records between the timestamps, regardless of whether the records contain a timestamp.
- Does not find either timestamp:
  Does not target any records.
- Does not find the beginning timestamp:
  Targets records for the next timestamp period after the beginning timestamp, if the next timestamp is prior to the ending timestamp. Otherwise, does not target any records.
- Does not find the ending timestamp:
  If the audit file ends before the ending timestamp is reached, looks for the next audit file in the series. If a later timestamp is reached prior to the end of the file, targets the record before the audit record that contains that timestamp.
- Receives one timestamp only:
  Targets that timestamp as the beginning timestamp and the end of the file as the end of the interval (no audit switching occurs).
- Receives a beginning timestamp with a date but without a time:
  Targets the record at the beginning of the date specified (if necessary, audit switching occurs).
- Receives an ending timestamp with a date but without a time:
  Targets the record at the end of the date specified (if necessary, audit switching occurs).
- Receives a beginning timestamp with a time but without a date:
  Assumes today's date (if necessary, audit switching occurs).
- Receives an ending timestamp with a time but without a date:
  Assumes the date of the beginning timestamp (if necessary, audit switching occurs).

Selection Parameters and Examples

Table 10-7 lists the selection parameters you can include in a request for an audit file view and provides several examples of each selection parameter. Use this table when you want to specify a selection in your request for an audit file view.

Note: The use of an equal sign (=) in all examples in Table 10-7 is optional.

Table 10-7. Selection Parameters and Examples

1  Stack number, series, or range
   Order: Smaller to larger
   Form: Unsigned decimal or hexadecimal integer
   Examples:
     STACK = 643
     STACK = 234-236
     STACK = 234, 235, 238

2  Mix number, series, or range
   Order: Smaller to larger
   Form: Unsigned decimal integer
   Examples:
     MIX = 3721
     MIX = 4589, 4598, 4621
     MIX = 4578-4601

3  Program title
   Order: None to one program title
   Form: File title
   Example:
     PROGRAM = (PROD)HELIX/1 ON ARC

4  Structure name or number, series of names or numbers, or range of numbers
   Order: Smaller to equal or larger
   Form: Unsigned decimal integer
   Block number or range within the structure or structures
   Order: Smaller to equal or larger
   Form: Unsigned decimal or hexadecimal integer
   Examples:
     STRUCTURE = 21-28
     STRUCTURE = 16, 18, 19
     STRUCTURE = 21 BLOCK = 2
     STRUCTURE = 56 BLOCK = 3-6
     STRUCTURE = 34 BLOCK = WAF = 10

5  Field
   Order: None to one field offset for one structure
   Form: Unsigned decimal or hexadecimal integer
   Examples:
     FIELD 2,15 = 24
     FIELD 4 = 17

6  Record type
   Order: None
   Form: Record type mnemonics; C for control records; D for data change records
   Examples:
     RECTYPE = BTR, ETR, FGTBLK
     RECTYPE = IDSBLK, ODDSC C, RDSC, RDSO D

Related Information Topics

For information about...                      Refer to...
Audit files                                   Section 3; Enterprise Database Server Utilities Operations Guide
Audit record types                            Enterprise Database Server Utilities Operations Guide
Generating a customized PRINTAUDIT program    Enterprise Database Server Utilities Operations Guide
Interpreting views of audit files             Enterprise Database Server Utilities Operations Guide
Interval specifications                       Enterprise Database Server Utilities Operations Guide
PRINTAUDIT program                            Enterprise Database Server Utilities Operations Guide
Selection criteria                            Enterprise Database Server Utilities Operations Guide


Section 11
Updating and Reorganizing the Database

In This Section

Explaining how to perform the update and reorganization operations is beyond the scope of this guide. However, this section includes information about

- Reasons for database structural change
- Planning for an update or a reorganization
- Online set garbage collection

Changing Database Structures

Business Reasons for Change

The logical database structures that reflect the real-world data map of the business can require change from time to time because of changes in the business or in the environment in which the business operates. The following examples describe business situations that result in structural database changes.

New Employee Information

Creative Samples Inc. adds a skill evaluation to its employee profile. The profile contains two pieces of information: skill level and rating within skill level. How can these data be added to the data structures?

A workable plan is to define the two new data items in DASDL and perform a procedure to add two new fields in the PERSON data set record.

Elimination of a Product

Because many people now have computers, the typewriter product of Creative Samples Inc. is becoming obsolete. Sales have fallen off, and the company has decided not to sell the product in the future. How should the DBA reflect this change in the database?

The database defines a product data set with fields for each product. The DBA plans to make a change 1 week after the remaining product has been recalled to the warehouse. After final product totals have been recorded, the DBA deletes the fields for the product.

New Requirements of the Postal Service

The postal service requires that the full 9-digit postal codes be used starting in 2 months. The current data item definition for postal codes in the EMPLOYEEDB database requires 5 digits. What should happen to meet the new requirement?

The DBA modifies the postal code data item definition to allow 9 digits. Later on, a program updates the database with the additional digits.

Other Reasons for Change

Another reason for change to physical database files is the need to compress unused file space within structures and to return this space to the system. Garbage collection is the common name for this type of reorganization.

Types of Structure Change

Many types of structure change become necessary during the life of a database, including

- Adding a new data set with its sets and subsets
- Adding a set or subset to an existing data set
- Adding data items to an existing data set
- Deleting a data set, set, or subset
- Deleting data items from a data set
- Reordering a data set
- Garbage collection

Major Operations That Effect Structure Change

Two major computer operations effect changes to database structures:

- An update to the DASDL source file, together with a recompilation of DASDL and the associated tailored software for the database
- A reorganization that makes the actual changes to the physical database files

Some database changes require both operations, and some require only one. The person in charge of making the database changes must fully understand the operations necessary for the changes that need to be made. This person also needs to understand what types of database changes can be performed together.

Planning an Update or a Reorganization

Planning Tasks

Structural database changes require careful planning and timing. The person directing the restructuring needs to understand thoroughly the entire reorganization process. This person also needs to identify the

- Structures to be included in the reorganization
- Optimum time for the reorganization, based on database usage, application programs that might need code alterations and recompiling as a result of the reorganization, and other resource use factors
- Benefits of performing an online or offline reorganization
- Resources to handle the update and reorganization operations, as well as auxiliary operations such as database dumps prior to and after the reorganization
- Auxiliary personnel needed to help with the reorganization
- Resources within the organization and at Unisys Corporation that are available to help solve problems that arise during the reorganization

Preparing Yourself to Understand Reorganizations

Before you perform a database reorganization, study thoroughly the sources of information given under Related Information Topics in this section. In addition,

- Make a plan for the reorganization, including the timing of major steps.
- Consult with someone in your own organization who has experience, or call your Unisys Support Center for advice.

Preparing the Database for an Update or a Reorganization

Before performing a DASDL update, you should always back up the database description file. Before a reorganization, you should dump the database. It is highly recommended that you perform an offline dump of the database and back up its associated files. Doing so simplifies reloading of the database if you encounter a problem during the reorganization process.

Online Set Garbage Collection

Garbage collection is an important form of reorganization and should be performed regularly. Garbage collection consolidates deleted or unused space in data sets, sets, and subsets and returns this space to the system. In addition, records in a data set can be physically reordered, and index structures can be rebalanced to achieve a uniform coarse/fine table distribution and thereby minimize access time.

The Enterprise Database Server Extended Edition addresses the goal of greater database availability by providing an online garbage collection facility for disjoint index sequential sets and subsets.

GARBAGE COLLECT Command

Use the Visible DBS GARBAGE COLLECT command to initiate, terminate, and monitor the status of an online garbage collection of sectioned or nonsectioned disjoint index sequential sets. The GARBAGE COLLECT command operation has the following characteristics:

- Runs online.
- Runs up to 10 garbage collections in parallel.
- Does not lock out users.
- Requires that the INDEPENDENTTRANS option be set for the database.

- Either succeeds completely or makes no change to the structure. For example, if a halt/load occurs while the garbage collection task is running, the system discards the results of the operation and makes no change to the set.
- Cannot run at the same time as
  - An online dump
  - A database reorganization
  - A reconstruction on any structure in the database
  - A Remote Database Backup takeover
  For example, if you execute the GARBAGE COLLECT command during a reorganization, the system does not initiate the garbage collection and instead displays an error message.

Alternative to a Reorganization

The GARBAGE COLLECT command is an advantageous alternative to a reorganization when you need to

- Consolidate unused space in sets or subsets.
- Rebalance index structures to optimize access through sets.

The benefits are

- The database remains online and available.
- Any failure of the garbage collection task has no impact on the database.

Related Information Topics

For information about...   Refer to...
DASDL update               DASDL Reference Manual; Enterprise Database Server Utilities Operations Guide
Database dump              Section 7; Enterprise Database Server Utilities Operations Guide
Garbage collection         Enterprise Database Server Utilities Operations Guide
Reorganization             Enterprise Database Server Utilities Operations Guide


Section 12
TranStamp Locking and Record Serial Numbers (RSNs)

In This Section

This section includes information about

- TranStamp locking
- Record serial numbers (RSNs)

TranStamp Locking

Definition

TranStamp locking is a new type of locking algorithm for data sets that are defined in DASDL with the keyword EXTENDED, a syntactical requirement for data set sectioning.

Purpose

The TranStamp locking algorithm makes the data record an integral part of the locking process. With the aid of a unique transaction identifier, TranStamp locking provides a substantial increase in record-lock related performance.

Components of TranStamp Locking

The TranStamp locking algorithm for Enterprise Database Server Extended Edition uses two components:

- A TranStamp identifier, created when a program performs a BEGIN-TRANSACTION operation. The identifier is unique to the program and the transaction.
- A TranStamp field in each data record, which stores the TranStamp identifier of any program that locks the record.

TranStamp locking causes each record size to be expanded by one word to accommodate the TranStamp field.

How Traditional Locking Works

With the traditional locking algorithm, the lock table contains one entry for every record locked in the database. Each time a program attempts to lock a record, the Enterprise Database Server must search through the lock table to determine whether another program has already locked that record. If the record is already locked, the program must wait for the record to be freed. If the record is not locked, a new entry must be added to the table. As the number of locked records increases, so does the effort to determine whether a record is already locked.

Whenever a program frees a record, the corresponding entry in the lock table must be removed. When a program performs an END-TRANSACTION operation, the entries for all records that had been locked must be removed. The Enterprise Database Server must, therefore, search through the table to determine the entries to be removed. As the number of locked records increases, so does the effort required to search the table.

Disadvantages of Traditional Locking

The traditional locking mechanism has the following disadvantages:

- Because the lock table is a fixed-size resource, it limits a single program to 50,000 locked records.
- As a program locks more records,
  - More entries are required in the lock table.
  - The work required to free records during an END-TRANSACTION operation increases.

How TranStamp Locking Works

When a program attempts to lock a record, the Enterprise Database Server Extended Edition first retrieves the value in the record TranStamp field and compares it to the entries in the lock table. One of two events occurs:

- If a corresponding lock table entry is found, the record is known to be locked, and the program attempting to lock the record must wait for the record to be freed.
- If no corresponding lock table entry is found, the Enterprise Database Server Extended Edition locks the record by placing the program TranStamp identifier into the record TranStamp field, and then allows the program to continue.

When the program performs its END-TRANSACTION operation, the Enterprise Database Server Extended Edition removes the program TranStamp identifier from the lock table. The removal of the identifier automatically frees all records locked by that program, because it is the presence of a TranStamp identifier in the lock table that determines whether a record is locked. Leftover values in the record TranStamp field are of no concern, because the value in the field is never reused. In addition, the next time the record is locked, the TranStamp identifier of the locking program is placed in the TranStamp field.
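The mechanism described above can be modeled in a few lines. The following Python sketch is illustrative only; the class and field names are assumptions, not the Enterprise Database Server implementation. The essential point it demonstrates is that the lock table holds one entry per live transaction, and stamping the record is itself the lock:

```python
class TranStampLocking:
    """Toy model of TranStamp locking: one lock-table entry per
    transaction, with the lock state carried in the record itself."""

    def __init__(self):
        self.active = set()   # lock table: one entry per live transaction
        self.next_id = 1      # TranStamp identifiers are never reused

    def begin_transaction(self):
        tid = self.next_id
        self.next_id += 1
        self.active.add(tid)
        return tid

    def lock(self, tid, record):
        stamp = record.get("transtamp")
        if stamp in self.active and stamp != tid:
            return False      # another live transaction holds the record
        record["transtamp"] = tid   # stamping the record is the lock
        return True

    def end_transaction(self, tid):
        # Removing the identifier frees every record it stamped at once;
        # leftover stamps are harmless because identifiers are never reused.
        self.active.discard(tid)

db = TranStampLocking()
rec = {"name": "SMITH"}
t1 = db.begin_transaction()
t2 = db.begin_transaction()
print(db.lock(t1, rec))   # True  - the record carried no live stamp
print(db.lock(t2, rec))   # False - t1's stamp is still in the lock table
db.end_transaction(t1)    # frees all of t1's records in one step
print(db.lock(t2, rec))   # True  - t1's leftover stamp is now inactive
```

Compare this with the traditional scheme, where END-TRANSACTION must find and delete one lock-table entry per locked record.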

Results of TranStamp Locking

Changing to TranStamp locking yields these results:

- The data record is made a part of the locking scheme, eliminating much of the overhead required to manage internal lock tables.
- The size of the lock table is reduced from one entry per locked record to one entry per transaction.
- The limit on the number of records that can be locked by a single program is eliminated, because the lock table requirements for a given program are fixed.
- The overhead associated with END-TRANSACTION operations is reduced, because all records can be freed by invalidating the TranStamp identifier associated with the lock (as opposed to freeing each locked record).

Support of Traditional and TranStamp Locking

The Enterprise Database Server Extended Edition software supports both the traditional and TranStamp locking algorithms. TranStamp locking is in effect for a structure for which the keyword EXTENDED is specified. Traditional locking is in effect for structures not defined with the keyword EXTENDED.

A transaction can involve structures using both the traditional locking and TranStamp locking algorithms. However, the benefits of TranStamp locking are then only partially realized. The benefits of TranStamp locking are fully realized only when all structures involved in a transaction are using TranStamp locking.

Record Serial Numbers (RSNs)

Definition

An RSN is a unique number assigned to each data set record. An RSN is guaranteed to be unique within a data set, but not within the database. That is, once an RSN is used within a data set, that RSN is never used again within that data set, but it can be used for a record in another data set. RSNs begin at 1 for each data set and increase by one each time a record is created, regardless of whether the transaction creating the record is successful.

Internally, an RSN is stored as a single word. The word is divided into two parts: the integer part (bits 38 through 0) and the overflow part (bits 45 through 39). When an RSN is less than 2**39, it is stored as an integer. If an RSN is equal to or greater than 2**39, then bits 45 through 39 contain the quotient of the RSN divided by 2**39, and the arithmetic remainder is stored in the lower 39 bits. The algorithm for assigning the RSN value is demonstrated by the following ALGOL code:

DEFINE
  RSN_OVFLF = [45:7] #,
  RSN_INTEGERF = [38:39] #,
  UNDEFINED = REAL (NOT FALSE) #,
  BUMP_RSN (RSN_VALUE) =
    BEGIN
    IF RSN_VALUE.RSN_INTEGERF = UNDEFINED.RSN_INTEGERF THEN
      BEGIN
      RSN_VALUE.RSN_INTEGERF := 0;
      RSN_VALUE.RSN_OVFLF := RSN_VALUE.RSN_OVFLF + 1;
      END
    ELSE
      RSN_VALUE.RSN_INTEGERF := RSN_VALUE.RSN_INTEGERF + 1;
    END #;

Based on the algorithm, the maximum RSN is 2**46 - 1. Applications can use a REAL item to access an RSN and apply the algorithm to calculate the value. In the following ALGOL example, A_RSN simulates an RSN retrieved from the database, and B_RSN is the calculated RSN value:

DOUBLE B_RSN, P1;
REAL B, A_RSN, P2;
B := 2**39;
A_RSN := 4"010000000003";
P1 := (A_RSN.[45:7]) * B;
P2 := A_RSN.[38:39];
B_RSN := P1 + P2;
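Readers without access to an ALGOL environment can model the same word layout in Python. The function names below are assumptions; the bit arithmetic follows the [45:7] and [38:39] partial-word definitions above:

```python
RSN_INT_BITS = 39                    # integer part occupies bits 38..0
RSN_OVFL_BITS = 7                    # overflow part occupies bits 45..39
RSN_INT_MAX = 2**RSN_INT_BITS - 1    # all ones in the integer field

def bump_rsn(word):
    """Model of BUMP_RSN: increment the 39-bit integer part,
    carrying into the 7-bit overflow part when the field wraps."""
    integer = word & RSN_INT_MAX
    overflow = (word >> RSN_INT_BITS) & (2**RSN_OVFL_BITS - 1)
    if integer == RSN_INT_MAX:       # integer field exhausted
        integer = 0
        overflow += 1
    else:
        integer += 1
    return (overflow << RSN_INT_BITS) | integer

def rsn_value(word):
    """Reconstruct the logical RSN: overflow * 2**39 + integer part,
    as in the second ALGOL example above."""
    integer = word & RSN_INT_MAX
    overflow = (word >> RSN_INT_BITS) & (2**RSN_OVFL_BITS - 1)
    return overflow * 2**RSN_INT_BITS + integer

# The ALGOL example value A_RSN = 4"010000000003":
print(rsn_value(0x010000000003))     # 2 * 2**39 + 3 = 1099511627779
```

Running the sketch confirms the partitioned encoding: the word 4"010000000003" has an overflow field of 2 and an integer field of 3, for a logical RSN of 2 * 2**39 + 3.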

Purpose

RSNs enable an internal optimization within sets that allow duplicates but that do not declare DUPLICATES FIRST or DUPLICATES LAST. An RSN primarily serves as a tiebreaker value for sets that allow duplicate keys. The Enterprise Database Server appends the tiebreaker value to the actual set key to create unique entries in the set. In the Enterprise Database Server Standard Edition, the tiebreaker is the absolute address (AA) word, a value that specifies the physical location of the record in the data set.

Note: For the remainder of this topic, the term duplicate set means a set with duplicates that does not declare DUPLICATES FIRST or DUPLICATES LAST.

How the AA Word Works as a Tiebreaker

During the garbage collection of a data set, the movement of the records necessitates a fix-up of spanning sets to ensure that the set entries continue to point at the corresponding data set records. The fix-up process works differently for nonduplicate and duplicate sets.

- Nonduplicate set: Only the pointer portion of a set entry changes. The key values are not modified, and the fix-up process is relatively fast compared with the work required to fix up a duplicate set.
- Duplicate set: When the AA word functions as a tiebreaker, the key portion of the set entry must be modified. The key is made up of both the DASDL-defined field and the AA value of the target record. Since the data set record has moved, its AA word value has changed. Consequently, garbage collection of a data set spanned by a duplicate set usually results in regeneration of the set and is substantially more expensive than a fix-up of the AA words in the existing set.

How the RSN Works as a Tiebreaker

When the Enterprise Database Server Extended Edition substitutes an RSN for an AA word as the tiebreaker value, the duplicate set no longer has a key that needs to be changed during a garbage collection of the data set.
In addition, a fix-up of the AA word in the set is now possible, because the only place the AA word appears is in the pointer portion of the entry. The key value, consisting of the DASDL-defined key and the RSN value, is unchanged.
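A miniature sketch illustrates why the RSN tiebreaker helps. The following Python fragment uses a hypothetical record layout, not the server's actual set format; it shows that when set keys end in an RSN rather than an AA value, moving the records during garbage collection changes only the pointer portion of each entry:

```python
# Three records, two of which share a duplicate key value.
records = [
    {"aa": 100, "rsn": 1, "key": "SMITH"},
    {"aa": 104, "rsn": 2, "key": "SMITH"},   # duplicate key
    {"aa": 108, "rsn": 3, "key": "JONES"},
]

def set_entries(recs, tiebreaker):
    """Each set entry is ((DASDL key, tiebreaker), pointer-to-record).
    The tiebreaker makes duplicate keys unique within the set."""
    return sorted(((r["key"], r[tiebreaker]), r["aa"]) for r in recs)

before = set_entries(records, "rsn")

# Garbage collection moves every record to a new physical address:
for r in records:
    r["aa"] -= 50

after = set_entries(records, "rsn")

# With RSN tiebreakers the key portions survive the move unchanged;
# only the pointer portions need fixing up.
print([key for key, _ in before] == [key for key, _ in after])   # True
```

Repeating the experiment with `tiebreaker="aa"` would show the key portions changing along with the addresses, which is why an AA-based duplicate set must usually be regenerated.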

Record Expansion to Include the RSN Field

The Enterprise Database Server Extended Edition inserts an RSN field in all records of data sets for which the keyword EXTENDED is specified. All sectioned data sets have an RSN field, because the keyword EXTENDED is required to section a data set. Because an RSN value is stored as part of the data record, the data record is expanded by one word.

RSN and Application Programs

The field containing the RSN value for each record can be made visible to user programs by declaring in DASDL a data set item of type RSN. Otherwise, the RSN field is not visible. From an application program perspective, RSN items are read-only fields. If an RSN item is declared in DASDL, a program can interrogate the RSN item and use it as a key in a spanning set.

Section 13
Scenarios for Using Enterprise Database Server Extended Edition

In This Section

This section outlines Enterprise Database Server Extended Edition solutions to the following problems:

- Data set capacity reaching Enterprise Database Server limits
- Database performance limited by audit trail throughput
- Database performance limited by set contention

The section also explains how to evaluate Enterprise Database Server Extended Edition features in a general transaction processing environment.

Scenario 1: Data Set Capacity Reaching Limits

Situation

The database contains key structures whose volume grows rapidly and steadily from month to month. Records in these data sets are seldom removed, causing ever-increasing data set populations.

Capacity Problem

Enterprise Database Server Standard Edition data set limits are twofold:

- The maximum population is 268,435,455 records.
- The maximum size is 48 gigabytes.

It is possible to work around the population specification, but the 48-gigabyte limit is a firm limit.

Solutions

You can solve the capacity problem in one of three ways:

- Increase the capacity of the data set by specifying physical file attributes.
- Logically separate the data set so that the resulting structures fit within the physical parameters dictated by the Enterprise Database Server.
- Divide data sets into sections.

Increasing Data Set Capacity by Specifying File Attributes

Normally, you specify the data set population value in DASDL by using the POPULATION option. DASDL uses the POPULATION value to set values for AREAS, AREASIZE, and BLOCKSIZE if these values are not already specified.

The Enterprise Database Server monitors the data set population by counting the number of areas allocated to the data set (not by counting the number of records in the data set). You can increase the population by increasing the number of records that fit within a single area. In this way, the actual number of data set records can exceed the value specified in the POPULATION option.

An alternate means of controlling data set population is to specify values for the file attributes AREAS, AREASIZE, and BLOCKSIZE. Because the DASDL POPULATION value is the product of the values of these three attributes, a data set can reach a maximum size of 48 gigabytes.
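The relationship above can be sketched as simple arithmetic. In this illustrative Python fragment, the units assumed for AREASIZE (blocks per area) and BLOCKSIZE (records per block) are assumptions based only on the statement that POPULATION is the product of the three attributes:

```python
# Standard Edition data set limits from the text.
MAX_POPULATION = 268_435_455        # records
MAX_SIZE_BYTES = 48 * 2**30         # 48 gigabytes (the firm limit)

# Hypothetical attribute values for one data set.
AREAS = 1000
AREASIZE = 500                      # assumed: blocks per area
BLOCKSIZE = 100                     # assumed: records per block

# POPULATION is the product of the three attributes.
population = AREAS * AREASIZE * BLOCKSIZE
print(population)                   # 50000000
print(population <= MAX_POPULATION) # True - within the population limit
```

Raising the number of records per area raises the effective population the server permits, which is why the record count can exceed the declared POPULATION value.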

Logically Separating a Data Set

You can break a data set into multiple structures. To do so successfully, you must assess the consequences of such a break. For instance, you must determine the application changes that are necessary, the support that is required for users, and the maintenance overhead. Conceptually, breaking the data set into multiple structures is a simple means of circumventing the population limitations of the Enterprise Database Server. Realistically, however, this method is impractical at worst and, at best, awkward.

Procedure

To separate a data set, perform the following steps:

1. Decide on these factors:
   • The number of data sets into which the existing data set is to be divided
   • The criteria that define where structure records are to be placed
   • The spanning sets to be created for each new structure
2. Perform a DASDL update.
3. Write a program (or programs) to move the data from the existing data set into the new structures.
4. Modify the application programs that are to use the multiple structures and the criteria to separate the records. The existence of multiple structures requires
   • An increase in the number of implicit or explicit data set and set calls made by the application programs
   • The logic that enables access to each new data set and set
5. To redistribute records among the various structures in the future, repeat steps 2 through 4.
6. Include the new structures in the WFL jobs for dumping and restoring the database.

Dividing Data Sets into Sections

The sectioned data set feature of the Enterprise Database Server Extended Edition accomplishes the same result as manually dividing the data set into multiple structures, but without the difficulty.

Sectioning a data set physically divides the logical data set into multiple physical files and eliminates two problems:

• The population limit of a single file
• The need to change application programs

The Enterprise Database Server handles all the mapping of the logical data set to the physical files.

Procedure

To section a data set, perform the following steps:

1. Decide on these factors:
   • The number of sections into which the data set is to be divided (the distribution of records is random, but the application view is still of a single data set)
   • Whether to section the spanning sets for performance purposes
2. Update the DASDL description for the data set by specifying
   • For the data set, the options EXTENDED and SECTIONS, with a value for the number of sections
   • For any sets to be sectioned, the option SECTIONS, with section bounds specifications
3. Reorganize the data set with its sets. The Enterprise Database Server Extended Edition distributes data set records among the sections. The data set appears as a single data set to application programs. Database WFL jobs remain valid.
4. Recompile application programs that access the data set.
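Step 2 of the procedure can be sketched as a DASDL update. The structure name and section count here are hypothetical; the fragment simply shows the EXTENDED and SECTIONS options the procedure calls for:

```
% Hypothetical fragment -- structure name and section count are illustrative.
CUSTOMER DATA SET
(
    CUST-ID    NUMBER (10);
    CUST-NAME  ALPHA (30);
),
EXTENDED,
SECTIONS = 4;   % divide the logical data set across 4 physical files
```

After this DASDL update, the reorganization in step 3 distributes the existing records among the four sections.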

Scenario 2: Database Performance Limited by Audit Trail Throughput

Situation

When an application program makes a change to the database, the change is recorded in the Enterprise Database Server audit trail. In a transaction-intensive environment where many programs simultaneously make changes to a database, the throughput of the database can be limited by the rate at which changes can be recorded in the audit trail.

Audit Output Bottleneck Problem

When many programs are simultaneously updating the database, the rate at which audit images are generated can exceed the speed at which those images can be transferred to disk. When an excess of audit images occurs, the Enterprise Database Server prevents application programs from making any other database changes until the pending audit write has completed and a new write to disk can begin.

Audit trail throughput is influenced by many factors, including disk drive performance, the number of programs writing to the audit pack, the number of physical drives in the audit pack family, the number of audit buffers used by the Enterprise Database Server, the contents of the audit buffers, and the number of physical files used to make up a logical audit file. Audit throughput is only as fast as the slowest component of the system.

Solutions

You can solve the audit trail throughput problem in one of three ways:

• Reduce the amount of data being written to the audit trail.
• Improve the efficiency of Enterprise Database Server I/O.
• Increase the throughput of the disk subsystem.

Reducing the Data Being Written to the Audit Trail

The goal of data reduction is to reduce the amount of data being written to disk. A change in the means of representing information can result in less data even though the amount of information being written is the same.
When an application program makes a change to a data record and stores the change, the Enterprise Database Server records two images of the data record in the audit trail:

• An image of the record as it appeared before the change
• An image of the record after the change

Many applications change (and therefore store) the same record multiple times. Consolidating the changes to a given record so that there is only one store operation for that record reduces audit trail requirements for that record by at least 50 percent. Unfortunately, application program changes are not always feasible because of the application architecture, a requirement to interact with other components, or other business requirements. However, system designers and programmers should be aware of the effect of store operations on the Enterprise Database Server audit trail.

Improving the Efficiency of Enterprise Database Server I/O

Suppose you have made application programs as efficient as possible by reducing the amount of data being written to the audit trail. The next most significant factor to address for increasing audit trail throughput is the speed and efficiency with which the Enterprise Database Server can transfer the audit images to the audit media.

In general, the transfer of audit images is most efficient when an audit write is always in progress and no audit blocks are waiting to be written. Ideally, just as one write completes, the next audit block fills to the point where it is ready to be written. In addition, because of the fixed I/O overhead for both the Enterprise Database Server and the disk subsystem, audit trail writes are most efficient when a large amount of data is being written. In other words, efficiency is maximized when the largest amount of audit information is written with the least amount of overhead.

Increasing the Size of Audit Blocks

Increasing the size of audit blocks enables the Enterprise Database Server to transfer more data to disk in a single I/O with the least amount of overhead. The default audit block size for the Enterprise Database Server is 900 words. You can modify the audit block size value in the database DASDL or by using a Visible DBS command.
Sites where audit throughput is an issue typically increase the audit block size significantly. The Enterprise Database Server performs audit I/Os up to four times larger than the specified block size as a means of improving I/O efficiency. In addition, a boxcar effect increases the amount of data carried in each I/O. The boxcar algorithm introduces a very slight delay before writing an audit block that is much smaller than the maximum size allowed. The slight delay gives other programs an opportunity to place additional audit data into the audit block, essentially giving that additional data a free ride to the audit trail.

Increasing the Number of Audit Buffers

Historically, the Enterprise Database Server had exactly two buffers for the audit trail. The Enterprise Database Server used one buffer for the I/O in progress, while the other buffer was being filled by programs making changes to the database. Once the I/O completed, the two buffers switched roles. In some circumstances, it was possible for the second audit buffer to be filled before the I/O in progress had completed. When this occurred, application programs were prevented from making additional changes until the I/O finished and the roles of the buffers were switched.

The maximum number of audit buffers is now 11. The presence of additional buffers does not itself increase audit throughput; the ideal situation still requires that an audit buffer be filled and ready to write at the same instant that the previous I/O completes. However, the additional audit buffers enable the Enterprise Database Server to absorb small delays in audit I/O combined with small bursts in audit activity, thus alleviating application program stoppages. Under high sustained activity combined with restricted I/O throughput, program stoppages are still possible once all 11 audit buffers are filled.

As part of the Enterprise Database Server Extended Edition package, you can increase or decrease the number of audit buffers available for use by the Enterprise Database Server. This capability provides additional flexibility in managing audit throughput for sectioned audit files.

Sectioning Audit Files

The Enterprise Database Server Extended Edition provides a feature that enables you to divide a single logical audit file into multiple physical files (refer to Section 2). When an audit file is sectioned, the Enterprise Database Server performs writes to the audit trail in parallel, whereas previously it performed all audit I/Os serially. Performing writes in parallel greatly increases I/O capacity, which in turn boosts the amount of audit data that can be transferred in a given amount of time and raises the transaction processing potential of the database.

Increasing the Throughput of the Disk Subsystem

The least desirable solution, though a quite viable one, is to supply faster disk drives. The expense of purchasing and installing new hardware is what makes this solution unattractive. However, sectioned audit files require that the audit pack family consist of multiple disk drives.
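Assuming an audit pack family of four physical disks, the block-size and sectioning adjustments discussed above might be combined in a single DASDL update. This is an illustrative sketch only: the option placement and values are hypothetical, so verify the exact audit-option syntax for your release in the DASDL documentation:

```
% Hypothetical audit options -- placement and values are illustrative only.
AUDIT TRAIL
(
    BLOCKSIZE = 3600,   % words; the default audit block size is 900 words
    SECTIONS  = 4       % one section per physical disk in the audit family
);
```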
If you specify sectioned audit files with a single pack audit family, the activity of multiple physical files causes excessive head movement, negating the benefits of sectioned audit files.

Scenario 3: Database Performance Limited by Set Contention

Situation

Most application programs access a data set by way of a spanning set. Because of the random nature of online transaction processing, access by way of a set is the only means to efficiently and quickly locate desired records. For large batch jobs, traversal through the data set in a predefined order can be guaranteed only by accessing the data set through a set in which that order is specified.

Index sequential sets in the Enterprise Database Server provide for extremely fast searching of the set for specific entries. However, because changes being made to the data set can affect the entries in the set, searching (and modifying) the set must be done in such a way as to guarantee the integrity of the set at all times. Integrity is guaranteed by locking portions of the set while a search (or modification) is in progress.

Resource Contention Problem

Database performance can be limited by the number of accesses being performed against a set at any given time:

• Because portions of the set are locked to ensure data integrity, access through the set is occasionally single-threaded.
• As the number of programs accessing a set increases, the probability increases that a program must wait for another program to finish before the former program can access the set.
• As the number of waiting programs increases, so does the overhead within the Enterprise Database Server to manage the waiting tasks.

In the Enterprise Database Server, index sequential sets are implemented as a tree. To minimize resource contention, the Enterprise Database Server has the ability to lock individual branches within the tree structure. However, in order to lock the lower levels of the tree, the upper levels of the tree must, at least for a short time, also be locked. It is the locking at the upper levels of the tree that causes the most resource contention.
Solutions

The solution strategy for this resource contention problem is to eliminate the point of contention or at least minimize the amount of contention. The best solution would be to remove the lock completely and therefore eliminate the point of contention. However, given data integrity requirements, some amount of locking is necessary.

The following solutions, therefore, concentrate on reducing the number of programs that require traversal through a set at any given time:

• Logically separating a data set
• Using multiple sets
• Using Enterprise Database Server Extended Edition sectioned sets

Logically Separating a Data Set

Separating a data set is described in Scenario 1. If each data set has its own collection of spanning sets, contention is reduced to those programs accessing the subset of data contained in that data set. However, the cost to implement this solution is relatively high.

Using Multiple Sets

The same reduction in contention can be obtained by creating multiple sets for a single data set using the same keys. Some application programs can use one set, while other programs use another. This approach minimizes contention for resources among programs inquiring upon the set. However, contention still exists on all sets between programs inquiring upon the set and programs modifying the set (records being added or deleted, or keys being changed).

Using Enterprise Database Server Extended Edition Sectioned Sets

The concept of sectioned sets merges the two previous solutions into one feature. When a set is sectioned, the data set remains a single logical structure while the Enterprise Database Server divides the set into multiple logical sections. You define the set section boundaries by specifying key values. Because the key ranges defining the sections do not change dynamically, a given section can be locked without restricting access to any other section. Thus, a sectioned set has the set independence achieved by logically separating a data set, while maintaining the single data set image, as in the multiple-sets approach. Sectioned sets appear as a single logical set. Because the logical schema of the set is not altered, no application program changes are required.
A simple DASDL update to change the set into a sectioned structure, along with the corresponding reorganization and possibly the recompilation of the application program, is all that is required to implement a sectioned set.
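A sectioned-set declaration might look like the following sketch. The set name, key, and especially the section-bounds notation are schematic, not verified syntax; the DASDL documentation defines the exact form of the bounds specifications:

```
% Hypothetical sketch -- the bounds notation is schematic only.
CUST-SET SET OF CUSTOMER KEY (CUST-ID),
SECTIONS
(
    CUST-ID < 2500000,   % section 1
    CUST-ID < 5000000,   % section 2
    CUST-ID < 7500000    % section 3; higher keys fall in the last section
);
```

Because the bounds are fixed key values, a program searching for CUST-ID 6000000 contends only with programs working in section 3.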

Scenario 4: A General Transaction Processing Environment

Situation

In a general transaction processing environment, what Enterprise Database Server Extended Edition features can be used to draw out the best performance for your system or database?

Solutions

Achieving the best performance in a system or database involves a combination of the solutions mentioned in the previous three scenarios. Any system usually experiences one or two primary performance bottlenecks. Once these bottlenecks are widened, a new bottleneck tends to appear. Therefore, no one simple answer exists; achieving the best possible performance is an iterative task of identifying and relieving each bottleneck as it appears.

A general strategy for using Enterprise Database Server Extended Edition features on a database follows. Because employing Enterprise Database Server Extended Edition features is essentially a database-tuning exercise, keep in mind that the following strategy is generalized and is not applicable in all situations.

Sectioning Audit Files

Sectioning audit files is a logical first step in migrating a database to the Enterprise Database Server Extended Edition because

• The audit trail is a known bottleneck for many databases.
• Sectioning audit files for a database is simple, requiring only a DASDL update.

However, sectioning an audit file is only recommended when the audit pack family consists of multiple physical disks. When specifying the number of sections for the audit file, do not exceed the number of disk packs available. A section count equal to the number of physical disks provides the best chance that the individual sections reside on different physical spindles and that writes to each section do not interfere with each other.
Sectioning Sets

Sectioning a spanning set is expected to provide a noticeable improvement in database throughput when the spanning set is being used by multiple programs, and when records are being added to or deleted from the spanned data set.

The most difficult part of sectioning a set is identifying the boundary values for the set sections. In general, specify boundary values so that an even probability exists that an application program will access a specific set section. Depending on the applications, this might mean that each set section does not necessarily have the same number of entries. However, if application programs are known to access a specific set of records, the boundary specifications can be defined so that each application program accesses its own set section.

Sectioning Data Sets

Sectioning a data set is most useful for handling data set capacity issues. Performance in a transaction processing environment is more likely to be improved by sectioning the spanning sets used most by the application programs than by sectioning the data set itself. However, sectioning a data set can improve performance in some circumstances.

When a data set is sectioned, data record sizes are increased by two words to accommodate the TranStamp Locking and RSN fields. Reorganization of all spanning sets is required because of the doubling of the AA word. If you know beforehand that sectioning of a data set is needed, you might find it advantageous to specify the sectioning of both the data set and its spanning sets during the same DASDL update to reduce the number of required set generations.

Using TranStamp Locking and RSNs

The system provides TranStamp locking and RSNs automatically when a data set is sectioned. Therefore, the recommendation to use these features applies only when you do not plan to section the data set.

TranStamp locking provides the most benefit when programs lock a large number of records. Each data set involved in such a transaction should be declared in the database DASDL with the EXTENDED attribute and, optionally, the SECTIONS option. When a transaction involves both data sets with TranStamp locking and data sets without TranStamp locking, the full benefit of TranStamp locking is not realized.

RSNs provide improved performance during the garbage collection of sets declared as having duplicates but that do not specify DUPLICATES FIRST or DUPLICATES LAST.
RSNs do not provide improved performance during transaction processing. However, because of their benefit during garbage collection, RSNs provide improved reorganization performance and reduced downtime during a reorganization.
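To obtain TranStamp locking and RSNs without sectioning, the text above indicates that declaring the data set with the EXTENDED attribute is sufficient. A minimal sketch, with a hypothetical structure:

```
% Hypothetical fragment -- EXTENDED without SECTIONS enables TranStamp
% locking and RSNs while leaving the data set unsectioned.
ORDERS DATA SET
(
    ORDER-ID  NUMBER (12);
    STATUS    ALPHA (2);
),
EXTENDED;
```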


Section 14
Support Policy and Release Compatibility Overview

Information about the support policy and release compatibility is discussed in the Release and Support Policy Overview and the ClearPath MCP Migration Guide. For details on the following topics, refer to those two documents:

• Deimplementations
• Differences between releases
• Hardware platform migration
• Migration and compatibility
• Returning to a previous release
• Release levels
• Software support periods
• Supplemental Support Packages (SSPs)
• Types of software releases


Section 15
Installation Process Overview

In This Section

This section provides an overview of the following topics:

• Preinstallation preparation
• Data management products
• Keys file
• General installation requirements

Preparing for the Installation Process

Before you create or upgrade your data management environment, consider the following questions:

Am I a new ClearPath MCP user?

ClearPath MCP comes with the Enterprise Database Server preloaded and preconfigured; do not reinstall it on your system. If you ordered any optional data management software for your MCP system, you must use the SI or Installation Center program to install this software. The optional data management products that can be installed are

• Advanced Data Dictionary System (ADDS)
• Database Certification Utility (DBCERTIFICATION)
• DM Interpreter
• Enterprise Database Server Inquiry
• Enterprise Database Server Transaction Processing System (TPS)
• Extended Retrieval with Graphic Output (ERGO)
• On-Line REPORTER III
• Remote Database Backup
• REPORTER III

Am I a new ClearPath MCP user who purchased optional data management software, am I upgrading any optional data management software on my ClearPath MCP system, or am I a new data management user?

New and existing users need to follow different steps to install data management software. Procedures tailored to new ClearPath MCP or new data management users are provided in Section 17, Creating a Data Management Environment. Procedures tailored to the needs of existing data management users are provided in Section 18, Upgrading an ADDS Environment to a New Release Level, and Section 19, Upgrading a Non-ADDS Environment to a New Release Level.

What requirements must I fulfill before I can create my data management environment?

Before you start loading software onto your ClearPath MCP server, you should consider the memory requirements of the different data management products. Because a number of the data management products use a Screen Design Facility Plus (SDF Plus) screen interface, you also should be familiar with the requirements SDF Plus places on your environment.
Section 16, Understanding Data Management Environment Requirements, explains the memory and SDF Plus requirements for a data management environment.

How do I load my data management software onto a ClearPath MCP server?

Use the Simple Installation (SI) or the Installation Center program to load your software onto your ClearPath MCP server. You can use these programs to load your software in one step or in stages. The use of the SI program is fully described in the Simple Installation Operations Guide. Refer to the Installation Center Operations Guide for information about the Installation Center program. The procedures in this guide provide you with the information you need to use the SI or Installation Center program to load your data management software.

After I have loaded my data management software, is there anything else I need to do?

You might need to perform some or all of the following tasks:

• If you are loading ADDS, you might need to create or modify the SL configuration file.
• If you are upgrading your software, you need to upgrade your dictionaries and databases. Upgrading dictionaries and databases to a new release level is described in Section 18, Upgrading an ADDS Environment to a New Release Level, and Section 19, Upgrading a Non-ADDS Environment to a New Release Level.
• If you are a new ADDS user, you must complete the installation of your dictionary. Installing a dictionary for the first time is described in Section 17, Creating a Data Management Environment.

Data Management Products

The SI or Installation Center program aids your software installation process. Using the SI or Installation Center program, you can choose to

• Load all your software (not just the data management products) automatically.
• Load your data management products automatically.
• Load each data management product individually.

For the SI or Installation Center program to identify the products that you want to load, special identifiers known as style identifiers are associated with each product. Also associated with each product is a short name. Refer to the Product Catalog for information about style identifiers. When you use the SI or Installation Center program to load your software, either designate that all the data management software should be loaded at one time or use the individual style identifiers to load your software in stages.
For detailed information about running the SI program, refer to the Simple Installation Operations Guide. For detailed information about running the Installation Center program, refer to the Installation Center Operations Guide.

Keys File

In addition to your software, you receive a keys file. The SI and Installation Center programs use the keys file to determine the software packages you have purchased. The keys file is also used by the Enterprise Database Server and ADDS software at run time, so do not remove the keys file from your MCP server after you have completed the installation process.

To use the Enterprise Database Server Extended Edition features, you must install the Enterprise Database Server software key. To use the Enterprise Database Server Hot Software Update feature, you must install the UPDATE key.

If, by mistake, you remove the keys file from your MCP server and then attempt to run the Enterprise Database Server or ADDS software, a message similar to the following is displayed on your workstation:

You do not have the keys to use this product.

To make, for example, your ADDS software run correctly, perform the following steps:

1. Load the keys file onto your MCP server and then initiate the IK MERGE command.
2. Use the THAW (Thaw Frozen Library) system command to thaw the *SYSTEM/SIM/SUPPORT library.
3. Run the data management software.

General Installation Requirements

Installing data management software involves the following steps:

• Loading software onto your system
• Creating or modifying an SL configuration file (refer to Appendix C, SL (Support Library) System Command Association, for additional information)
• Issuing one or more commands

If you already use data management products, the installation process might also include upgrading existing

• Advanced Data Dictionary System (ADDS) dictionaries
• Enterprise Database Server databases

While there is an order in which some of the installation steps must be performed, you can tailor the process (both the order and the time frame) to suit the needs of your site. Some examples of installation order follow:

• If you use ADDS, you must have your dictionary ready for use before you can define or modify any databases.
• If your site is using several databases, you can choose to upgrade one database to the new release level, use the new software for some period of time, and then decide several days, weeks, or even months later to upgrade your other databases to the new release level.

In addition, if you are using ADDS and your system is running with security administrator status enabled, refer to the Security Administration Guide or to your security administrator for information on the security requirements for the SYSTEM/ADDS/UTILITIES program. The SYSTEM/ADDS/UTILITIES program is provided to aid in installing, updating, and changing ADDS dictionaries.

Products with SDF Plus Screen Interfaces

If you are using a data management product that has an SDF Plus screen interface, you also need to have either the SDF Plus product or the SDF Plus run-time environment available on your system. If you purchase a package that requires the use of the SDF Plus run-time environment but you do not purchase the SDF Plus product, your installation package supplies you with the files necessary to create the SDF Plus run-time environment. However, you cannot develop your own SDF Plus screens or applications unless you purchase the SDF Plus product.

Data management products that use an SDF Plus screen interface are included in the following packages:

• Advanced Data Dictionary System (ADDS)
• Remote Database Backup

The files associated with SDF Plus are loaded onto your MCP server by the SI or Installation Center program either when you load your system software or when you load the Supplemental Support Package (SSP). However, if you choose not to load all the new system software or the SSP before loading your data management software, you must load at least the following SDF Plus files:

*SYSTEM/SDFPLUS/FORMSSUPPORT
*SYSTEM/SDFPLUS/DICTMANAGER
*SYSTEM/SDFPLUS/ARCHIVEMANAGER
*SYSTEM/SDFPLUS/COMMANAGER
*SYSTEM/SDFPLUS/FORMSPROCESSOR

For more information on SDF Plus, refer to Section 16, Understanding Data Management Environment Requirements.

Section 16
Understanding Data Management Environment Requirements

In This Section

This section discusses the following topics:

• Memory requirements
• Planning for VSS-2
• SDF Plus physical requirements
• Enterprise Database Server database physical limitations

Memory Requirements

Memory requirements are provided for the following elements of a data management environment.

Environment Element                         Comments
Basic                                       Excludes ADDS and screen-based products.
Databases                                   Includes database use only (excluding ADDS).
Applications
Screen management                           Includes SDF Plus and screens you develop.
ADDS                                        Includes basic run-time support.
Remote Database Backup                      Includes primary and secondary hosts.
OLE DB
Open Distributed Transaction Processing
(formerly known as Open/OLTP)

Note: The memory requirements given in this section are based on measurements taken on ClearPath MCP servers running various data management applications. The values given should be used as estimates of your memory requirements and not as absolute values. The memory requirements of your data management environment depend on such factors as the size and number of databases you are using, the number of users, the number of applications, and the complexity and number of queries being processed.

The following table provides estimates of the memory requirements for the data management environment, stated as memory for the first database and memory for each additional database.

• Basic Enterprise Database Server only: 240 KB for the first database; 0 KB for each additional database. SYSTEM/ACCESSROUTINES is the only required file.
• ADDS: 5600 KB; not applicable per additional database. Sufficient to provide minimal support for the ADDS dictionary.
• OLE DB: 140 KB for the first database; 56 KB for each additional database. For each application that interfaces to the dictionary, add 500 KB.
• Open Distributed Transaction Processing: 55 KB for the first database; 55 KB for each additional database.
• Remote Database Backup on the primary host: 250 KB for the first database; 200 KB for each additional database. Sufficient for a complete Remote Database Backup environment on the primary host. For each database using the Catchup process, add 90 KB.
• Remote Database Backup on the secondary host: 400 KB for the first database; 320 KB for each additional database. Sufficient for a complete Remote Database Backup environment on the secondary host. For each database using the Catchup process, add 120 KB.

The following table provides estimates of the database memory requirements for each Enterprise Database Server database. The memory is shared by all users of the database.

Number of Classes,
Tables, or Data Sets    Memory Requirements
1                       210 KB
25                      410 KB
50                      640 KB
100                     1100 KB
200                     2050 KB

The memory usage of five different Enterprise Database Server databases was measured to obtain these figures. Each test database consisted of some number of data sets and one restart data set. Each data set contained 10 items and had 2 indexes. The measurements were made immediately after the database had been opened and one retrieval operation had been performed on a data set.

Because the measurements were taken after only one retrieval operation, very few database buffers are allocated; that is, the ALLOWEDCORE memory in use at the time of the measurement was very low. Further database access might result in more memory being used. And, as the database is used, unneeded code and data memory might be overlaid, thereby lowering the memory usage.

The following table provides estimates of the memory requirements for the application environment.

Application Environment Element    First User    Each Additional User
Basic SDF Plus run-time
screen management environment      800 KB        90 to 1100 KB

The additional user memory requirements apply whether the user is running user-written screens or data management environment screens.

Calculating the Total Memory Requirements for Your Site

The following example illustrates memory requirement calculations.

Example 1

For a small Enterprise Database Server environment, assume the following requirements:

- 240 KB for the basic Enterprise Database Server environment
- 410 KB for a database with 25 data sets
- 1100 KB for a database with 100 data sets
- 200 KB for an application, depending upon the particular application

These requirements total approximately 1950 KB.

Planning for VSS-2

Virtual sector size, version 2 (VSS-2) is an enabling technology for attaching 512-byte sector disks to the ClearPath MCP environment. VSS-2 provides improved write performance over that provided by VSS-1 technology. Attaching 512-byte sector disks using VSS-2 rather than VSS-1 technology is attractive for an environment in which write performance is a higher priority than use of the full disk capacity. VSS-2 is provided on CIOM-based host systems for disks attached by way of a SCSI-2W (wide) or Fibre Channel connection.

VSS-2 is a combination of CIOM I/O processor microcode, channel microcode, hardware, and ClearPath MCP code that maintains a 180-byte logical sector image for the application. Two 180-byte logical sectors are stored in each physical 512-byte sector, with the remaining 152 bytes padded to zero by the VSS-2 hardware. When write requests are a multiple of two logical sectors (360 bytes) and use VSS-2 technology, applications benefit from improved write performance when compared to a disk of the same size formatted with VSS-1 technology. However, the usable capacity of a VSS-2 disk is 70 percent of the physical disk capacity.

All new database structures that use the DASDL defaults for structure blocking are automatically created with VSS-2 aligned block sizes. VSS-2 blocking is compatible with previously existing disk technologies. There are no performance penalties when using VSS-2 aligned files or database structures on a non-VSS-2 device.
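The 70 percent capacity figure follows directly from the sector layout described above; a quick arithmetic sketch (illustrative only):

```python
LOGICAL_SECTOR = 180    # bytes visible to the application
PHYSICAL_SECTOR = 512   # bytes in each physical disk sector
SECTORS_PER_PHYSICAL = 2

usable = SECTORS_PER_PHYSICAL * LOGICAL_SECTOR  # 360 bytes carry data
padding = PHYSICAL_SECTOR - usable              # bytes zero-padded by hardware
fraction = usable / PHYSICAL_SECTOR             # usable share of raw capacity

print(padding, round(fraction * 100))  # 152 70
```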
When inspecting block sizes for potential migration to VSS-2 compatible blocking, note that DASDL ensures that no more than 29 words are wasted in any block. Thus, the most efficient block size adjustment could be the addition of a filler to each record rather than a change in the number of records per block. This adjustment is particularly important when there are no explicit declarations for blocking, such as when the DASDL defaults are used: DASDL is not allowed to add data items (fillers), so allowing it to add records instead could make some blocks quite large.

It is highly recommended that schemas first be compiled for syntax with the $ALIGNMENT option set. This compilation obtains an estimate of the number of structures that are not already VSS-2 aligned; many structures might already be aligned. The $ALIGNMENT option directs DASDL to flag non-VSS-2 aligned structures with an informational message, enabling database administrators to easily discover where blocking changes might be desirable.

Note: Because DASDL does not allow more than 29 words of wasted space in a block, the actual block size can differ from the explicit declaration. Thus, if an explicit 60-word aligned declaration wastes more than 29 words, the result might not be optimal (60-word aligned) for VSS-2.

The VSS2OPTIMIZE option can be applied either globally or to individual structures. It applies only to those existing structures that have allowed DASDL to determine their blocking attributes; it has no effect on structures with user-specified blocking. VSS2OPTIMIZE is a one-time option: once it has been used to trigger the reorganization of a structure to VSS-2 blocking, it has no further effect on that structure, and its effects cannot be reversed by removing the option or by setting its value to FALSE. User-specified values always take precedence, so the effects can be altered by providing specific values. Settings specified at the structure level override those at the global level.

The VSSWARN option is the run-time counterpart of the $ALIGNMENT option. It instructs the Accessroutines to provide a notification message whenever a non-VSS-2 aligned structure is first opened on a VSS-2 device. The message is emitted, by means of a WFL instruction block, when the first user opens the structure during any instantiation of the database. The VSSWARN option is valuable in identifying situations in which a VSS-2 disk has been configured into a pack family but the resident structures were inadvertently not modified for VSS-2 blocking.
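The alignment arithmetic can be sketched as follows. This is an informal illustration, not the DASDL algorithm; it assumes the 48-bit (6-byte) MCP word, so a two-logical-sector (360-byte) boundary is 60 words, and the function names are hypothetical:

```python
WORD_BYTES = 6                    # 48-bit MCP word (assumption for this sketch)
ALIGN_WORDS = 360 // WORD_BYTES   # 60-word (two logical sector) boundary
MAX_WASTE_WORDS = 29              # DASDL wastes at most this many words per block

def vss2_waste(record_words: int, records_per_block: int) -> int:
    """Words of padding needed to round the block up to a 60-word boundary."""
    block = record_words * records_per_block
    return (-block) % ALIGN_WORDS

def alignable(record_words: int, records_per_block: int) -> bool:
    """True if padding to VSS-2 alignment stays within the 29-word waste limit;
    otherwise a record filler (or different blocking) would be needed."""
    return vss2_waste(record_words, records_per_block) <= MAX_WASTE_WORDS

print(vss2_waste(47, 6), alignable(47, 6))  # 18 True  (282 -> pad to 300)
print(vss2_waste(45, 6), alignable(45, 6))  # 30 False (270 -> 30 words wasted)
```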
SDF Plus Physical Requirements

The SDF Plus run-time environment is required by the following data management products that use an SDF Plus screen interface:

- Advanced Data Dictionary System (ADDS)
- Remote Database Backup

Before beginning a session with a product that uses an SDF Plus screen interface, you need to ensure that your ClearPath MCP server terminal is properly configured. Refer to the reference manual for your terminal for specific configuration instructions.

It is recommended that the configuration include an adequate data communications buffer size. The minimum buffer size is 1920 bytes. If you are running an application that uses forms containing many fields and literal text strings, such as ADDS, increase the buffer size to at least 2235 bytes.
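The 1920-byte minimum plausibly corresponds to one full screen at the 24-line by 80-character geometry listed among the display attributes later in this section (an observation, not a documented derivation):

```python
ROWS, COLS = 24, 80   # SDF Plus screen geometry from the display attributes
print(ROWS * COLS)    # 1920 -- matches the minimum buffer size in bytes
```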

Note: If the first part of a form is displayed but the last part never appears, or if the terminal does not switch to forms mode, the data communications buffer might be too small.

Depending on the terminal setting, you might need to transmit from the home position on the terminal.

If you are using the Command and Edit (CANDE) message control system (MCS), you can inhibit the display of messages with the RO MESSAGE command. To allow messages to be displayed, use the SO MESSAGE command. Alternatively, you can refresh the screen to discard any messages that interfere with your SDF Plus screen interface session.

SDF Plus works with supported terminals and terminal emulators. For SDF Plus, you might need to change your existing terminal register settings, or you might need to use one of the following tools to change your existing terminal attribute settings:

- Use Network Definition Language II (NDLII) or the Interactive Datacomm Configurator (IDC) for terminals connected by either of the following processors:
  - Network support processors (NSPs)/line support processors (LSPs)
  - Data communications data link processors (DCDLPs)
- Use terminal gateway for terminals connected by communications processor data link processors (CPDLPs).

The following list identifies the configuration display attributes that you should set for ClearPath MCP server terminals on which screens developed with SDF Plus are displayed:

- Each page contains 24 lines.
- Each line contains 80 characters.
- Auto-skip to the next forms field is set.
- The Tab key only moves the cursor to the next tab; it does not write an arrow (>) first.
- The Clear key causes only unprotected fields to be cleared when the terminal is in forms mode.
- The Return key only moves the cursor; it does not write anything to memory.
- The Return key moves the cursor to column 1 of the next line.
The following list identifies the configuration data communications attributes that you must set for ClearPath MCP server terminals on which screens developed with SDF Plus are displayed:

- No action is taken when receiving a start-of-header (SOH) message.
- Interpret data communications DC1 as hold-in-receive mode.
- Interpret data communications LF as a line feed without a carriage return.

- Data communications HT handles tabbing by moving the data communications pointer to the next tab stop; it does not write an arrow (>) first.
- Data communications DC2 interpretation complements the forms mode of the data communications pointer page.
- The terminal is in a receive-ready state after completing the interpretation of a data communications message.
- No line-at-a-time transmission.
- The Return key only moves the cursor; it does not write anything to memory.
- The data communications carriage return only moves the data communications pointer; it does not write anything to memory.
- The data communications carriage return moves the data communications pointer to column 1 of the next line.

Enterprise Database Server Database Physical Limitations

The DASDL compiler imposes the following limitations on Enterprise Database Server databases:

- Enterprise Database Server allows a maximum of 4095 tasks to have the same database open. The actual number of tasks that have any given database open is governed not only by the Enterprise Database Server maximum limit, but also by the number of tasks that are allowed to be running on the system. The system maximum is dependent upon both the style of system and the level of ClearPath MCP.
- Enterprise Database Server allows a maximum of 4095 concurrent open actions for each database structure. A LIMITERROR exception is returned when this limit is exceeded.
- DASDL allows a single database to describe up to 4095 active and deleted structures. Of the 4095 structures, 4000 can be active. Of the total number of active structures, 1000 data sets can be declared. For partitioned structures, the number of structures is multiplied by the value of the OPEN PARTITIONS option, but is independent of the actual number of partitions that currently exist.
- Each database can contain a maximum of approximately 3000 global items, including global data items and disjoint data sets, sets, subsets, Accesses, and remaps.
- Each data set in the database can contain a maximum of approximately 3000 items, including data items and embedded data sets, sets, subsets, Accesses, and remaps.
- Records can contain at most 24,570 bytes.
- Items of type ALPHA can contain at most 4095 bytes.
- Items of type NUMBER can contain at most 23 digits.
- Data sets can contain at most 268,435,456 records.
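The published limits lend themselves to a simple pre-flight check during schema planning. The sketch below is a hypothetical aid (not a Unisys utility); it models only the structure-count limits, with the partitioned-structure multiplication noted above:

```python
MAX_STRUCTURES = 4095  # active + deleted structures per database
MAX_ACTIVE = 4000      # active structures per database
MAX_DATA_SETS = 1000   # declared data sets per database

def check_database(structures: int, active: int, data_sets: int,
                   open_partitions: int = 1) -> list[str]:
    """Return a list of limit violations; an empty list means the design fits.
    For partitioned structures, the structure count is multiplied by the
    value of the OPEN PARTITIONS option."""
    problems = []
    if structures * open_partitions > MAX_STRUCTURES:
        problems.append("more than 4095 structures")
    if active > MAX_ACTIVE:
        problems.append("more than 4000 active structures")
    if data_sets > MAX_DATA_SETS:
        problems.append("more than 1000 data sets")
    return problems

print(check_database(structures=3000, active=2900, data_sets=900))  # []
print(check_database(structures=3000, active=2900, data_sets=900,
                     open_partitions=2))  # ['more than 4095 structures']
```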

Section 17
Creating a Data Management Environment

In This Section

This section describes the following installation tasks:

- Determining your installation environment
- Installation overview
- Loading your software
- Verifying SDF Plus libraries
- Installing the ADDS dictionary
- Configuring Remote Database Backup for use with a nonusercoded database
- Running two versions of Enterprise Database Server
- Running a second version of Remote Database Backup on your system
- Configuring the Open Distributed Transaction Processing product

Determining Your Installation Environment

This section discusses general installation procedures for creating a data management environment for the first time. If you are upgrading your data management environment, refer to the following table to determine the procedures you should follow.

If you are already using ADDS:
Follow the procedures explained in Section 18, Upgrading an ADDS Environment to a New Release Level.

If you are already using Enterprise Database Server, but you are installing ADDS for the first time:
1. Install ADDS by following the procedures in this section.
2. Upgrade your Enterprise Database Server databases by following the procedures in Section 18, Upgrading an ADDS Environment to a New Release Level, or Section 19, Upgrading a Non-ADDS Environment to a New Release Level.

Installation Overview

Installing a data management environment for the first time involves the steps outlined below. Detailed installation instructions follow in this section.

1. Use the SI program or the Installation Center program to load all the required software onto your ClearPath MCP server. For more information, refer to Loading Your Software later in this section.

2. If you purchased any of the products that use screen interfaces, verify that the latest versions of the SDF Plus libraries are installed. The products that use screen interfaces are:
   - Advanced Data Dictionary System (ADDS)
   - Remote Database Backup
   For more information, refer to Verifying SDF Plus Libraries later in this section. If you are not installing any products that use screen interfaces, skip this step.

3. If you are using ADDS, ensure that the SL configuration file identifies the location of your data management software and describes your ADDS usage requirements.

4. If you are using Remote Database Backup with a nonusercoded database, perform the tasks listed under Configuring Remote Database Backup for Use with a Nonusercoded Database later in this section. If you are not using Remote Database Backup, or are not using it with a nonusercoded database, skip this step.

5. Install the ADDS dictionary software by following the instructions provided in Installing the ADDS Dictionary later in this section. If you do not use ADDS or any software that uses ADDS, skip this step.

6. Install the Open Distributed Transaction Processing product by following the instructions provided in Configuring the Open Distributed Transaction Processing Product later in this section. If you do not use the Open Distributed Transaction Processing product, skip this step.

7. Verify the support library definition for your newly installed software. Refer to Appendix C, SL (Support Library) System Command Associations, for additional information.

Loading Your Software

To load your data management software, load the software from the ClearPath MCP media to your MCP server using the Simple Installation (SI) or Installation Center program. For a list of data management packages you can install, refer to the Software Product Catalog. For information on the SI program, refer to the Software Installation Operations Guide. For information about the Installation Center program, refer to the Installation Center Operations Guide.

Note: If you are using a Remote Database Backup system, load the appropriate software on both host systems.

Verifying SDF Plus Libraries

Verify that the latest versions of the SDF Plus libraries have been installed if your products use a screen interface. The data management products that use an SDF Plus screen interface are:

- Advanced Data Dictionary System (ADDS)
- Remote Database Backup

If the latest versions of the SDF Plus libraries have not been installed, use the SI or Installation Center program to install them. For detailed information on the SI program, refer to the Simple Installation Operations Guide. For detailed information about the Installation Center program, refer to the Installation Center Operations Guide. The style identifier associated with the SDF Plus product is SDF.

If old versions of these library files are installed, you must ensure that the files are not in use while you are updating them. Use the LIBS (Library Task Entries) system command to check for in-use files.

Note: It is not sufficient to use the CANDE FILES command to check for in-use files, because any file that is assigned as a system library is listed as in use.

Before you install any data management product that has a screen interface, ensure that none of the form library files are in use. If you install the new form libraries over form libraries that are in use, the new form libraries are overwritten when the existing in-use form library files are closed. As a result, you must install the new form library files again. To avoid this problem, use the CANDE FILES FORMLIB command to check for in-use form library files.

Installing the ADDS Dictionary

To install the ADDS dictionary software, complete the following steps.

Notes:

- To use the SYSTEM/ADDS/UTILITIES program, you must have data dictionary administrator (DDA) privileges. For information about data dictionary security, refer to the ADDS Operations Guide.
- If the SYSTEM/ADDS/UTILITIES program is running with an insufficient security status (for example, it does not have SECADMIN status), the installation process finishes with the following message:

      Dictionary database process requested was successful, but was not SL'ed.

- ADDS runs on all supported ClearPath systems provided that the MAX MIX value is set to 9999 or less and the MAX STACK value is set to 4095 or less.

1. Type INSTALL on the Simple Installation Program (HOME) screen. Transmit the screen.
2. Type the name of the software installation file, and type an X in the box next to the Select Additional Files Categories to Copy/Install field. Transmit the screen. The Generation Data portion of the Category of Files to Select (FILES) screen is displayed.
3. Run the SI program in menu mode and select the Modules, Tables option.
4. Use the ADDSDB/= syntax to select all the ADDSDB files and the file DESCRIPTION/ADDSDB. No other files need to be selected. During the installation process, the file DMSUPPORT/ADDSDB is generated using the SYSTEM/ADDS/UTILITIES program.
5. Run the SYSTEM/ADDS/UTILITIES program. The system displays the Install/Update/Resize/Change an ADDS Dictionary (ADDSDB) screen.
6. Type the name you want to assign to your dictionary in the Enter Data Dictionary Name field. By default, the dictionary is called DATADICTIONARY.

7. Type INSTALL in the Selection field, and then transmit the screen. The system displays a series of screens that enable you to set values for the following items:
   - Accessroutines title
   - Audit trail attributes
   - Control file security guard file name
   - Datarecovery title
   - Dictionary global defaults
   - Dictionary properties
   - Primary audit copy job
   - Primary audit options
   - Primary audit security guard file name
   - Reconstruct title
   - Recovery title
   - Reorganization title
   - Resident buffer limits
   - Secondary audit copy job
   - Secondary audit options
   - Secondary audit security guard file name
8. Tailor the values on the screens to suit your purposes. Any fields to which you do not specifically provide values are assigned default values. For detailed information on the values you can supply and the default values, refer to the DASDL Reference Manual.
9. To review the properties assigned to your dictionary, type Y in the Do You Wish to Review field on the DICTDOINSTALL screen. To bypass reviewing the properties, type N in that field. Transmit the screen. The dictionary installation process is initiated.
10. After the dictionary installation process is complete, type QUIT in the Action field. Transmit the screen to exit the SYSTEM/ADDS/UTILITIES program.
11. Make the DMSUPPORT library and the database control file for each dictionary public if the dictionary is to be accessible from multiple usercodes.
12. Back up the dictionary, following the steps detailed in Section 18, Upgrading an ADDS Environment to a New Release Level.

Configuring Remote Database Backup for Use with a Nonusercoded Database

If you are running Remote Database Backup with a nonusercoded database, you must perform the following tasks:

- Provide a queue for Remote Database Backup-related tasks.
- Facilitate the NFT task under the AFS mode.
- Recompile Remote Database Backup system software.
- Modify the RDB support library.

For details on how to use the Remote Database Backup product, refer to the Remote Database Backup Operations Guide.

Providing a Queue for Remote Database Backup-Related Tasks

For a nonusercoded database, Remote Database Backup automatically creates, at run time, the required WFL job for initiating processes on the secondary host. For RDB Server and all related tasks to run unimpeded, you must ensure that a queue exists to handle the tasks by either:

- Confirming that the attributes of the default queue on the secondary host do not prevent Remote Database Backup processes from running correctly. For example, if the mix limit for the default queue is set to 0 (zero), no Remote Database Backup processes can run on the secondary host.
- Configuring and specifying a queue other than the default queue to handle all Remote Database Backup tasks.

You can use the MARC screens to check queue numbers and the attributes associated with them.

Facilitating the NFT Task Under the AFS Mode

Under the AFS mode, certain Remote Database Backup-related tasks (including the RDB Server, the ACR Server, and Tracker) execute under the database usercode. However, BNA does not allow the NFT task to run without a usercode. To enable the NFT task to run, provide the NFTINFOFILE file with a privileged usercode and password during an RDB support library modification. You can perform the modification on both the primary and secondary hosts, or you can perform it on the primary host and copy the modified RDB support library to the secondary host.
Recompiling Remote Database Backup System Software

If you are recompiling either the SYSTEM/RDBSUPPORT or the SYSTEM/RDBSERVER program, use the MP (Mark Program) system command. Use syntax similar to the following:

    MP *SYSTEM/RDBSUPPORT + TASKING

Modifying the RDB Support Library for a Nonusercoded Database

When you use a nonusercoded database with Remote Database Backup, prepare for its proper functioning by performing the following steps:

1. Establish a privileged usercode (and password) as a remote user on the primary and secondary hosts.
2. Modify the title attribute of the NFTINFOFILE file to the usercode and password established in step 1, using a WFL statement with the following syntax:

       WFL MODIFY *SYSTEM/RDBSUPPORT; FILE NFTINFOFILE (TITLE = <usercode>/<password>);

   The RDB support library must be modified on both hosts, or modified on one host and copied to the other host.
3. (Optional) Specify a queue other than the default queue to handle all Remote Database Backup tasks on the secondary host, using a WFL statement with the following syntax:

       WFL MODIFY *SYSTEM/RDBSUPPORT; TASKSTRING = "<queue number>"

4. Prepare the RDB support library at both hosts using the SL (Support Library) system command. The syntax is as follows:

       SL RDBSUPPORT = *SYSTEM/RDBSUPPORT ON DISK

Running Two Versions of Enterprise Database Server

Audit Reader Support Library

Perform the following procedure for each version of the Enterprise Database Server audit reader support library you want to run. Change the usercode, pack name, and system library name for each version.

1. Copy the Enterprise Database Server software to the desired usercode or pack. In the following examples, the Enterprise Database Server software is copied to the SSRTEST usercode on the SYSTEST pack.
2. Prepare the audit reader support library using the SL (Support Library) system command as follows:

       SL <new name> = (SSRTEST)SYSTEM/DMAUDITLIB ON SYSTEST

3. Modify the code files as shown in the following examples. Supply an alternate system library name for the function name of the libraries in each code file.

       WFL MODIFY (SSRTEST)SYSTEM/COPYAUDIT ON SYSTEST;
         LIBRARY AUDITLIB (FUNCTIONNAME="<new name>");
         LIBRARY AUDIT2LIB (FUNCTIONNAME="<new name>");

       WFL MODIFY (SSRTEST)SYSTEM/PRINTAUDIT ON SYSTEST;
         LIBRARY AUDITLIB (FUNCTIONNAME="<new name>");

       WFL MODIFY (SSRTEST)SYSTEM/ACCESSROUTINES ON SYSTEST;
         LIBRARY AUDITLIB (FUNCTIONNAME="<new name>");

DMUPDATE Support Library

Perform the following procedure for each version of the Enterprise Database Server DMUPDATE support library you want to run. Change the usercode, pack name, and system library name for each version.

1. Copy the Enterprise Database Server software to the desired usercode or pack. In the following examples, the Enterprise Database Server software is copied to the SSRTEST usercode on the SYSTEST pack.
2. Modify the code files as shown in the following examples. Supply an alternate system library name for the function name of the libraries in each code file.

       WFL MODIFY (SSRTEST)SYSTEM/DMUPDATE ON SYSTEST;
         LIBRARY UPDATELIB (FUNCTIONNAME="<new name>");

       WFL MODIFY (SSRTEST)SYSTEM/ACCESSROUTINES ON SYSTEST;
         LIBRARY UPDATELIB (FUNCTIONNAME="<new name>");

3. Prepare the DMUPDATE support library using the SL (Support Library) system command as follows:

       SL <new name> = (SSRTEST)SYSTEM/DMUPDATE ON SYSTEST

Running a Second Version of Remote Database Backup on Your System

Perform the following procedure to run another version of Remote Database Backup on your system:

1. Copy the Enterprise Database Server and Remote Database Backup software to the desired usercode or pack at both the primary and secondary hosts. In the following example, the software is copied to the SSRTEST usercode on the SYSTEST pack at both hosts.
2. Prepare the RDB support library using the SL (Support Library) system command at both hosts as follows:

       SL <new name> = (SSRTEST)SYSTEM/RDBSUPPORT ON SYSTEST

3. After creating the new support library name, run the following WFL job on both the primary and secondary hosts:

       BEGIN JOB MODIFY/RDB;
       MODIFY (SSRTEST)SYSTEM/ACCESSROUTINES ON SYSTEST;
         LIBRARY RDBSUPPORT (LIBACCESS=BYFUNCTION, FUNCTIONNAME="<new name>");
         LIBRARY RDBLIB (LIBACCESS=BYFUNCTION, FUNCTIONNAME="<new name>");
       MODIFY (SSRTEST)SYSTEM/COPYAUDIT ON SYSTEST;
         LIBRARY RDBSUPPORT (LIBACCESS=BYFUNCTION, FUNCTIONNAME="<new name>");
       MODIFY (SSRTEST)SYSTEM/DMRECOVERY ON SYSTEST;
         LIBRARY RDBSUPPORT (LIBACCESS=BYFUNCTION, FUNCTIONNAME="<new name>");
       MODIFY (SSRTEST)SYSTEM/DMCONTROL ON SYSTEST;
         LIBRARY RDBSUPPORT (LIBACCESS=BYFUNCTION, FUNCTIONNAME="<new name>");
       MODIFY (SSRTEST)SYSTEM/DMUTILITY ON SYSTEST;
         LIBRARY RDBSUPPORT (LIBACCESS=BYFUNCTION, FUNCTIONNAME="<new name>");
       MODIFY (SSRTEST)SYSTEM/RDBUTILITY ON SYSTEST;
         LIBRARY RDBSUPPORT (LIBACCESS=BYFUNCTION, FUNCTIONNAME="<new name>");
       MODIFY (SSRTEST)SYSTEM/RDBSERVER ON SYSTEST;
         LIBRARY RDBLIB (LIBACCESS=BYFUNCTION, FUNCTIONNAME="<new name>");
       END JOB.

4. For each of the databases using the second version of Remote Database Backup, run Database Operations Center, bring up the HOSTINFO screen for the database, and enter the location of the new RDB server for the database. Be sure to update this information for both the primary and secondary hosts.

Configuring the Open Distributed Transaction Processing Product

To upgrade your Enterprise Database Server databases and use the Open Distributed Transaction Processing product for the first time, use the following procedure.

1. Upgrade your Enterprise Database Server software and databases using the procedures outlined in either Section 18, Upgrading an ADDS Environment to a New Release Level, or Section 19, Upgrading a Non-ADDS Environment to a New Release Level.
2. For each database you want to access in global transactions, add the following elements to the DASDL source file:
   - The Open Distributed Transaction Processing option
   - The INDEPENDENTTRANS option, if it is not set already
   - The REAPPLYCOMPLETED option, if your DBA determines that it is required for database uses that are not related to Open Distributed Transaction Processing
   - (Optional) A code file title for the RMSUPPORT library
3. Ensure that naming conflicts do not exist between the structures in your DASDL source file and the structures that the system adds to your database to support the Open Distributed Transaction Processing product. The following DASDL source statements are compiled and automatically added to your database description file when you use the Open Distributed Transaction Processing product:

       RX-SIBDESCS DATA SET
         ( RX-TIMESTAMP REAL;
           RX-SIBINX REAL;
           RX-PART REAL;
           RX-SZ REAL;
           RX-SIBDESC GROUP
             (RX-SIBDESC-WORDS REAL OCCURS 100;);
         ) POPULATION=100000;
       RX-SIBDESCS-SET SET OF RX-SIBDESCS KEY (RX-TIMESTAMP, RX-SIBINX, RX-PART);
       RX-GLOBAL-TR DATA SET
         ( RX-TIMESTAMP REAL;
           RX-GLOBAL-ID ALPHA (146);
           RX-STATE REAL INITIALVALUE = 0;
           RX-THREAD REAL;
           RX-SIBINX REAL;
           RX-AFN REAL;
           RX-ADDR REAL;
           RX-SEG REAL;
         );
       RX-GLOBAL-TR-SET SET OF RX-GLOBAL-TR KEY IS RX-GLOBAL-ID;

4. Compile the DASDL source file with the UPDATE option.
5. If the ZIP compiler control option is not set, use the following syntax to recompile the DMSUPPORT library and the RMSUPPORT library from CANDE:

       C DATABASE/DMSUPPORT AS $<dmsupport title>; COMPILER FILE DASDL=DESCRIPTION/<database name>
       C DATABASE/RMSUPPORT AS $<rmsupport title>; COMPILER FILE DASDL=DESCRIPTION/<database name>

6. Run DMUTILITY with the INITIALIZE option, as shown in the following syntax, to initialize the new data structures:

       RUN $SYSTEM/DMUTILITY("DB=<database name> INITIALIZE RX-GLOBAL-TR, RX-SIBDESCS")

   Note: If the database DASDL source file in step 4 includes the INITIALIZENEW option, this step is performed automatically.
7. Back up the updated database, description file, DMSUPPORT library, RMSUPPORT library, and other tailored software.
8. Upgrade your applications to take advantage of the Open Distributed Transaction Processing features.

If you are using models of your database, perform the following steps for each model to make the RMSUPPORT library title unique:

1. Rename the RMSUPPORT library in the DASDL source file before compiling the model.
2. Use the DATABASE/WFL/COMPILEDB software to recompile the RMSUPPORT library for the model.


Section 18
Upgrading an ADDS Environment to a New Release Level

In This Section

This section describes the following tasks associated with upgrading the ADDS environment to a new release level:

- Upgrading ADDS dictionaries
- Upgrading Enterprise Database Server databases
- Backing up Enterprise Database Server databases
- Upgrading Remote Database Backup environments

If you do not use ADDS, or if you have some databases that do not use ADDS, refer to Section 19, Upgrading a Non-ADDS Environment to a New Release Level, for the procedures to upgrade your data management environment or to upgrade those particular databases to a new release level.

Upgrade Overview

In general, all your ClearPath MCP server software does not have to be at the same release level. You can, therefore, upgrade one database to the new level while leaving other databases at the previous level. However, the primary and secondary databases in a Remote Database Backup system must be at the same release level. For information about software compatibility, refer to the ClearPath MCP Migration Guide.

ADDS runs on all supported ClearPath MCP systems provided that the MAX MIX option is set to 9999 or less and the MAX STACK option is set to 4095 or less.

Note: While you can choose to load your software and upgrade dictionaries and databases in stages, the upgrade process for any one dictionary or database must follow the order shown.

Upgrading your data management environment to the new release level consists of the following tasks:

1. Use the SI or Installation Center program to load all the required software onto your ClearPath MCP server.
2. Upgrade your ADDS dictionaries. For more information, refer to Upgrading ADDS Dictionaries later in this section.
3. Install Remote Database Backup by following the instructions provided in Upgrading a Remote Database Backup Environment later in this section. If you are not upgrading Remote Database Backup, skip this step.
4. Upgrade your databases. For more information about upgrading Enterprise Database Server databases, refer to Upgrading Enterprise Database Server Databases later in this section.

Upgrading ADDS Dictionaries

The process for upgrading ADDS dictionaries is integrated with the process for loading the data management software onto your ClearPath MCP server. The following procedure describes the steps to upgrade your dictionaries to a new release level and, at the same time, load your new data management software onto your ClearPath MCP server.
For clarity, the procedure is split into two parts: preparing for the upgrade process and performing the upgrade process. The first part, preparing for the upgrade process, uses your existing software and requires the presence of the keys file for that software. The second part, performing the upgrade process, assumes that you are loading the new software onto a pack called SYSNEW under the asterisk (*) directory.

While you might choose to overwrite your existing ADDS software, it is recommended that you do not overwrite your existing Enterprise Database Server software when you load your new software. Instead, load the Enterprise Database Server software onto a different pack so that you can continue to use the old software until all your existing databases have been upgraded to the new release level.

Preparing to Upgrade Your ADDS Dictionary

Use the following procedure to clean up and back up your existing dictionary.

Note: To use the SYSTEM/ADDS/UTILITIES program, you must have data dictionary administrator (DDA) privileges.

Step 1. Delete any unnecessary data in the dictionary.

Step 2. If your system is running with security administrator status enabled, assign the SYSTEM/ADDS/UTILITIES program the appropriate privileges. For information on the required privileges, refer to the Security Administration Guide or contact your security administrator.

Step 3. If you used the SYSTEM/DMCONTROL command to change the pack families of the Enterprise Database Server code files, perform the following tasks to prevent mismatches during the upgrade of the ADDS dictionary. Mismatches could occur because the control file and the ADDS dictionary now specify different locations for the Enterprise Database Server code files. As long as the family change bit is set, subsequent DASDL updates do not return the family designations to the designations that were in effect before the SYSTEM/DMCONTROL change.

a. Reset the family change bit using the SYSTEM/DMCONTROL command.
b. Run the SYSTEM/ADDS/UTILITIES program to update only the specifications for the Enterprise Database Server code files to their current location.
c. List the control file and ensure that the family change bit has been reset. Use the following syntax:

   RUN $SYSTEM/DMUTILITY ("db = <db name> list <db name>/control")

Step 4. Use library maintenance to copy and compare the description file and the DMSUPPORT library for your dictionary.
Step 5. Use the DMUTILITY program that matches the existing level of your dictionary to offline dump the ADDS database files.

Step 6. Use the COPYAUDIT utility that matches the existing level of your dictionary to copy the current audit files.

Step 7. Retitle or remove the ADDSDB/RUNTIMEDATA file that is currently being used with your dictionary.
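The DMUTILITY invocations referenced in step 3c and step 5 are WFL RUN statements. The following is a hedged sketch only: the dictionary database name ADDSDICT and the tape name ADDSBACK are hypothetical, and the DUMP form shown is a common pattern rather than guaranteed syntax, so verify it against the Enterprise Database Server utilities documentation for your release.

```
% List the control file to confirm the family change bit is reset (step 3c)
RUN $SYSTEM/DMUTILITY ("DB = ADDSDICT LIST ADDSDICT/CONTROL")

% Offline dump of all ADDS database files to tape (step 5, illustrative form)
RUN $SYSTEM/DMUTILITY ("DB = ADDSDICT DUMP = TO ADDSBACK")
```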

Recording the Current Dictionary Properties

Use the following procedure to print your current dictionary properties in preparation for performing the upgrade process. This procedure uses a script feature, included with your database software, that records your screen actions.

Note: To use the SYSTEM/ADDS/UTILITIES program, you must have data dictionary administrator (DDA) privileges.

Step 1. Run the SYSTEM/ADDS/UTILITIES program. The system displays the Install/Update/Resize/Change an ADDS Dictionary (ADDSDB) screen.

Step 2. Type the name of your dictionary in the Enter Data Dictionary Name field, type UPDATE in the Selection field, and transmit the screen. The system displays the Install/Update ADDS Dictionary (OPTIONS) screen.

Step 3. Type *RECORD <script file name> in the Action field, replacing <script file name> with the name you choose for the script file, and transmit the screen. This action starts the recording of your screen actions. The system displays the message Recording Script in the status line.

Step 4. Type Y in the More Properties, Such As (Database Parameters) field on the OPTIONS screen, and transmit the screen.

Step 5. Without changing any screen fields, continue transmitting screens from the home position until the ADDS Dictionary Installation/Update (DICTDOINSTALL) screen is displayed.

Step 6. Type *QUIT in the Action field, and transmit the screen. This action stops the recording of your screen actions. The system displays the message Script Recording Complete in the status line.

Step 7. Type *PRINT <script file name> in the Action field, and transmit to write the captured screens. The system displays the message Script Printed in the status line.

Step 8. Type QUIT in the Action field, and transmit to exit the SYSTEM/ADDS/UTILITIES program without updating the dictionary.

Step 9. Use the SYSTEM/BACKUP program to write the screens.

Performing the Upgrade Process on Your ADDS Dictionary

After recording the properties of your current dictionary and backing up all the files associated with the dictionary, you are ready to upgrade your data management software and ADDS dictionary to the new release level.

Note: Whether you need to upgrade your ADDS dictionary schema depends on the software level to which you are upgrading. To determine if you need to change your ADDS schema, refer to the ClearPath MCP Migration Guide and the Release and Support Policy Overview. If you do not need to perform a schema upgrade, you can skip the following dictionary upgrade steps:

- Loading an initial set of software
- Upgrading the ADDS schema
- Backing up the dictionary

Instead, you can load all of the software at once.

You can upgrade your ADDS dictionaries either with or without fallback capabilities. Select the procedure that best fits the needs of your dictionary. Both procedures are detailed in this section.

Upgrading ADDS Dictionaries with Fallback Capabilities

The following procedure enables you to return to the previous level of the ADDS dictionary without losing any data created at the new release level.

Note: Upgrading ADDS dictionaries without fallback capabilities is discussed later in this section.

Step 1. Back up the dictionary.
Step 2. Load the initial set of software.
Step 3. Upgrade the ADDS schema.
Step 4. Back up the dictionary. It is important that you save this backup if you want to fall back to the previous release.
Step 5. Load the remaining software.
Step 6. Update the SL configuration file.
Step 7. Update the dictionary.
Step 8. Back up the dictionary.

Loading an Initial Set of Software

Use the following procedure to load part of your ADDS software and SDF Plus. The remainder of the new software is loaded onto your ClearPath MCP server later in the ADDS upgrade process.

Note: To use the SYSTEM/ADDS/UTILITIES program, you must have data dictionary administrator (DDA) privileges.

Step 1. Use the Result of 'No File'/Release Version Check (REPORT) menu of the SI program to install only the following files from the IDD style:

- SYSTEM/ADDS/MANAGER/CONFIG
- SYSTEM/ADDS/MANAGER
- SYSTEM/ADDS/UTILITIES
- SYMBOL/ADDS/PROPERTIES
- FORMLIB/ADDSUTIL/1ADDSUTIL
- ADDSDB/RUNTIMEDATA

Notes:
- Leave the rest of the ADDS software and all the other data management software at the old release level. If you load all the new data management software onto the ClearPath MCP server at this time, the installation process fails.
- You can overwrite the old files with the newer versions. If you load the ADDS software items elsewhere, such as on another disk or under a different usercode, you must modify your SL configuration file to include lines that identify the location of the old and the new software.

Step 2. Verify that the new versions of the SDF Plus libraries have been installed. For information about verifying SDF Plus libraries, refer to Verifying SDF Plus Libraries in Section 17.

Step 3. Verify that all your Enterprise Database Server database software is still at the previous release level.

Upgrading the ADDS Schema

Upgrade the ADDS schema and run-time data to the new release level by performing the following steps.

Notes:
- To use the SYSTEM/ADDS/UTILITIES program, you must have data dictionary administrator (DDA) privileges. For information about data dictionary security, refer to the ADDS Operations Guide.
- Whether you need to upgrade your ADDS schema depends on the software level to which you are upgrading. To find out whether you need to change your ADDS schema, refer to the ClearPath MCP Migration Guide.

Step 1. Run the newly installed SYSTEM/ADDS/UTILITIES program. The system displays the Install/Update/Resize/Change an ADDS Dictionary (ADDSDB) screen.

Step 2. Type the name of your dictionary in the Enter Data Dictionary Name field, type CHANGE in the Selection field, and then transmit the screen. The system displays the Change ADDS Dictionary (CHANGE) screen.

Step 3. Type the name of your run-time data file in the Addsdb Update File field, and then transmit the screen. By default, the run-time data file is called ADDSDB/RUNTIMEDATA. This step initiates the ADDS schema and run-time data update processes.

Step 4. After the update processes have finished, type QUIT in the Action field and transmit to exit the SYSTEM/ADDS/UTILITIES program.

Backing Up the Dictionary

Perform the following steps to back up the dictionary:

Step 1. Use library maintenance to copy and compare the description file and the DMSUPPORT library for your dictionary.

Step 2. Use the DMUTILITY program to offline dump the ADDS database files.

Step 3. Use the COPYAUDIT utility to copy the current audit files.
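As an illustrative sketch of the backup steps above, the WFL below uses hypothetical names throughout (dictionary database ADDSDICT, copy tape ADDSBACK, dump tape ADDSDUMP), and the library maintenance and dump forms are common patterns rather than release-specific syntax; confirm them against your WFL and utilities documentation.

```
% Copy and compare the description file and DMSUPPORT library to tape
COPY & COMPARE DESCRIPTION/ADDSDICT, DMSUPPORT/ADDSDICT
     FROM DISK TO ADDSBACK (KIND = TAPE)

% Offline dump of the ADDS database files (illustrative form)
RUN $SYSTEM/DMUTILITY ("DB = ADDSDICT DUMP = TO ADDSDUMP")
```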

Updating the Dictionary

Update your dictionary to the new release level by performing the following steps.

Notes:
- To use the SYSTEM/ADDS/UTILITIES program, you must have data dictionary administrator (DDA) privileges. For information about data dictionary security, refer to the ADDS Operations Guide.
- If the SYSTEM/ADDS/UTILITIES program is running with an insufficient security status (for example, it does not have SECADMIN status), the installation process finishes with the following message:

   Dictionary database process requested was successful, but was not SL'ed.

  Refer to the Security Administration Guide or contact your security administrator for information on assigning the appropriate privileges to the SYSTEM/ADDS/UTILITIES program.

Step 1. Run the new version of the SYSTEM/ADDS/UTILITIES program. The system displays the Install/Update/Resize/Change an ADDS Dictionary (ADDSDB) screen.

Step 2. Type the name of your dictionary in the Enter Data Dictionary Name field, type UPDATE in the Selection field, and transmit the screen. The system displays the Install/Update ADDS Dictionary (OPTIONS) screen.

If you have not inhibited the display of messages by using the CANDE RO MESSAGE command, a series of messages might be displayed on your screen. These messages can be ignored; use the REFRESH action command to remove them from your screen. However, do not ignore any messages displayed in the Messages field at the bottom of your screen.

Note: Watch for warning messages that indicate pursuing a course of action might require you to re-enter all dictionary options. To prepare for such an event, follow the instructions earlier in this section under Recording the Current Dictionary Properties.

Step 3. To use the existing properties for the new version of your dictionary, type N in the More Properties, Such As (Database Parameters) field on the OPTIONS screen, and then transmit the screen. The system displays the ADDS Dictionary Installation/Update (DICTDOINSTALL) screen.

Step 4. To tailor the properties of your dictionary, type Y in the More Properties, Such As (Database Parameters) field on the OPTIONS screen, and then transmit the screen. A series of screens is displayed that enables you to tailor your dictionary properties. Complete the screens as required. The final screen is the DICTDOINSTALL screen.

Note: If you decide to tailor the properties, have available the printout from the procedure Recording the Current Dictionary Properties presented earlier in this section.

Step 5. To review the properties assigned to your dictionary, type Y in the Do You Wish to Review field on the DICTDOINSTALL screen. To bypass reviewing the properties, type N in that field. Transmit the screen. The dictionary update process is initiated. The update process does not complete instantaneously; the time it takes depends on the size of your dictionary and the type of ClearPath MCP server you are using.

Step 6. After the dictionary update process is complete, type QUIT in the Action field and transmit the screen to exit the SYSTEM/ADDS/UTILITIES program.

Step 7. If the dictionary is to be accessed from multiple usercodes, make the DMSUPPORT library and the database control file for each dictionary public.

Step 8. Back up the dictionary, following the steps detailed in Backing Up the Dictionary earlier in this section.

Upgrading ADDS Dictionaries Without Fallback Capabilities

The following procedure eliminates the partial loading of your data management software. However, if you use this procedure, you cannot return to the previous release level of ADDS without losing data created at the new release level. Recovering the files backed up before the upgrade process is the only means of returning to the previous release level.

Step 1. Perform a backup. Use this backup if you need to return to a previous version of a file.

Step 2. Use the SI or Installation Center program to load your data management and SDF Plus products. For more information, refer to Loading Your Software in Section 17.

Step 3. Update the SL configuration file.

Step 4. Update the dictionaries, following the steps detailed in Updating the Dictionary earlier in this section.

Step 5. Back up the database files, following the steps detailed in Backing Up the Dictionary earlier in this section.

Step 6. Update the ADDS schema by performing the steps described in Upgrading the ADDS Schema earlier in this section.

Step 7. Run the new data management software.

Step 8. As a precautionary measure, back up the upgraded database files, following the steps detailed in Backing Up the Dictionary earlier in this section.

Upgrading Enterprise Database Server Databases

You can use the SL configuration file to require the use of ADDS whenever a database generation process occurs. If your Enterprise Database Server database has this ADDS requirement, you must use the following procedure to upgrade your Enterprise Database Server database. If you do not require the use of ADDS, you can use the following procedure or the procedure described in Section 19, Upgrading a Non-ADDS Environment to a New Release Level.

Once you have loaded your new data management software and upgraded your ADDS dictionary, you are ready to upgrade your Enterprise Database Server databases to the new software release. Perform the following steps to upgrade your Enterprise Database Server databases.

Note: In a Remote Database Backup environment, perform all backups on the primary host.

Step 1. Bring down your database. If you are using Remote Database Backup, refer to Upgrading a Remote Database Backup Environment later in this section for instructions on bringing down your database.

Step 2. Back up your database. For instructions, refer to Backing Up an Enterprise Database Server Database later in this section.

Step 3. Run the SYSTEM/IEMANAGER program.

Step 4. Type the name of your dictionary in the Dictionary Name field, type UTIL in the Choice field, and transmit the screen. The system displays the Data Base Utilities (UTIL) screen.

Step 5. Perform the following steps:
a. Type GO SESSION in the Action field, or type SESSION in the Choice field.
b. Transmit the screen. The system displays the Session Options (SESSION) screen.
c. Designate your database type as DMSII.
d. To return to the UTIL screen, type HOME in the Action field, and transmit the screen.

Step 6. Type GENDB in the Choice field, and transmit the screen. The system displays the Data Base Generation Options (DBGO) screen.

Step 7. Provide information about the database you want to upgrade, as follows:
a. Type 3 in the Selection field.
b. Type Y in the Update field.
c. Type Y in the Zip field.
d. Transmit the screen. The database update process is initiated.

Note: You can also use the DBGO screen to ensure that any code file titles designated in the DASDL source file reflect the target software for the restored environment. For more information on designating code file titles, refer to the DASDL Reference Manual.

Step 8. (Optional) To upgrade another database, repeat step 7.

Step 9. After the update process is complete, type QUIT in the Action field, and transmit the screen to exit the SYSTEM/IEMANAGER program.

Step 10. To complete the upgrade process, back up your database. For instructions, refer to Backing Up an Enterprise Database Server Database later in this section.

Backing Up an Enterprise Database Server Database

Use the following procedure to back up an Enterprise Database Server database.

Note: In a Remote Database Backup environment, perform this backup procedure on the primary host.

Step 1. Use library maintenance to copy and compare the description file, the DMSUPPORT library, the RECONSTRUCT code file, and, if necessary, the RMSUPPORT library for your database.

Step 2. Save a current copy of the DASDL source file.

Step 3. Use the DMUTILITY program that matches the existing level of your database to offline dump the Enterprise Database Server database files.

Step 4. Use the COPYAUDIT utility that matches the existing level of your database to copy the current audit files.

Upgrading a Remote Database Backup Environment

If you use Remote Database Backup, you need to upgrade Remote Database Backup in addition to upgrading your database. The following procedure indicates where in the process of upgrading your database you need to perform the tasks for upgrading Remote Database Backup. Perform the tasks in the order presented.

Note: User programs continue to run without being recompiled. There is no need to disable, enable, or reclone the secondary database to complete this software update.

Step 1. Bring down your databases. For mode-specific procedures, see Bringing Down Your Databases later in this section.

Step 2. Back up your database on the primary host. For instructions, refer to Backing Up an Enterprise Database Server Database earlier in this section.

Note: In addition to the steps described under the heading Backing Up an Enterprise Database Server Database, use library maintenance to copy and compare the RDB control file.

Step 3. Load the new data management software on the primary host, using the SI or Installation Center program. For information about the SI program, refer to the Simple Installation Operations Guide. For information about the Installation Center program, refer to the Installation Center Operations Guide.

Step 4. If you are recompiling Remote Database Backup system software, and either the SYSTEM/RDBSUPPORT or the SYSTEM/RDBSERVER program is recompiled, use the MP (Mark Program) system command with syntax similar to the following:

   MP *SYSTEM/RDBSUPPORT + TASKING

Step 5. If you are using Remote Database Backup with nonusercoded databases, see Modifying the RDB Support Library for a Nonusercoded Database later in this section.

Step 6. Upgrade the primary database. If you are upgrading an Enterprise Database Server database, refer to Upgrading Enterprise Database Server Databases earlier in this section.

Step 7. Load the new data management software on the secondary host, using the SI or Installation Center program. For information about the SI program, refer to the Simple Installation Operations Guide. For information about the Installation Center program, refer to the Installation Center Operations Guide.

Step 8. If you are recompiling Remote Database Backup system software, and either the SYSTEM/RDBSUPPORT or the SYSTEM/RDBSERVER program is recompiled, use the MP (Mark Program) system command with syntax similar to the following:

   MP *SYSTEM/RDBSUPPORT + TASKING

Step 9. Upgrade the secondary database. For instructions about upgrading the secondary database, see Upgrading the Secondary Database later in this section.

Step 10. Back up your upgraded database on the primary host. For instructions, refer to Backing Up an Enterprise Database Server Database earlier in this section.

Note: In addition to the steps described under the heading Backing Up an Enterprise Database Server Database, use the DMUTILITY program to perform a database dump and back up regenerated tailored software.

Step 11. Perform one of the following actions:
- If Remote Database Backup is operating under ABW mode, access your databases. You do not need to disable, enable, or reclone the database on the secondary host.
- If Remote Database Backup is operating under AFS, SCA, or NSC mode, close the audit file before allowing the database on the secondary host to open.

Bringing Down Your Databases

The method you use to bring down a database depends on the audit file transmission mode (ABW, AFS, SCA, or NSC) you have set. The following procedures address bringing down your database under ABW mode and under AFS, SCA, or NSC mode. Use the method that matches your situation.

Bringing Down a Database Under ABW Mode

Under ABW mode, follow these steps to prepare for the database upgrade:

Step 1. Make sure that the primary and secondary databases being upgraded are synchronized.

Step 2. Bring down the databases on both hosts normally.

Bringing Down a Database Under AFS, SCA, or NSC Mode

In AFS, SCA, or NSC mode, follow these steps to prepare for the database upgrade:

Step 1. Bring down the database on the primary host normally.

Step 2. Use the Report screen of the Database Operations Center to make sure that all audit files except the current audit file have been transferred to the secondary host and have been applied to the secondary database.

Step 3. Use the COPYAUDIT utility or library maintenance to transfer the current audit file on the primary host to the secondary host.

Step 4. Use the Acknowledge option of the Database Operations Center to acknowledge the transferred audit file on the secondary host.

Step 5. Wait for the audit file to be applied to the secondary database.

Step 6. Bring down the database on the secondary host normally.

Providing a Queue for Remote Database Backup-Related Tasks

The mechanism used to start processes on a secondary host differs for usercoded and nonusercoded databases:

- For usercoded databases, processes are initiated directly under the database usercode.
- For nonusercoded databases, processes are initiated using a WFL job. Remote Database Backup automatically creates, at run time, the required WFL job for initiating processes on the secondary host.

For the RDB server and all related tasks to run unimpeded, you must ensure that a queue exists to handle the tasks by either

- Checking that the attributes of the default queue on the secondary host do not prevent Remote Database Backup processes from running correctly. For example, if the mix limit for the default queue is set to 0 (zero), no Remote Database Backup processes can run on the secondary host.
- Configuring and specifying a queue other than the default queue to handle all Remote Database Backup tasks.

You can use the MARC screens to check queue numbers and the attributes associated with them.

Modifying the RDB Support Library for a Nonusercoded Database

When you use a nonusercoded database with Remote Database Backup, modify the new RDB support library by completing the following steps:

Step 1. Establish a privileged usercode (and password) as a remote user on the primary and secondary hosts.

Step 2. Modify the title attribute of the NFTINFOFILE file to the usercode and password established in step 1, using a WFL statement with the following syntax:

   WFL MODIFY *SYSTEM/RDBSUPPORT; FILE NFTINFOFILE (TITLE = <usercode>/<password>);

The RDB support library must be modified on both hosts.

Step 3. If you use a queue other than the default queue to handle all Remote Database Backup tasks on the secondary host, enter a WFL statement with the following syntax at the secondary host:

   WFL MODIFY *SYSTEM/RDBSUPPORT; TASKSTRING = "<queue number>"

Step 4. Prepare the RDB support library using the SL (Support Library) system command. The syntax is as follows:

   SL RDBSUPPORT = *SYSTEM/RDBSUPPORT ON DISK
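Putting steps 2 through 4 together, the modification might look like the following on the secondary host. The privileged usercode RDBADMIN, the password SECRET, and queue number 15 are all hypothetical values chosen for illustration; substitute the values established for your site.

```
% Point NFTINFOFILE at the privileged remote usercode (run on both hosts)
WFL MODIFY *SYSTEM/RDBSUPPORT; FILE NFTINFOFILE (TITLE = RDBADMIN/SECRET);

% Direct Remote Database Backup tasks to queue 15 (secondary host only)
WFL MODIFY *SYSTEM/RDBSUPPORT; TASKSTRING = "15"

% Reestablish the support library
SL RDBSUPPORT = *SYSTEM/RDBSUPPORT ON DISK
```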

Upgrading the Secondary Database

To upgrade the secondary database, perform the following steps:

Step 1. Use library maintenance to copy the new description file and the regenerated tailored software to the secondary host.

Step 2. Use the DMCONTROL program to update the database control file at the secondary host with the new description file. If the control file resides on a pack with a family name different from the family name specified on the primary host, you must file-equate the following files when you run the DMCONTROL program:

   FILE CF (TITLE = <control file title including family name>);
   FILE CFOLD (TITLE = <control file title including family name>)

If you do not perform the file equation, the DMCONTROL program waits on a REQUIRES PK <pack name> <database>/CONTROL message. If this message is displayed, perform the following actions:
- Use the OF (Optional File) system command to force the DMCONTROL program to proceed without the optional file.
- Use the FA (File Attribute) system command to declare the correct file.

Note: Ignore any warning messages regarding the family change bit. The family change bit was set when the database was last cloned and should remain set.

Step 3. If the audit file transmission mode for the primary host is AFS, SCA, or NSC, close the audit file on the primary host before allowing the secondary database to open.
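A hedged sketch of the step 2 file equation follows, for a hypothetical database PAYROLLDB whose control file resides on family DBPACK2 at the secondary host. The database name, family name, and the DMCONTROL command string are all illustrative; consult the DMCONTROL documentation for the exact command form for your release.

```
% Update the secondary control file, equating CF and CFOLD to the
% control file title on the secondary host's family (illustrative)
RUN $SYSTEM/DMCONTROL ("DB = PAYROLLDB UPDATE");
    FILE CF (TITLE = PAYROLLDB/CONTROL ON DBPACK2);
    FILE CFOLD (TITLE = PAYROLLDB/CONTROL ON DBPACK2)
```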

When the Audit File Transmission Mode Is AFS, SCA, or NSC

If the audit file transmission mode for the primary host is AFS, SCA, or NSC, perform the following steps before allowing the database on the secondary host to open:

Step 1. Open the database for update on the primary host with any application, and then initiate the Visible DBS command AUDIT CLOSE.

Step 2. If you are using SCA or NSC mode, manually transfer to the secondary host any audit files created since the primary database was upgraded, up to and including the audit file closed in the previous step. The audit files are copied automatically if you are using AFS mode.

Step 3. If you are using SCA or NSC mode, use the Acknowledge option of the Database Operations Center to acknowledge the transferred audit file on the secondary host. The transferred audit files are acknowledged automatically if you are using AFS mode.

The secondary database can now be accessed. You do not need to disable, enable, or reclone the database on the secondary host.

Facilitating the NFT Task Under AFS Mode

Under AFS mode, certain Remote Database Backup-related tasks (including the RDB server, the ACR Server, and Tracker) execute under the database usercode. However, BNA does not allow the NFT task to run without a usercode. To enable the NFT task to run, provide the NFTINFOFILE file with a privileged usercode and password during an RDB support library modification. You can perform the modification on both the primary and secondary hosts, or you can perform it on the primary host and copy the modified RDB support library to the secondary host.

Section 19. Upgrading a Non-ADDS Environment to a New Release Level

In This Section

This section describes the following tasks associated with upgrading a non-ADDS environment to a new release level:

- Loading the data management software
- Verifying SDF Plus libraries
- Upgrading Enterprise Database Server databases
- Backing up an Enterprise Database Server database
- Upgrading a Remote Database Backup environment

If you are using the ADDS product, or if you have some databases that use the ADDS environment, refer to Section 18, Upgrading an ADDS Environment to a New Release Level, for the procedures to upgrade your data management environment or to upgrade those particular databases to a new release level.

Upgrade Overview

In general, all your ClearPath MCP server software does not have to be at the same release level. You can, therefore, upgrade one database to the new level while leaving other databases at the previous level. However, the primary and secondary databases in a Remote Database Backup system must be at the same release level. For information about software compatibility, refer to the ClearPath MCP Migration Guide.

Upgrading your data management environment to the new release level consists of the following tasks:

Step 1. Use the SI or Installation Center program to load all the required software onto your ClearPath MCP server. For more information, refer to Loading the Data Management Software later in this section.

Step 2. Install Remote Database Backup by following the instructions provided in Upgrading a Remote Database Backup Environment later in this section. If you are not upgrading Remote Database Backup, skip this step.

Step 3. Upgrade your databases. For more information about upgrading Enterprise Database Server databases, refer to Upgrading Enterprise Database Server Databases later in this section.

Loading the Data Management Software

To load your data management software, complete the following step.

Note: It is assumed that you are loading the new software onto a pack called SYSNEW and that the software releases are the most current ClearPath MCP releases.

Step 1. Load the software from the ClearPath MCP media to your MCP server using the SI or Installation Center program. For information on the SI program, refer to the Simple Installation Operations Guide. For information on the Installation Center program, refer to the Installation Center Operations Guide.

Note: If you are using a Remote Database Backup system, load the appropriate software on both host systems.

Verifying SDF Plus Libraries

Verify that the latest versions of the SDF Plus libraries have been installed if your products use a screen interface. The data management products that use an SDF Plus screen interface are

- Advanced Data Dictionary System (ADDS)
- Remote Database Backup

If the latest versions of the SDF Plus libraries have not been installed, use the SI or Installation Center program to install them. For detailed information on the SI program, refer to the Simple Installation Operations Guide. For detailed information about the Installation Center program, refer to the Installation Center Operations Guide.

The style identifier associated with the SDF Plus product is SDF. Alternatively, you can use the UTL style identifier and request that the following files be installed:

- *SYSTEM/SDFPLUS/ARCHIVEMANAGER
- *SYSTEM/SDFPLUS/COMMANAGER
- *SYSTEM/SDFPLUS/DICTMANAGER
- *SYSTEM/SDFPLUS/FORMSPROCESSOR
- *SYSTEM/SDFPLUS/FORMSSUPPORT

If old versions of these library files are installed, you must ensure that the files are not currently being used while you are updating them. Use the LIBS (Library Task Entries) system command to check for in-use files.

Note: It is not sufficient to use the CANDE FILES command to check for in-use files, because any file that is assigned as a system library is listed as in use.

Before you install any data management product that has a screen interface, ensure that none of the form library files are in use. If you install the new form libraries over form libraries that are in use, the new form libraries are overwritten when the existing in-use form library files are closed. As a result, you must install the new form library files again. To avoid this problem, use the CANDE FILES FORMLIB command to check for in-use form library files.
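The two in-use checks described above can be sketched as follows. These are the bare command names given in this section; options and output formats vary by release, so treat this as a reminder rather than exact syntax.

```
% From an ODT or MARC session: list library task entries and
% look for SDFPLUS code files among the in-use libraries
LIBS

% From a CANDE session: list form library files that are in use
FILES FORMLIB
```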

Upgrading Enterprise Database Server Databases

Once you have loaded your new data management software, you are ready to upgrade your Enterprise Database Server databases to the new software release. Perform the following steps.

Note: In a Remote Database Backup environment, perform all backups on the primary host.

1. Bring down your database. If you are using Remote Database Backup, refer to "Upgrading a Remote Database Backup Environment" later in this section for instructions on bringing down your database.
2. Back up your database. For instructions, refer to "Backing Up an Enterprise Database Server Database" later in this section.
3. Perform a DASDL update, using the new Enterprise Database Server software. In case you need to return to the old release level, do not make any changes to the DASDL description of the database during this update run. Ensure that any code file titles designated in the DASDL source file reflect the target software for the new environment. For more information on designating code file titles, refer to the DASDL Reference Manual.
4. Use the new DMCONTROL program to update the control file.
5. Use the new DATABASE/WFL/COMPILERACR file to update the tailored software.
6. To complete the upgrade process, back up your upgraded database. For instructions, refer to "Backing Up an Enterprise Database Server Database" later in this section.

The upgraded database can now be accessed. User programs do not need to be recompiled.
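Steps 4 and 5 can be sketched as WFL entries. This is an illustrative sketch only: the database name MYDB, the pack name SYSNEW, and the DMCONTROL option string are assumptions, not confirmed syntax; consult the Enterprise Database Server utilities documentation for the precise forms used at your release level.

```
% Illustrative sketch -- MYDB, SYSNEW, and the option string are hypothetical.
% Step 4: update the control file with the new DMCONTROL program.
RUN $SYSTEM/DMCONTROL("UPDATE");
    FILE DASDL(TITLE = DESCRIPTION/MYDB ON SYSNEW);

% Step 5: regenerate the tailored software with the new compile WFL.
START *DATABASE/WFL/COMPILERACR("MYDB")
```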

Backing Up an Enterprise Database Server Database

Perform the following steps to back up an Enterprise Database Server database.

Note: In a Remote Database Backup environment, perform this backup procedure on the primary host.

1. Use library maintenance to copy and compare the description file, the DMSUPPORT library, the RECONSTRUCT code file, and if necessary, the RMSUPPORT library for your database.
2. Save a current copy of the DASDL source file.
3. Use the DMUTILITY program that matches the existing level of your database to perform an offline dump of the Enterprise Database Server database files.
4. Use the COPYAUDIT utility that matches the existing level of your database to copy the current audit files.

Upgrading a Remote Database Backup Environment

If you use Remote Database Backup, you need to upgrade Remote Database Backup in addition to upgrading your database. The following procedure indicates where in the process of upgrading your database you need to perform the tasks for upgrading Remote Database Backup. Refer to the Database Operations Getting Started Guide for information about the Database Operations Center. Perform the tasks in the order presented.

Note: User programs continue to run without being recompiled. There is no need to disable, enable, or reclone the secondary database to complete this software update.

1. Bring down your databases. For mode-specific procedures, see "Bringing Down Your Databases" later in this section.
2. Back up your database on the primary host. For instructions, refer to "Backing Up an Enterprise Database Server Database" earlier in this section.
   Note: In addition to the steps described under the heading "Backing Up an Enterprise Database Server Database," use library maintenance to copy and compare the RDB control file.

3. Load the new data management software on the primary host, using the SI or Installation Center program. For information about the SI program, refer to the Simple Installation Operations Guide. For information about Installation Center, refer to the Installation Center Operations Guide.
4. If you are recompiling Remote Database Backup system software, and either the SYSTEM/RDBSUPPORT or the SYSTEM/RDBSERVER program is recompiled, use the MP (Mark Program) system command with syntax similar to the following:

   MP *SYSTEM/RDBSUPPORT + TASKING

5. If you are using Remote Database Backup with nonusercoded databases, see "Modifying the RDB Support Library for a Nonusercoded Database" later in this section.
6. Upgrade the primary database. If you are upgrading an Enterprise Database Server database, refer to "Upgrading Enterprise Database Server Databases" earlier in this section.
7. Load the new data management software on the secondary host, using the SI or Installation Center program. For information about the SI program, refer to the Simple Installation Operations Guide. For information about the Installation Center program, refer to the Installation Center Operations Guide.
8. If you are recompiling Remote Database Backup system software, and either the SYSTEM/RDBSUPPORT or the SYSTEM/RDBSERVER program is recompiled, use the MP (Mark Program) system command with syntax similar to the following:

   MP *SYSTEM/RDBSUPPORT + TASKING

9. Upgrade the secondary database. For instructions, see "Upgrading the Secondary Database" later in this section.
10. Back up your upgraded database on the primary host. For instructions, refer to "Backing Up an Enterprise Database Server Database" earlier in this section.
    Note: In addition to the steps described under that heading, use the DMUTILITY program to perform a database dump and back up the regenerated tailored software.
11. Perform one of the following actions:
    - If Remote Database Backup is operating under ABW mode, access your databases. You do not need to disable, enable, or reclone the database on the secondary host.
    - If Remote Database Backup is operating under AFS, SCA, or NSC mode, close the audit file before allowing the database on the secondary host to open.
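The library maintenance "copy and compare" operations called for above might look like the following WFL sketch. All file, pack, and tape names here are hypothetical; substitute your own database and volume names.

```
% Illustrative only -- MYDB, DBPACK, and BACKUPTAPE are hypothetical names.
COPY & COMPARE
    DESCRIPTION/MYDB,
    DMSUPPORT/MYDB,
    RECONSTRUCT/MYDB,
    RDB/MYDB/CONTROL
    FROM DBPACK (PACK)
    TO BACKUPTAPE (KIND = TAPE)
```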

Bringing Down Your Databases

The method you use to bring down a database depends on the audit file transmission mode (ABW, AFS, SCA, or NSC) you have set. The following procedures address bringing down your database under ABW mode and under AFS, SCA, or NSC mode.

Bringing Down a Database Under ABW Mode

Under ABW mode, follow these steps to prepare for the database upgrade:

1. Make sure that the primary and secondary databases being upgraded are synchronized.
2. Bring down the databases on both hosts normally.

Bringing Down a Database Under AFS, SCA, or NSC Mode

In AFS, SCA, or NSC mode, follow these steps to prepare for the database upgrade:

1. Bring down the database on the primary host normally.
2. Use the Report screen of the Database Operations Center to make sure that all audit files except the current audit file have been transferred to the secondary host and applied to the secondary database.
3. Use the COPYAUDIT utility or library maintenance to transfer the current audit file on the primary host to the secondary host.
4. Use the Acknowledge option of the Database Operations Center to acknowledge the transferred audit file on the secondary host.
5. Wait for the audit file to be applied to the secondary database.
6. Bring down the database on the secondary host normally.

Providing a Queue for Remote Database Backup-Related Tasks

The mechanism used to start processes on a secondary host differs for usercoded and nonusercoded databases:

- For usercoded databases, processes are initiated directly under the database usercode.
- For nonusercoded databases, processes are initiated using a WFL job. Remote Database Backup automatically creates, at run time, the required WFL job for initiating processes on the secondary host.

For the RDB server and all related tasks to run unimpeded, you must ensure that a queue exists to handle the tasks by doing either of the following:

- Check that the attributes of the default queue on the secondary host do not prevent Remote Database Backup processes from running correctly. For example, if the mix limit for the default queue is set to 0 (zero), no Remote Database Backup processes can run on the secondary host.
- Configure and specify a queue other than the default queue to handle all Remote Database Backup tasks.

You can use the MARC screens to check queue numbers and the attributes associated with them.

Modifying the RDB Support Library for a Nonusercoded Database

When you use a nonusercoded database with Remote Database Backup, modify the new RDB support library by completing the following steps:

1. Establish a privileged usercode (and password) as a remote user on the primary and secondary hosts.
2. Modify the title attribute of the NFTINFOFILE file to the usercode and password established in step 1, using a WFL statement with the following syntax:

   WFL MODIFY *SYSTEM/RDBSUPPORT; FILE NFTINFOFILE (TITLE = <usercode>/<password>);

   The RDB support library must be modified on both hosts.
3. If you use a queue other than the default queue to handle all Remote Database Backup tasks on the secondary host, enter a WFL statement with the following syntax at the secondary host:

   WFL MODIFY *SYSTEM/RDBSUPPORT; TASKSTRING = "<queue number>"

4. Prepare the RDB support library using the SL (Support Library) system command. The syntax is as follows:

   SL RDBSUPPORT = *SYSTEM/RDBSUPPORT ON DISK

Upgrading the Secondary Database

To upgrade the secondary database, perform the following steps:

1. Use library maintenance to copy the new description file and the regenerated tailored software to the secondary host.
2. Use the DMCONTROL program to update the database control file at the secondary host with the new description file. If the control file resides on a pack with a family name different from the family name specified on the primary host, you must file-equate the following files when you run the DMCONTROL program:

   FILE CF(TITLE = <control file title including family name>);
   FILE CFOLD(TITLE = <control file title including family name>)

3. If the audit file transmission mode for the primary host is AFS, SCA, or NSC, close the audit file on the primary host before allowing the secondary database to open.
4. If you do not perform the file equation, the DMCONTROL program waits on a REQUIRES PK <pack name> <database>/CONTROL message. If this message is displayed, perform the following actions:
   - Use the OF (Optional File) system command to force the DMCONTROL program to proceed without the optional file.
   - Use the FA (File Attribute) system command to declare the correct file.

Note: Ignore any warning messages regarding the family change bit. The family change bit was set when the database was last cloned and should remain set.

When the Audit File Transmission Mode Is AFS, SCA, or NSC

If the audit file transmission mode for the primary host is AFS, SCA, or NSC, perform the following steps before allowing the database on the secondary host to open:

1. Open the database for update on the primary host with any application, and then initiate the Visible DBS command AUDIT CLOSE.
2. If you are using SCA or NSC mode, manually transfer to the secondary host any audit files created since the primary database was upgraded, up to and including the audit file closed in the previous step. The audit files are copied automatically if you are using AFS mode.
3. If you are using SCA or NSC mode, use the Acknowledge option of the Database Operations Center to acknowledge the transferred audit files on the secondary host. The transferred audit files are acknowledged automatically if you are using AFS mode.

The secondary database can now be accessed. You do not need to disable, enable, or reclone the database on the secondary host.
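Combining the DMCONTROL update with the file equations shown above, the run on the secondary host might look like the following sketch. The database name MYDB, the pack name SECPACK, and the option string are hypothetical assumptions; only the FILE CF/CFOLD equation form is taken from the text.

```
% Illustrative only -- database name, option string, and family names
% are hypothetical; the file equations follow the form shown above.
RUN $SYSTEM/DMCONTROL("UPDATE");
    FILE CF(TITLE = MYDB/CONTROL ON SECPACK);
    FILE CFOLD(TITLE = MYDB/CONTROL ON SECPACK)
```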

Facilitating the NFT Task Under AFS Mode

Under AFS mode, certain Remote Database Backup-related tasks (including the RDB server, the ACR server, and Tracker) execute under the database usercode. However, BNA does not allow the NFT task to run without a usercode. To enable the NFT task to run, provide the NFTINFOFILE file with a privileged usercode and password during an RDB support library modification. You can perform the modification on both the primary and secondary hosts, or you can perform it on the primary host and copy the modified RDB support library to the secondary host.


Section 20
Returning to a Previous Release Level

In This Section

After upgrading to a new release level of data management software, you might need to return to a previous release level. The procedures in this section explain how to accomplish this task for

- ADDS
- Enterprise Database Server
- Remote Database Backup

Returning to a Previous Release Overview

In general, the process for returning to the previous release level of any database management software involves the following steps:

1. Back up your existing database or dictionary files.
2. Use the SI or Installation Center program to load the software associated with the release level to which you want to return.
3. Ensure your SL configuration file and schema source files point to the reloaded software.
4. Restore the files from the backup medium you created before moving to the later release level.
5. Perform the steps that are specific to ADDS, Enterprise Database Server, or Remote Database Backup.
6. Back up your restored database or dictionary files.

If you are returning more than one dictionary or database to a previous release level, back up all of your existing databases and dictionary files before using the SI or Installation Center program to reload software. If you do not complete all the required backups at the start, you might encounter mismatch problems between the levels of your dictionaries and databases and the levels of such programs as DMUTILITY.

Returning to a Previous Release Level of ADDS

Use the following procedure if you need to return to a previous release level of ADDS.

Note: In the following procedure, it is assumed that you are restoring the files to a pack called SYSOLD.

1. Use library maintenance to copy and compare the description file and the DMSUPPORT library for your ADDS database files.
2. Use the DMUTILITY program to perform an offline dump of the ADDS database files.
3. Use the COPYAUDIT utility to copy the current audit files.
4. Retitle or remove the ADDSDB/RUNTIMEDATA file.
5. Use the SI or Installation Center program to restore the old data management software, including the previous version of the ADDSDB/RUNTIMEDATA file and the SYMBOL/ADDS/PROPERTIES file, if the software is not already present on your ClearPath MCP server.
6. Ensure that the information contained in the SL configuration file is appropriate for the restored release level. If your old SL configuration file is still on your MCP server, you do not need to perform this step.
7. Perform the following steps to restore files from the backup medium you created when you were preparing to upgrade your ADDS dictionary to the new release level:
   a. Remove the ADDSDB/CONTROL file.
   b. Use the DMUTILITY program to restore the previous version of the ADDSDB/CONTROL file.
   c. Use the WFL COPY command to restore the DESCRIPTION/ADDSDB and DMSUPPORT/ADDSDB files.
   To determine which version of backup files to restore, refer to "Upgrading ADDS Dictionaries with Fallback Capabilities" and "Upgrading ADDS Dictionaries Without Fallback Capabilities" in Section 18. You cannot return the dictionary to the previous release level without restoring these backup files. The backup process is described under "Backing Up the Dictionary" in Section 18.
8. Use the SL (Support Library) system command to associate your dictionary with the DMSUPPORT library. For example, if your dictionary is called DATADICTIONARY and you have restored the DMSUPPORT library to the asterisk (*) directory on the SYSOLD pack, use the following command:

   SL DATADICTIONARY = *DMSUPPORT/ADDSDB ON SYSOLD

9. Enter the following command from any terminal. The database usercode and family identify the location of your restored dictionary files.

   RUN $SYSTEM/DMCONTROL("RECOVER UPDATE");
       FILE DASDL (TITLE = (<database usercode>)DESCRIPTION/ADDSDB ON <database family>)

10. After the system accepts the command entered in step 9, enter an AX (Accept) system command in the following format:

    <mix number> AX <audit number>

    The audit number is the number found in the last audit file title. The naming convention for audit files is <database name>/AUDITnnnn. For example, if ADDSDB/AUDIT9 is the last audit file title, enter the number 9 as part of the AX command.
11. Run the SYSTEM/ADDS/UTILITIES program located on the SYSOLD pack. The system displays the Install/Update/Resize/Change an ADDS Dictionary (ADDSDB) screen.
12. Type the name of your dictionary in the Enter Data Dictionary Name field, type CHANGE in the Selection field, and then transmit the screen. Using the CHANGE option does not change the schema; only the run-time data level is restored to the previous release level.
13. After the change process has finished, type QUIT in the Action field to exit the SYSTEM/ADDS/UTILITIES program.
14. Use library maintenance to copy and compare the description file and the DMSUPPORT library for your ADDS database files.
15. Use the DMUTILITY program that matches the existing level of your dictionary to perform an offline dump of the ADDS database files.
16. Use the COPYAUDIT utility that matches the existing level of your dictionary to copy the current audit files.

If you repeat the upgrade and restoration process for a dictionary, the process takes less time when repeated because the schema remains at the higher release level and no reorganization is required. You can use a restored dictionary with older data management software. However, features introduced with the newer data management software cannot be used with the restored dictionary.

Returning to a Previous Release Level of Enterprise Database Server

Use the following procedure to restore an Enterprise Database Server database to a previous release level. This procedure can be used only if the DASDL description of the database was not changed during or after the upgrade process.

Notes:
- If your database is Remote Database Backup capable, refer to the procedure under "Returning to a Previous Release Level of Remote Database Backup" later in this section.
- If your Enterprise Database Server database is defined in ADDS, you need to restore your dictionary to the previous release level before restoring your database. For more information on this task, refer to "Returning to a Previous Release Level of ADDS" earlier in this section.

1. Use library maintenance to copy and compare the description file and the DMSUPPORT library for your Enterprise Database Server database files. If you are using the Open Distributed Transaction Processing product, use library maintenance to also copy and compare the RMSUPPORT library.
2. Save a current copy of the DASDL source file.
3. Use the DMUTILITY program that matches the existing level of your database to perform an offline dump of the Enterprise Database Server database files.
4. Use the COPYAUDIT utility that matches the existing level of your database to copy the current audit files.
5. Use the SI or Installation Center program to restore the previous version of the Enterprise Database Server software.
   Note: If the old software is already on your ClearPath MCP server, do not perform this step.
6. Perform the following steps to restore files from the backup medium you created when you were preparing to upgrade your database to the new release level:
   a. Remove the database control file.
   b. Use library maintenance to restore the DASDL source file, the description file, the DMSUPPORT library, and if necessary, the RMSUPPORT library.
   c. Use the DMUTILITY program to restore the control file.
   d. Ensure that all family locations are correct in the restored control file. If a DMCONTROL FAMILY option was used to change family locations since the time of the dump, or if the dump is from the other host of a Remote Database Backup system where family locations differ, run the DMCONTROL program with the FAMILY option to correct any differences.
7. Ensure that any code file titles designated in the DASDL source file reflect the target software for the restored environment. For more information on designating code file titles, refer to the DASDL Reference Manual.

8. Perform a recover update operation on the restored control file by using the restored DMCONTROL program. During the recover update operation, enter the last audit file number. In addition, if the RECOVERY option is not set to run automatically after the recover update operation, you must start the recovery process manually by running the DMRECOVERY program.
   If the AREAS attribute for a structure is affected as a result of enabling the POPULATIONINCR option while operating at the later release level, you must perform a reorganization before making the database available to application programs. In particular, the AREAS attribute must be set to values that are greater than or equal to the number of areas actually in use by the respective structure.
9. Use library maintenance to copy and compare the description file, the DMSUPPORT library, and if necessary, the RMSUPPORT library for your database.
10. Save a current copy of the DASDL source file.
11. Use the DMUTILITY program that matches the level of your restored database to perform an offline dump of the Enterprise Database Server database files.
12. Use the COPYAUDIT utility that matches the level of your restored database to copy the current audit files.

User programs continue to run normally. Programs compiled with the new compilers run with the restored Accessroutines code file if the update level in the description file was not changed by the DASDL update. The update level changes only when the description of the database changes.

Returning to a Previous Release Level of Remote Database Backup

Use the following procedure to restore a Remote Database Backup environment and its databases to a previous release level:

1. Bring down all databases running on the current level of software before starting the restoration process. You need to bring down normally all databases on both the primary and secondary hosts.
2. On the primary host, use library maintenance to back up all affected database systems by copying and comparing the following files and software:
   - DASDL source file
   - Description file
   - Tailored software
   - RDB control file
   - Remote Database Backup software
   - DBMS software
3. Use the DMUTILITY program to perform an offline dump of each database.
4. Use the COPYAUDIT utility to copy the current audit files.

5. Use the SI or Installation Center program to restore the previous level of software to the appropriate packs on both systems.
6. Use the SL (Support Library) system command to associate the RDBSUPPORT library with the newly loaded SYSTEM/RDBSUPPORT code file.
7. On the primary host, perform the following steps:
   a. Remove the database control file.
   b. Use library maintenance to restore the previously backed-up DASDL source file, the description file, and the DMSUPPORT library.
   c. Use the DMUTILITY program to restore the control file that was created when you were preparing to upgrade your database to the new level.
   d. Use the DMCONTROL program to perform a recover update operation on the restored control file. During the recover update operation, enter the last audit file number. If the RECOVERY option is not set to run automatically after the recover update operation, you must start the recovery process manually either by opening the database or by running the DMRECOVERY program. The DMRECOVERY program waits with the following message:

      Audit update level is <n+1>. Recovery update level is <n>, OK to continue or DS to terminate.

      After the message appears, enter <mix number> AX OK
   e. Copy the description file and the DMSUPPORT library to the secondary host.
8. Use the DMUTILITY program to perform an offline dump of the entire database.
9. Use the COPYAUDIT utility to copy the current audit files.
10. Enable and reclone the database. This step is necessary because the remote capability is disabled during the recover update operation. When recloning, use the dump taken in step 8.
11. On the primary host, use library maintenance to back up all affected database systems. Copy and compare the RDB control file, the DASDL source file, the description file, and any tailored software.

User programs do not normally need to be recompiled. If the update level of the restored database is different from the update level of the database running the new software, then you need to recompile the user programs.


Section 21
Installing Interim Corrections Without Closing the Database

In This Section

This section describes the tasks associated with the Hot Software Update process for the Enterprise Database Server software components. You use the DMUPDATE utility to install an Enterprise Database Server Interim Correction (IC) or Supplemental Support Package (SSP) without closing the database.

The following tasks are discussed in this section:

- Understanding software updates with the DMUPDATE utility
- Planning for a software update with the DMUPDATE utility
- Customizing your software update using the DMUPDATE configuration file
- Understanding the software update types
- Performing a software update using the DMUPDATE utility

Note: Refer to the IC cover letter for additional information about performing a software update with the DMUPDATE utility.

Understanding Software Updates with the DMUPDATE Utility

Performing a software update with the DMUPDATE utility provides enhanced database availability during installation of an Enterprise Database Server Interim Correction (IC) or a Supplemental Support Package (SSP). You can update your Enterprise Database Server software from an IC or SSP without bringing down your databases or applications.

A software update with the DMUPDATE utility enables you to

- Run a database environment that does not need to be disabled or discontinued during software installation.
- Experience minimal performance interruptions to an application environment during installation.
- Use programmatic guidelines for user applications to participate in the software update process if the programs link directly to server libraries associated with an active database, such as DMSUPPORT and RDBSUPPORT.

In general, all of the compatibility requirements stated for a general software upgrade apply to using the software update process to update your database software and running database systems. Refer to the ClearPath MCP Migration Guide for additional information about software compatibility.

In addition, a particular software update might be needed when a problem affecting the interface has been corrected within the DMUPDATE utility and related software. This type of update to the system software might result in an inability to perform some or all of an update; in some cases, only the controlled update type is available.

A software update with the DMUPDATE utility works with Accessroutines code file titles, not with specific databases. Refer to "Understanding the Software Update Types" later in the section.

Types of Software Updates

When performing a software update with the DMUPDATE utility, you can choose from the following three update types:

- Controlled: Enables you to use your existing installation procedures while the database software keeps your database closed for you during the software update process.
- Assisted: Blends your current installation process with the software update environment provided by the DMUPDATE utility. This update type enables you to initiate your current customized installation and update jobs. After these jobs complete, your database software is swapped to the newly installed software for your open databases. Your databases and applications remain open and running during the entire process.
- Automatic: Enables a fully automated software installation, compilation, and update of Enterprise Database Server ICs. After these jobs complete, your database software is swapped to the newly installed software for your open databases. Your databases and applications remain open and running during the entire process.

Planning for a Software Update with the DMUPDATE Utility

It is recommended that you schedule a software update during a nonpeak transaction period. An update affects applications for all databases that share the Accessroutines code file that is specified as participating in the update. Applications might reference many databases with different daily transaction profiles, all sharing the same Accessroutines code file.

During certain phases of an assisted or automatic software update using the DMUPDATE utility, performance of data management statements is slightly slower. For example, data management statements such as FIND and CREATE take longer to execute the database procedure than the statements normally require. Once the database procedure associated with the data management statement is invoked and executed, there is no performance difference.

When a software update with the DMUPDATE utility is ready to finish, all transaction activity is captured and stopped while implicit close and open operations are performed. The amount of time these operations take depends on the transactional activity of the database during the final phase of the software update. Refer to the printer backup file generated by running the DMUPDATE utility to evaluate and track the amount of time it takes to perform a software update for a specific database or code file.

Software Components for a Software Update

The following components are necessary to run a software update with the DMUPDATE utility:

- *SYSTEM/DMUPDATE
  This code file is the software update utility for the Enterprise Database Server components. This utility uses a configuration file as input that drives the software update process.

- SL DMUPDATESUPPORT = *SYSTEM/DMUPDATE ON DISK : ONEONLY
  This system library acts as a gatekeeper for all database open and close operations. It is also the provider of services during a software update. This system library code file is the same code file as the DMUPDATE utility.

- *TEMPLATE/DMUPDATE/CONFIG
  This text file is a template for creating a customized set of software update process directives. This file must be copied under another name and modified to describe your Enterprise Database Server environment.

Caution: If the DMUPDATESUPPORT library is not defined as a system library, the process attempting to access the library waits on an AX command.

Note: The following code files need to be WFL-modified to point to the DMUPDATESUPPORT library:

*SYSTEM/ACCESSROUTINES
*SYSTEM/DMUPDATE
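The setup described above can be sketched as follows. The SL statement is taken verbatim from the component list; the MODIFY statement shows only one plausible form of the library equation, and its exact attribute syntax is an assumption; check the IC cover letter for the form required at your release level.

```
% Gatekeeper system library (from the component list above):
SL DMUPDATESUPPORT = *SYSTEM/DMUPDATE ON DISK : ONEONLY

% Illustrative only -- the library-equation syntax below is assumed:
WFL MODIFY *SYSTEM/ACCESSROUTINES;
    LIBRARY DMUPDATESUPPORT(FUNCTIONNAME = "DMUPDATESUPPORT.")
```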

Elements of the Configuration File

The template configuration file contains examples and explanations describing how to use the different types of software updates. You must copy the template configuration file and save it as a newly named file. You can then edit the file to include the configuration and other information needed to update your database environment. The template configuration file can be found under *TEMPLATE/DMUPDATE/CONFIG and comprises the following elements.

Comments
Identified with a % (percent) sign. A comment is anything that appears on a line following a %.

Tags
Represented as either a single tag, such as [<tag value>], or as a set of tags, such as [<tag value>] [<tag value>]. The tags define the start of a DMUPDATE definition area, which includes the directives that control a software update. At least one tagged area is required in a configuration file. Use the <tag value> parameter to select a definition area during a software update. The area following a tag is selected by providing the tag value as the input parameter when you run the DMUPDATE utility to perform the software update activities. For example, to invoke the DMUPDATE definition area that begins with the tag [DEFAULT] in the configuration file MYTESTCONFIG, your RUN command would look as follows:

RUN $SYSTEM/DMUPDATE("DEFAULT"); FILE CONFIG = MYTESTCONFIG;

Directive entries
Control a software update. The list of directives is enclosed within a block that starts and ends with curly braces ({ }).

Directive Entries

The configuration file contains the following directive entries:

UPDATELIB Entry
The UPDATELIB entry defines the FUNCTIONNAME attribute of the SL (Support Library) DMUPDATESUPPORT.

UPDATETYPE Entry
The UPDATETYPE entry defines the type of software update that is to be performed: controlled, assisted, or automatic.
These software update types are discussed later in this section.

ACR Entry

The ACR entry is a list of the titles of your database Accessroutines code files. Update this list to reflect the titles of the Accessroutines code files used by each of the databases that are to be updated by the process.

Note: The software update activity occurs for all databases sharing the same Accessroutines code file titles. Each title needs to be listed only once, even though many databases might share a title.

WFL Entry

The WFL entry is a list of WFL file titles to be initiated. These WFL files define your software installation requirements for the automatic and assisted software update types, as well as your database-tailored software update process for your database DMSUPPORT and RECONSTRUCT code files.

You cannot initiate DMCONTROL during an assisted or automatic software update, and you should not attempt to perform a DASDL update during an assisted or automatic software update. If you attempt a DMCONTROL update for a database that is open during an assisted or automatic update, DMCONTROL waits for exclusive use of the database control file.

Notes: The handling of DMCONTROL and DASDL updates is an important change from your previous IC installation process. If your process for installing an IC update requires that you perform a DASDL update or a DMCONTROL update for title changes, you must perform the IC update using a controlled software update, or you can install the software by the standard method without the software update process.

Each WFL entry in the list is initiated one at a time, in the order in which the entries appear in the list. The next WFL entry in the list is started as soon as the currently running WFL job completes. Refer to Limitations and Considerations later in this section for additional information.
If any WFL file in the list includes START statements that initiate other WFL files, and those started WFL files are still running when the initiating WFL job completes, execution of the next WFL file in the list might overlap the WFL jobs started by previously initiated entries. Refer to the WFL Reference Manual for additional information about WFL jobs.

COPYFROM and COPYTO Entries

The COPYFROM and COPYTO entries are used with an automatic software update as input parameters to the Simple Installation integration. Refer to Understanding the Software Update Types later in this section for additional information about these entries.
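Putting these configuration elements together, a minimal area for a controlled update might look like the following sketch. The tag name and ACR title are illustrative assumptions only; see the template file *TEMPLATE/DMUPDATE/CONFIG for the authoritative layout.

```
% Hypothetical configuration area; tag and titles are examples only
[MYTAG]
{
  UPDATETYPE CONTROLLED;       % controlled, assisted, or automatic
  UPDATELIB DMUPDATESUPPORT;   % FUNCTIONNAME of the SLed support library
  ACR
  {
    *SYSTEM/ACCESSROUTINES ON DISK;
  }
}
```

An assisted or automatic area would add a WFL block (and, for automatic, COPYFROM and COPYTO entries) as shown in the examples later in this section.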

Customizing Your Software Update Using the DMUPDATE Configuration File

The DMUPDATE configuration file is a collection of tagged areas that include directives used during the software update process. A valid file includes at least one tag and one area of directives, but it can include many tags at the beginning of each area and many other tagged areas.

Note: The tags do not represent keywords. Their purpose is to be matched to the input parameter when a software update is initiated.

The first tagged area in a customized DMUPDATE configuration file is important because it can be used as the default area. The default area is invoked when you do not provide an input parameter while running the DMUPDATE utility. In the template configuration file, the tag [DEFAULT] is provided as the first tag to further identify the first area as the default tagged area, but this tag is not required. Using the sample tags in the template configuration, the following RUN statements cause the same software update area to be invoked:

RUN $SYSTEM/DMUPDATE; FILE CONFIG = MYTESTCONFIG;

RUN $SYSTEM/DMUPDATE("DEFAULT"); FILE CONFIG = MYTESTCONFIG;

The template configuration file begins with a comment header. It is followed by examples of the different software update types (controlled, assisted, and automatic) that are available for use during an update.

Comment Header

% This configuration file is specifically for
% Enterprise Database Server IC and SSP installations.
% Copy this template file as a local file that you can
% alter to represent your specific installation requirements.
% The tags within the [tag] areas will be used when
% you actually use the customized configuration file.
% The tags can be nearly anything that you would like to use
% for identifying which set of configuration options you will
% use when you use the configuration file. If you use the configuration
% file without specifying a tag to search for, the first
% configuration area will be used. In this template, it is
% identified with the tag [DEFAULT] as a combination tag and
% comment to signify that it will be used by default.
% The tags are matched with the input string when used:
%   RUN $SYSTEM/DMUPDATE("DEFAULT");
%   FILE CONFIG = MYCUSTOMCONFIG;
% If you are using the automatic update type, the default
% location of the SYSTEM/SIMPLEINSTALL file should be accessible
% on the system halt/load pack when SYSTEM/DMUPDATE is run.
%   RUN $SYSTEM/DMUPDATE ("AUTOMATIC");
%   FILE CONFIG = MYAUTOCONFIG;
%   FILE INSTALL_DATA = *INSTALLDATAFILE/SYSTEM/DMSII/CD ON DISK;

Each area of the configuration file can have many possible tag choices. Any one of the identifying tags can be used to select the area during a software update.

Understanding the Software Update Types

When performing a software update with the DMUPDATE utility, you can choose from the following three software update types: controlled, assisted, and automatic.

Controlled Software Update

A controlled software update provides an integration path from your current process. This software update controls access to your databases in coordination with your established installation process. A controlled software update marks all of the databases using the specified ACR code files as participating in a software update, and then waits for the participating databases to close. The DMUPDATE support library keeps the databases closed by not allowing any new database open operations for databases using the specified ACR code files. When all of the participating databases are closed, the DMUPDATE utility displays a message indicating that it is time to initiate installation of the new software, followed by an AX (Accept) system command.

Once the DMUPDATE utility is waiting for the AX response, the system is ready for

- Completion of your manual software installation
- Completion of your system library commands
- Either a DASDL or DMCONTROL update, or an update of your tailored database control files

Since the databases are not open during this time, you can perform DASDL updates and DMCONTROL updates.

Note: While the DMUPDATE utility is waiting for an AX response, any database open operation that involves one of the participating ACR code files specified in the configuration file receives an OPENERROR113 error result.

When all installation and update processes complete, respond to the waiting AX entry. The response signals to the waiting DMUPDATE task that all installation and update tasks are complete and that it is safe to allow database open operations.
Once the DMUPDATE utility receives the AX response, successful database open operations resume for the participating ACR code files specified in the configuration file.
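As an illustrative sketch of the operator flow just described, a controlled update might proceed as follows at the console. The mix number and the bracketed commentary are assumptions; the actual prompt text comes from the utility.

```
RUN $SYSTEM/DMUPDATE("CONTROLLED"); FILE CONFIG = MYDBCONFIG;
  % ... participating databases close; DMUPDATE displays its
  % "ready for installation" message and waits on an AX entry ...
  % <perform your manual installation, system library commands,
  %  and any DASDL or DMCONTROL updates>
4321 AX   % respond to the waiting entry; database opens resume
```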

Examples

Example 1

This example selects the [CONTROLLED] area of the MYDBCONFIG configuration file.

RUN $SYSTEM/DMUPDATE("CONTROLLED"); FILE CONFIG = MYDBCONFIG;

Example 2

This example selects the first area of the configuration file as the default area because it is run without a parameter.

RUN $SYSTEM/DMUPDATE; FILE CONFIG = MYDBCONFIG;

Example 3

The following example shows the configuration file for a controlled software update:

[DEFAULT] [IC] [CONTROLLED] [ic]
{
  UPDATETYPE CONTROLLED;      %<CONTROLLED/ASSISTED/AUTOMATIC>;
  UPDATELIB DMUPDATESUPPORT;  % The SLed IC Update Library
  ACR
  { % fully specified ACR titles, including USERCODE AND PACK
    *SYSTEM/ACCESSROUTINES ON DISK;
    *SYSTEM/ACCESSROUTINES/FULL ON DISK;
  }
  % Wait on AX for completion of user initiated software installation
  % and database update activities
}

Assisted Software Update

An assisted software update blends current installation processes with the software update environment provided by the DMUPDATE utility. Your databases stay open and your applications continue to run during the entire process. An assisted software update marks all of the databases using the specified ACR code files as participating in a software update. Participation includes an automated swap from the running ACR code file to the newly installed ACR code file. An assisted software update does not cause or require the participating databases to close.

An assisted software update goes through the following phases:

1. Marks the databases for an automated swap.
2. Initiates the user-specified WFL jobs to install the new software and update the database-tailored code files.
3. Performs the automated database swap to the new database software.

The automated database swap in the third phase represents an implicit close and reopen of each database that was marked in the first phase. At the end of the automated database swap, the open databases are running the newly installed software.

Examples

Example 1

This example runs an assisted software update:

RUN $SYSTEM/DMUPDATE("ASSISTED"); FILE CONFIG = MYDBCONFIG;

Example 2

The following example shows the configuration file for an assisted software update:

[ASSISTED][assisted]
{
  UPDATETYPE ASSISTED;        %<CONTROLLED ASSISTED AUTOMATIC>;
  UPDATELIB DMUPDATESUPPORT;  % The SLed IC Update Library
  ACR
  { % fully specified ACR titles, including USERCODE AND PACK
    *SYSTEM/ACCESSROUTINES ON DISK;
    *SYSTEM/ACCESSROUTINES/FULL ON DISK;
  }
  WFL
  { % WFLs to be started for installation and database update
    % (ADMIN)WFL/COPY/DMSII/SOFTWARE ON DBPACK;
    % (ADMIN)WFL/COMPILE/ALL/DMSUPPORT ON DBPACK;
    % (PROD)WFL/SIGNAL/MONITOR/APPS ON DBPACK("UPDATE");
    % (PROD)WFL/START/MONITOR/APPS ON DBPACK;
    % (ADMIN)WFL/DISPLAY ON DBPACK("SYSTEM SOFTWARE UPDATED");
  }
}

Automatic Software Update

An automatic software update is an automated installation, compilation, and update process. It requires that the usercode running the DMUPDATE utility have all of the privileges required for running the installation using the Simple Installation program.

An automatic software update is similar to an assisted software update, except that it integrates with the Simple Installation program for installing the IC or SSP. An automatic software update customizes input to the Simple Installation program using the COPYFROM and COPYTO entries in the configuration file, and an INSTALLDATAFILE file, to build the installation copy WFL file. The DMUPDATE utility then executes the WFL source during the update process. When the DMUPDATE process completes, any specified user WFL files are initiated. These user WFL files recompile the database-tailored software before the ACR code file swap initiates. After all user WFL files complete, the utility performs the swap automatically.

The COPYTO entry in the configuration file is required. This entry specifies the usercode and destination pack to which the software is copied. The COPYFROM entry is optional. This entry specifies the usercode and source pack if the software has already been copied to a disk pack. If the COPYFROM entry is not present, the default media contained in the INSTALLDATAFILE file is assumed.

Examples

Example 1

This example uses the $SYSTEM/DMUPDATE command to run an automatic software update:

RUN $SYSTEM/DMUPDATE ("AUTOMATIC"); FILE CONFIG = MYDBCONFIG; FILE INSTALL_DATA = *INSTALLDATAFILE/SYSTEM/DMSII/CD ON DISK;

Example 2

This example runs an automatic software update specifying the Simple Installation code file location. If *SYSTEM/SIMPLEINSTALL is not located on the system halt/load pack, you can equate it as well as the specific INSTALLDATAFILE file that is to be used.
RUN $SYSTEM/DMUPDATE("AUTOMATIC");
FILE CONFIG = MYDBCONFIG;
FILE INSTALL_DATA = *INSTALLDATAFILE/SYSTEM/DMSII/CD ON SYSPACK;
FILE SIMPLEINSTALL_FILE = *SYSTEM/SIMPLEINSTALL ON SYSPACK;

Example 3

The following example shows the configuration file for the automatic software update:

[automatic][automatic]
{
  UPDATETYPE AUTOMATIC;       %<CONTROLLED ASSISTED AUTOMATIC>;
  UPDATELIB DMUPDATESUPPORT;  % The SLed IC Update Library
  ACR
  { % fully specified ACR titles, including USERCODE and PACK
    *SYSTEM/ACCESSROUTINES ON DISK;
    *SYSTEM/ACCESSROUTINES/FULL ON DISK;
  }
  %COPYFROM and COPYTO are used by Simple Install
  COPYFROM (NEWREL) ON RELPACK;  %where the files are located
  COPYTO *ON DISK;               %where to install them
  WFL
  { %WFLs to be started after auto installation, pre-swap
    % (ADMIN)WFL/COMPILE/ALL/DMSUPPORT ON DBPACK;
    % (PROD)WFL/SIGNAL/MONITOR/APPS ON DBPACK("UPDATE");
    % (PROD)WFL/START/MONITOR/APPS ON DBPACK;
    % (ADMIN)WFL/DISPLAY ON DBPACK("SYSTEM SOFTWARE UPDATED");
  }
}

Security Requirements

An automatic software update requires that the usercode running the DMUPDATE utility have all of the privileges required for performing the installation by using the Simple Installation program. The Simple Installation privileges include running the DMUPDATE utility from a privileged usercode (PU) with SYSTEMUSER status. If a security administrator has been defined on your system, the usercode must also have the security administrator privilege. If the DMUPDATE utility is run from an operator display terminal (ODT), you must modify the SYSTEM/DMUPDATE code file to have PU status. For additional information, refer to the Simple Installation Guide.

Files Used During a Software Update Using the DMUPDATE Utility

The following files are defined with the DMUPDATE utility.

CONFIG
Your configuration file drives the software update process. A template file called TEMPLATE/DMUPDATE/CONFIG is included with your system software release. Copy this file and alter it to meet your software update needs. It should be equated as follows:

RUN $SYSTEM/DMUPDATE; FILE CONFIG = <your customized configuration file title>;

INSTALL_DATA
Use the INSTALL_DATA file during an automatic software update to specify the exact INSTALLDATAFILE file to be used for the software installation. An INSTALLDATAFILE file is included with every software distribution, and it is important to use the correct file. File-equate this file as follows. (INSTALLPACK is a pack on which a particular IC has been temporarily loaded.)

RUN $SYSTEM/DMUPDATE; FILE INSTALL_DATA = *INSTALLDATAFILE/SYSTEM/DMSII/CD ON INSTALLPACK;

LP
The LP file is the output printer backup file created during a software update.

SIMPLEINSTALL_FILE
Use the SIMPLEINSTALL_FILE file during an automatic software update to specify the location of the SYSTEM/SIMPLEINSTALL program if it is not the standard Simple Installation code file that is present as *SYSTEM/SIMPLEINSTALL on the halt/load pack. This file does not need to be equated unless it is necessary to use a version of Simple Installation other than the standard version. It can be equated as follows:

RUN $SYSTEM/DMUPDATE; FILE SIMPLEINSTALL_FILE = <code file title>;

Performing a Software Update Using the DMUPDATE Utility

Perform the following steps to update your running database using the DMUPDATE utility.

1. Prepare a DMUPDATE configuration file for use with your installation. This action consists of copying the template configuration file as a local file under your own title, such as MYDBCONFIG.

2. Customize your file (MYDBCONFIG) to represent your database system software environment, including the names in the tags that you will use when you initiate the update.

3. Choose the software update type you want to use (controlled, assisted, or automatic). You can either delete the tagged areas of the file that define other types or leave those areas in place to be used another time.

4. Add your custom WFL entries in the order in which they should be initiated. These WFL files are initiated in the order you specify.

5. Save the configuration file. You might be able to use the same file for the next software update.

6. Initiate a controlled or an assisted software update with the following syntax if you chose a tagged area called DEFAULT, tagged in MYDBCONFIG with [DEFAULT]:

RUN $SYSTEM/DMUPDATE("DEFAULT"); FILE CONFIG = MYDBCONFIG;

Initiate an automatic software update with the following syntax:

RUN $SYSTEM/DMUPDATE("DEFAULT"); FILE CONFIG = MYDBCONFIG; FILE INSTALL_DATA = *INSTALLDATAFILE/SYSTEM/DMSII/CD ON DISK;

Note: This integration with Simple Installation requires that the usercode that initiated the DMUPDATE utility have appropriate privileges for installing system software.

Backing Out an Installed IC

The software update process does not determine whether the software being installed is newer or older than the currently running software release. There are checks to ensure that the software being installed is compatible with the running software. Your installation preparation should include saving your current software environment before replacing it with the new software environment. If these saved files are kept on a specific pack, then a back-out area can be defined either within a special configuration file used for backing out an installation or in an area of a general configuration file. You can use the previously defined steps to perform a software update with your defined back-out area or back-out configuration file. An example of a configuration file is as follows:

[ASSISTED]
{
  UPDATETYPE ASSISTED;
  UPDATELIB DMUPDATESUPPORT;
  ACR { *SYSTEM/ACCESSROUTINES ON DISK; }
  WFL
  {
    (ADMIN)WFL/COPY/STAR/DMSII/SOFTWARE ON DBPACK;
    (ADMIN)WFL/COMPILE/ALL/DMSUPPORT ON DBPACK;
  }
}
[BACKOUT]
{
  UPDATETYPE ASSISTED;
  UPDATELIB DMUPDATESUPPORT;
  ACR { *SYSTEM/ACCESSROUTINES ON DISK; }
  WFL
  {
    (ADMIN)WFL/COPYBACK/STAR/DMSII/SOFTWARE ON DBPACK;
    (ADMIN)WFL/COMPILE/ALL/DMSUPPORT ON DBPACK;
  }
}

Note: A back-out area should not be placed as the first area in a general configuration file. If you run the DMUPDATE utility and do not provide a parameter, the first area serves as the default area.

Limitations and Considerations

When performing a software update, the following considerations apply:

- When the DMUPDATE utility is running, it marks all open databases that are using the specified ACR code files.
- If your databases participate in the Open Distributed Transaction Processing environment, you can use the DMUPDATE utility only with the controlled software update type.

- If a database is not open when the DMUPDATE utility is initiated, but uses an ACR code file that is specified in the activated configuration file, an attempt to open the database while the utility is running results in an OPENERROR113 error message.
- If a database is already open and participating in a software update, database open operations on that database are allowed. New database open operations join the phase of the software update that is currently executing.
- If a WFL list in the configuration file includes one or more WFL entries that require parameters, refer to the compatibility matrix on the Unisys Product Support Web site for information regarding MCP and WFL parameters.
- Applications that lock records outside of transaction state might encounter deadlock failures during an assisted or an automatic software update. Use a controlled software update to avoid the deadlocks.
- The following error occurs when a compatibility problem exists between the currently running DMUPDATESUPPORT system library and the running DMUPDATE utility. This situation requires that all databases on the system be brought down so that a new DMUPDATESUPPORT library can be established during the software installation process.

DMUPDATE LEVEL MISMATCH WITH DMUPDATESUPPORT

- The following error occurs during an assisted or automatic software update when a compatibility problem exists between an open database and the newly installed Accessroutines code file. Use a controlled software update to avoid this error.

SHELL LEVEL ERROR

- If your databases participate in the Remote Database Backup environment, use the DMUPDATE utility only with the controlled software update type.
- If your databases participate in the Enterprise Application Environment, use the DMUPDATE utility only with the controlled software update type.

Caution
If you attempt to open a database on a system running a ClearPath MCP release that is not within the compatibility matrix, an error occurs.

Software Updates That Require a DASDL or DMCONTROL Update

You cannot perform an assisted or automatic software update when your software update process requires DASDL or DMCONTROL updates; you can perform only a controlled software update. During a controlled software update, while the DMUPDATE task is waiting on the AX entry for all installation and update activities to complete, perform your DASDL and DMCONTROL updates along with all of the other installation and update activities.

Including or Excluding Particular Databases During a Software Update

You might need to isolate particular databases so that they can be included in or excluded from a software update. Reasons for isolating a database include

- Testing and qualifying activities
- Strict availability constraints

The software update process operates on all open databases with the Accessroutines code file titles selected in the ACR list provided in the user configuration file. You can isolate the databases to be included in or excluded from the software update by specifying that certain databases use specific Accessroutines code files, either saved on separate packs or saved under designated usercodes.

For example, databases DB1 and DB2 have DASDL specifications for their Accessroutines code files as follows:

DB1  ACCESSROUTINES = *SYSTEM/ACCESSROUTINES ON DISK
DB2  ACCESSROUTINES = (TEST)SYSTEM/ACCESSROUTINES ON DISK

Your configuration file can have either of the following specifications, which exclude one or the other database from participating in the software update. The first specification includes DB1 but excludes DB2. The second specification includes DB2 but excludes DB1.

ACR { *SYSTEM/ACCESSROUTINES ON DISK; }

ACR { (TEST)SYSTEM/ACCESSROUTINES ON DISK; }

Conditions That Result in an Aborted Software Update Using the DMUPDATE Utility

The following conditions result in an aborted software update process:

- SYSTEM/DMRECOVERY is running for any database using an Accessroutines code file specified in the ACR entry list.
- The currently running ClearPath MCP software does not support the DMUPDATE utility.
- Simple Installation aborts, or the WFL task generated by Simple Installation aborts.
- Any of the WFL files specified in the WFL entry list abort.
- The DMUPDATE utility is not compatible with the active DMUPDATESUPPORT system library.
- A newly installed Accessroutines code file is not compatible with any of the running databases using that Accessroutines code file.
- The SYSTEM/DMUPDATE running task is discontinued.

Conditions That Result in a Skipped Database

The following conditions cause particular databases to be skipped during a software update without aborting the DMUPDATE utility:

- During an assisted or automatic software update, one or more open databases are using an Accessroutines code file that is specified in the ACR entry list, and these databases do not have INDEPENDENTTRANS set.
- During an assisted or automatic software update, one or more open databases are using an Accessroutines code file that is specified in the ACR entry list, and these databases have PARTITIONS set.
- An open database is using an Accessroutines code file that is not specified in the ACR entry.

Checking On the Success Status of the DMUPDATE Utility

You can determine the success of a DMUPDATE run in Work Flow Language (WFL). At the completion of the run, examine the TASKVALUE task attribute of the DMUPDATE task. When used in WFL, the TASKVALUE task attribute is called VALUE. The TASKVALUE task attribute has the following meanings.
Value  Meaning
0      Error (fatal or nonfatal)
1      OK
2      Warnings given

If a system halt/load interruption occurs while the DMUPDATE utility is running, and there is a possibility that the interruption occurred during the software installation part of the update, restart the entire DMUPDATE process. Errors and warnings are displayed and are reported in the printer backup file.
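As a sketch of how this status check might be scripted, the following hypothetical WFL job runs the utility and branches on the task VALUE attribute. The job name, tag, configuration file title, and display text are assumptions, and the exact placement of the task identifier in the RUN statement should be confirmed against the WFL Reference Manual.

```
% Hypothetical WFL job; names and titles are examples only
BEGIN JOB CHECK/DMUPDATE;
TASK T;
RUN $SYSTEM/DMUPDATE("DEFAULT") [T];
  FILE CONFIG = MYDBCONFIG;
IF T(VALUE) = 1 THEN
  DISPLAY "DMUPDATE COMPLETED OK"
ELSE
  IF T(VALUE) = 2 THEN
    DISPLAY "DMUPDATE COMPLETED WITH WARNINGS"
  ELSE
    DISPLAY "DMUPDATE REPORTED AN ERROR";
END JOB
```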

Checking On the Status of Open Databases Using DMUPDATESUPPORT

You can use a command to the system library DMUPDATESUPPORT to determine which Accessroutines code files are currently in use by open databases. The output of this command includes every open database that is registered with the DMUPDATESUPPORT system library. This library is also referred to as the gatekeeper. The gatekeeper keeps track of information pertaining to the open databases. As databases come up and down on the system, the gatekeeper produces register and unregister messages when the opening databases provide information to the gatekeeper.

To produce a list of all active libraries on the system, enter the system command LIBS from the ODT. Select the entry that resembles the following running system library:

1234 Ctrl All 6 SL Job *SYSTEM/DMUPDATE ON DISK

From the ODT, enter the following command to the running library task 1234:

1234 AX STATUS

Next, enter the following ODT system command:

MSG

A list of registered databases is produced by the AX STATUS command, as follows:

* 1234 13:24 DISPLAY:***REGISTERED DATABASES***,
* 1234 13:24 DISPLAY:.
* 1234 13:24 DISPLAY:08935 (DEV)CUSTOMERDB.
* 1234 13:24 DISPLAY: RELEASEID: MCP 11.0 [52.140.000] (52.140.0216).
* 1234 13:24 DISPLAY: CODEFILE: *SYSTEM/ACCESSROUTINES ON DISK.

Appendix A
DASDL Definition for Sample Database

In This Appendix

This appendix contains the entire DASDL definition for the sample EMPLOYEEDB database used in this guide. The sample employee database was designed to be simple enough to understand thoroughly, yet sufficiently typical to convey many capabilities of Enterprise Database Server. You can probably relate to the contents of this database (employee records) because of your work experience or the work experience of someone you know. Studying the sample definition can help you create a definition for a new database or understand the definition for an existing database. For more information about DASDL, DASDL database options, and database structure, refer to the DASDL Reference Manual.

Database Definition Format

The definition follows a rigid format for parts of the DASDL code. This consistent, but optional, formatting pattern makes it easy to

- Read the code.
- Quickly identify structures and structure elements.
- Update the definition efficiently.

About Explanations in the Definition

Preferred coding practice includes explanations, called comments, within the code that describe what the programmer intends the code to do. You can formulate comments in several ways. However, the comments in the sample database use the most common format: each comment begins with a percent sign (%). The system recognizes the text from the percent sign to the end of the line as a comment. You can format several lines of comments with percent signs. The percent sign can appear in any column on a line.
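To illustrate, the DASDL comment forms described in this appendix might be written as in the following hypothetical fragment (the wording of each comment is an example, not part of the sample database):

```
% A comment that runs from the percent sign to the end of the line
"A quoted comment of up to 255 characters"
COMMENT a comment terminated by a semicolon;
```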

The two other ways of marking comments can each have a maximum of 255 characters:

- Enclosing the comment within quotation marks (" ")
- Beginning the comment with the word COMMENT and ending the comment with a semicolon (;)

Things to Notice in the Sample Definition

While you study the sample definition, notice that

- The first part is devoted to choosing optional Enterprise Database Server functions that affect how the definition is compiled and how the database structures work.
- The second part is devoted to defining the database structures, one data set at a time. Definitions for the sets and subsets of a data set (if any) occur immediately after the data set to which they belong.

The division of DASDL into these two major parts is not required; neither is the placement of sets and subsets after their data sets. However, both of these conventions are helpful when locating parts of the code to change. For the same purposes, the definition also follows optional but rigid spacing of code elements.

Summary of EMPLOYEEDB Structures

Table A-1 provides a summary listing of the EMPLOYEEDB data sets and their associated sets and subsets. The table might prove helpful as another view of the sample database structures.

Table A-1. Summary of Sample EMPLOYEEDB Database Structures

Data Set          Sets             Subsets
PERSON            PERSON-SET       EMPLOYEE, PREVIOUS-EMPLOYEE, MANAGER, PROJECT-EMPLOYEE, EMP-MGR
FAMILY            FAMILY-SET
EDUCATION         EDUCATION-SET
INTERIM-MANAGER   INTERIM-SET
PROJECT           PROJECT-SET      SUPER-PROJECTS
DEPARTMENT        DEPARTMENT-SET

Table A-1. Summary of Sample EMPLOYEEDB Database Structures (cont.)

Data Set          Sets             Subsets
ASSIGNMENT        ASSIGNMENT-SET   STAFF-ASSIGNED, PROJ-ASSIGN
PROJECT-PERSON    PROJPERSON-SET   PROJECT-TEAM

EMPLOYEEDB Database Definition

%Introduction to the EMPLOYEEDB database. The purpose of this
%database is to store human resource data for every person
%connected with Creative Samples, Inc.
%Data set structures contain records for the following categories:
%person, family, education, interim manager, project, department,
%assignment, and project person.
%Sets and subsets with appropriate keys index the data set
%structures for a variety of departmental uses.
%This audited database can accommodate an increase in the number
%of employees of Creative Samples, Inc.
%Options and defaults are subject to continuing assessment by the
%DBA who will initiate a change in these options and defaults when
%database performance or reliability require it.

%Run SYSTEM/DMCONTROL automatically to create or update the
%control file.
$SET DMCONTROL
%Start a WFL job automatically to compile the DMSUPPORT library
%and the RECONSTRUCT program.
$SET ZIP
%Do not perform an update; this database is new.
%UPDATE;
%Run SYSTEM/DMUTILITY to initialize the new data sets.
INITIALIZE;
%Cause CHECKSUM and REBLOCK when necessary. Set REBLOCKFACTOR and

%BUFFERS values as indicated. Store all files on HUBPACK family.
DEFAULTS
(
    CHECKSUM = TRUE,
    REBLOCK = TRUE,
    REBLOCKFACTOR = 1,
    BUFFERS = 0 + 0 PER RANDOM USER OR 2 PER SERIAL USER,
    PACK = HUBPACK

    %For data sets, run DIGITCHECK. Store the data set on HUBPACK
    %pack.
    DATA SET
    (
        DIGITCHECK = TRUE,
        PACK = HUBPACK
    ),

    %Space-holder for specific default specifications for sets.
    %Currently, global defaults apply to sets.
    % SET
    % (
    %
    % ),

    %Set the initial values of expression types as indicated.
    ALPHA (INITIALVALUE IS BLANKS),
    NUMBER (INITIALVALUE IS 0),
    BOOLEAN (INITIALVALUE IS 0),
    REAL (INITIALVALUE IS 0)
);

%Audit database changes and set ADDRESSCHECK, KEYCOMPARE,
%INDEPENDENTTRANS and STATISTICS.
OPTIONS
(
    ADDRESSCHECK,     %Always set.
    AUDIT,            %Comment out for unaudited database.
    KEYCOMPARE,       %Always set.
    INDEPENDENTTRANS, %Set when needed.
    REAPPLYCOMPLETED, %Set when needed.
    STATISTICS
);

%Set the initial values for the system: ALLOWEDCORE, CONTROLPOINT,
%SYNCPOINT, and SYNCWAIT parameters with the values indicated.
PARAMETERS
(
    ALLOWEDCORE = 200000,  %Normal running value for the
                           %database.

    CONTROLPOINT = 1,      %Setting depends on the site.
    SYNCPOINT = 100,       %Setting depends on the site.
    SYNCWAIT = 1           %Needs INDEPENDENTTRANS.
);

%Name Enterprise Database Server system code files and title tailored
%database code files as follows:
ACCESSROUTINES = SYSTEM/ACCESSROUTINES;
DATARECOVERY = SYSTEM/DMDATARECOVERY;
RECOVERY = SYSTEM/DMRECOVERY;
DMSUPPORT = (SYSDBA)DMSUPPORT/EMPLOYEEDB ON HR;
RECONSTRUCT = (SYSDBA)RECONSTRUCT/EMPLOYEEDB ON HR;
REORGANIZATION = (SYSDBA)REORGANIZATION/EMPLOYEEDB ON HR;

%Generate duplicate audit trails, locate the primary on HRAUDIT
%pack and the secondary on HR1AUDIT pack, and copy both audits to
%tape and remove them from disk. Run UPDATE EOF.
AUDIT TRAIL
(
    AREAS = 10,
    AREASIZE = 100 BLOCKS,
    BLOCKSIZE = 3600 WORDS,
    PACK = HRAUDIT,
    COPY TO TAPE (DENSITY=BPI6250, 2300) 1 TIMES AND REMOVE
    DUPLICATED ON PACK = HR1AUDIT
        COPY TO TAPE (DENSITY=BPI6250) 1 TIMES AND REMOVE,
    UPDATE EOF = 3000
);

%Store the control file on HUBPACK pack and the database under
%usercode SYSDBA.
CONTROL FILE
(
    PACK = HUBPACK,
    USERCODE = SYSDBA
);

%Create the following global data items:
TOT-EMP POPULATION (1000) OF PERSON;
TOT-SALARY AGGREGATE (12,02) SUM (SALARY) OF PERSON;
HIGHEST REAL (08,02);

%Create the restart data set (RST) that is required when the AUDIT TRAIL
%option is set.
RST RESTART DATA SET
(
    RDS-ID ALPHA(6) COMS-ID;
    RDS-PROG REAL COMS-PROGRAM;
    RDS-LOCATOR REAL COMS-LOCATOR;

    RDS-PROGRAM ALPHA(48);
    RDS-MIX-NO NUMBER(6);
    RDS-USER-INFO ALPHA(300);
), POPULATION = 100;

%Create a set of restart data sets (RSTs) called RESSET with a key of
%RDS-PROGRAM. Two or more set records can have identical keys.
RESSET SET OF RST KEY IS (RDS-PROGRAM), DUPLICATES;

%Create a standard data set named PERSON with the fields specified
%for a maximum of 100 persons.
PERSON DATA SET
(
    SOC-SEC-NO NUMBER (12);
    EMPLOYEE-ID NUMBER (12);
    NAME GROUP
    (
        FIRST-NAME ALPHA (15);
        MID-INITIAL ALPHA (1);
        LAST-NAME ALPHA (20);
    );
    BIRTH-DATE ALPHA (8);
    AGE NUMBER (2);
    MARITAL-STATUS NUMBER (1) INITIALVALUE = 1;
    CURRENT-RESIDENCE GROUP
    (
        STREET ALPHA (30);
        CITY ALPHA (20);
        STATE ALPHA (2);
        ZIPCODE ALPHA (9);
    );
    NEXT-OF-KIN GROUP
    (
        RELATIONSHIP ALPHA (20);
        KIN-FIRST-NAME ALPHA (15);
        KIN-MID-INITIAL ALPHA (1);
        KIN-LAST-NAME ALPHA (20);
        KIN-PHONE-NO ALPHA (10);
    );
    US-CITIZEN BOOLEAN;
    GENDER NUMBER (1);
    SPOUSE-SSN NUMBER (12);
    EMPLOYED NUMBER (1);
    HIRE-DATE ALPHA (8);
    SALARY REAL;
    EMPLOYEE-STATUS NUMBER (1);

    MANAGER-SSN NUMBER (12);
    LEAVE-STATUS NUMBER (1);
    TERMINATION-REASO ALPHA (30);
    LAST-WORK-DATE ALPHA (8);
    TITLE NUMBER (1);
    OVERALL-RATING NUMBER (3,1);
    MANAGER-TITLE NUMBER (1);
    BONUS REAL;
    DEPT-NO NUMBER (12);
    HEAD-OF-DEPT NUMBER (12);
), POPULATION = 100;

%Create a set of PERSON called PERSON-SET with a key of
%SOC-SEC-NO.
PERSON-SET SET OF PERSON KEY SOC-SEC-NO;

%Create a subset of PERSON called EMPLOYEE with a key of
%EMPLOYEE-ID.
EMPLOYEE SUBSET OF PERSON
    WHERE (EMPLOYED > 0 AND EMPLOYED < 9)
    KEY EMPLOYEE-ID;

%Create a subset of PERSON called PREVIOUS-EMPLOYEE with a key of
%SOC-SEC-NO.
PREVIOUS-EMPLOYEE SUBSET OF PERSON
    WHERE EMPLOYED = 9
    KEY SOC-SEC-NO;

%Create a subset of PERSON called MANAGER with a key of
%EMPLOYEE-ID.
MANAGER SUBSET OF PERSON
    WHERE EMPLOYED = 3
    KEY EMPLOYEE-ID;

%Create a subset of PERSON called PROJECT-EMPLOYEE with a key of
%EMPLOYEE-ID.
PROJECT-EMPLOYEE SUBSET OF PERSON
    WHERE EMPLOYED = 2
    KEY EMPLOYEE-ID;

%Create a subset of PERSON called EMP-MGR with a key of
%MANAGER-SSN.
EMP-MGR SUBSET OF PERSON
    WHERE (EMPLOYED > 0 AND EMPLOYED < 9 AND MANAGER-SSN > 0)
    KEY MANAGER-SSN;

%Create a standard data set named FAMILY with the fields specified
%for a maximum of 50 family members.
FAMILY DATA SET
(
    PARENT-SSN NUMBER (12);
    CHILD-SSN NUMBER (12);
), POPULATION = 50;

%Create a set of FAMILY called FAMILY-SET with a key of
%PARENT-SSN. Two or more set records can have identical keys.
FAMILY-SET SET OF FAMILY KEY PARENT-SSN, DUPLICATES;

%Create a standard data set named EDUCATION with the fields
%specified for a maximum of 100 persons.
EDUCATION DATA SET
(
    SOC-SEC-NO NUMBER (12);
    DEGREE-OBTAINED NUMBER (1);
    YEAR-OBTAINED ALPHA (8);
    GPA REAL;
), POPULATION = 100;

%Create a set of EDUCATION called EDUCATION-SET with a key of
%SOC-SEC-NO. Two or more set records can have identical keys.
EDUCATION-SET SET OF EDUCATION KEY SOC-SEC-NO, DUPLICATES;

%Create a standard data set named PROJECT with the fields
%specified for a maximum of 100 projects.
PROJECT DATA SET
(
    PROJECT-NO NUMBER (12);
    PROJECT-TITLE ALPHA (20);
    RELEASE-LEVEL ALPHA (5);
    DEPT-NO NUMBER (12);
    SUBPROJ-OF NUMBER (12);
    PROJMGR-EMP-ID NUMBER (12);
), POPULATION = 100;

%Create a set of PROJECT called PROJECT-SET with a key of
%PROJECT-NO.
PROJECT-SET SET OF PROJECT KEY PROJECT-NO;

%Create a subset of PROJECT called SUPER-PROJECTS with a key of
%SUBPROJ-OF. Two or more subset records can have identical keys.
SUPER-PROJECTS SUBSET OF PROJECT
    WHERE SUBPROJ-OF > 0
    KEY SUBPROJ-OF, DUPLICATES;

%Create a standard data set named DEPARTMENT with the fields
%specified for a maximum of 50 departments.
DEPARTMENT DATA SET
(
    DEPT-NO NUMBER (12);
    DEPT-TITLE ALPHA (20);
    DEPT-LOCATION ALPHA (1);
    DEPT-HEAD NUMBER (12);
), POPULATION = 50;

%Create a set of DEPARTMENT called DEPARTMENT-SET with a key of
%DEPT-NO.
DEPARTMENT-SET SET OF DEPARTMENT KEY DEPT-NO;

%Create a standard data set named ASSIGNMENT with the fields
%specified for a maximum of 200 assignments.
ASSIGNMENT DATA SET
(
    EMPLOYEE-ID NUMBER (12);
    ASSIGNMENT-NO NUMBER (12);
    START-DATE ALPHA (8);
    END-DATE ALPHA (8);
    EST-PERSON-HOURS NUMBER (3);
    RATING NUMBER (3,1);
    PROJECT-OF NUMBER (12);
), POPULATION = 200;

%Create a set of ASSIGNMENT called ASSIGNMENT-SET with a key of
%ASSIGNMENT-NO. Two or more set records can have identical keys.
ASSIGNMENT-SET SET OF ASSIGNMENT KEY ASSIGNMENT-NO, DUPLICATES;

%Create a set of ASSIGNMENT called STAFF-ASSIGNED with keys of
%EMPLOYEE-ID and ASSIGNMENT-NO. Two or more set records can have
%identical keys.
STAFF-ASSIGNED SET OF ASSIGNMENT
    KEY (EMPLOYEE-ID, ASSIGNMENT-NO), DUPLICATES;

%Create a set of ASSIGNMENT called PROJ-ASSIGN with a key of
%PROJECT-OF. Two or more set records can have identical keys.
PROJ-ASSIGN SET OF ASSIGNMENT KEY PROJECT-OF, DUPLICATES;

%Create a standard data set named PROJECT-PERSON with the fields
%specified for a maximum of 200 persons assigned to projects.
PROJECT-PERSON DATA SET
(
    EMP-SOC-SEC-NO REAL;
    EMPLOYEE-ID REAL;
    PROJECT-NO NUMBER (12);
), POPULATION = 200;

%Create a set of PROJECT-PERSON called PROJPERSON-SET with keys of
%EMP-SOC-SEC-NO and EMPLOYEE-ID. Two or more set records can have
%identical keys.
PROJPERSON-SET SET OF PROJECT-PERSON
    KEY (EMP-SOC-SEC-NO, EMPLOYEE-ID), DUPLICATES;

%Create a set of PROJECT-PERSON called PROJECT-TEAM with a key of
%PROJECT-NO. Two or more set records can have identical keys.
PROJECT-TEAM SET OF PROJECT-PERSON KEY PROJECT-NO, DUPLICATES;

Appendix B
Database Specifics Chart

In This Appendix

Keeping a database specifics chart up to date can be helpful for personnel involved in maintaining your database. In addition, if you need to call the Unisys Support Center for advice, you have the necessary information available to give to the analyst so that he or she can help you resolve your problem.

Database Specifics Chart

Database Name: _______________

Software Level
    MCP: _______________
    Enterprise Database Server Software: _______________

                        Usercode     File Name     Pack Name
Database Files
    Control File        ________     _________     _________
    Description File    ________     _________     _________
    DASDL Source        ________     _________     _________
Audit Files
    Primary             ________     _________     _________
    Secondary           ________     _________     _________
Tailored Files
    DMSUPPORT           ________     _________     _________
    RECONSTRUCT         ________     _________     _________
    RMSUPPORT           ________     _________     _________

Database Backup Files
    Dump Type:   Offline ___   Online ___
                 Disk Dump ___   Tape Dump ___
    Tape Media:  QIC ___   4mm DAT ___   8mm DAT ___   Open Reel ___   Half-Inch Cartridge ___

Appendix C
SL (Support Library) System Command Associations

In This Appendix

This appendix lists the SL (Support Library) system commands that must be completed before you can run your data management software. Under most circumstances, these commands are issued automatically during the installation process.

SL System Commands

The following SL system commands are issued by the SI or Installation Center program as part of the data management software installation process:

SL ADDSCSUPPORT      = *SYSTEM/ADDS/CSUPPORT
SL ADDSSUPPORT       = *SYSTEM/ADDS/MANAGER/CONFIG
SL SCODESUPPORT      = *SYSTEM/SCODE
SL FORMSSUPPORT      = *SYSTEM/SDFPLUS/FORMSSUPPORT
SL SDFPLUSDICTIONARY = *SYSTEM/SDFPLUS/DICTMANAGER
SL SDPARCHIVESUPPORT = *SYSTEM/SDFPLUS/ARCHIVEMANAGER
SL SDPFORMSPROCESSOR = *SYSTEM/SDFPLUS/FORMSPROCESSOR
SL SDPCOMMGRSUPPORT  = *SYSTEM/SDFPLUS/COMMANAGER
SL DATABASESUPPORT   = *SYSTEM/DATABASESUPPORT
SL RDBSUPPORT        = *SYSTEM/RDBSUPPORT
SL DMSIIAUDITSUPPORT = *SYSTEM/DMAUDITLIB
SL DMUPDATESUPPORT   = *SYSTEM/DMUPDATE :ONEONLY
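If one of these associations is missing, it can be entered manually with the same syntax shown in the list. As a sketch (the function name and code file title below are taken from the list; the pack-name clause is a hypothetical addition for sites that keep the support libraries on a pack family other than the halt/load family):

    SL DATABASESUPPORT = *SYSTEM/DATABASESUPPORT ON DISK

After issuing the command, restarting the programs that use the function ensures that they link to the newly associated library.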


Index

A

AA word
  as tiebreaker, 12-5
  data set sections, 2-8
abnormally terminating a program, 6-16
abort recovery, 8-4
  flowchart, 8-5
  messages, 8-6, 8-8
  monitoring results of, 8-5
  remedies for failed, 8-7
ABW mode, upgrading the Remote Database Backup environment under, 18-14, 19-7
accessing the database
  inquiry, 1-13
  people involved, 1-12
  reasons for, 1-13
  tasks for managing, 1-3
  update, 1-13
Accessroutines
  abort recovery, 8-5
    flowchart, 8-5
  audit trail and, 3-3
  automatic database recovery and, 8-2
  controller of access to data, 3-16
  DASDL definition, 3-21
    OPTIONS definition, 3-17
    PARAMETERS definition, 3-19
  halt/load recovery, 8-9
    flowchart of process, 8-9
  handling
    read errors, 6-17
    write errors, 6-18
  optional database services, 3-16
  services and tasks, 6-5
ACCUMULATED dump option, 7-9
ADDRESSCHECK DASDL option, 3-11, 3-17
ADDS (See Advanced Data Dictionary System)
ADDSDB/RUNTIMEDATA file, 18-3, 18-7
administering a database, 6-2, 6-3
Advanced Data Dictionary System (ADDS)
  backing up the dictionary, 18-7
  installing, 17-5
  memory requirements, 16-3
  returning to a previous level of, 20-3
  upgrading, 18-2, 18-5
AFN (audit file number), 3-5
AFS mode
  facilitating the NFT task, 18-18, 19-11
algorithm, round-robin, 3-6
ALL option
  Visible DBS command
    STATUS MIX, 7-10
ALLOWEDCORE DASDL option, 3-19
alphanumeric
  data item, 2-16
  format
    PRINTAUDIT program, 10-2, 10-4
analyzing database structures, 9-6
APPEND keyword, 7-23
application programmer, database access and, 1-12
application programs
  backing up, 7-4
  batch, 5-2
    running, 5-3
  changing to solve audit throughput problem, 13-5
  data set sections, 2-7
  discontinuing, 6-16
  Enterprise Database Server Extended Edition migration, 1-7
  Enterprise Database Server messages to, 6-14
  independence from data changes, 1-4
  record serial numbers (RSNs), 12-6
  user database access, 1-12
AREALENGTH DASDL option, 3-22
AREAS DASDL option, 3-22
AREAS file attribute, 13-2
AREASIZE file attribute, 13-2
assisted software update, 21-11
  limitations and considerations, 21-17
audit block size, increasing, 13-6
audit buffers
  advantages of varying, 3-8

  overload situation, 3-8
  varying the number of, 3-7, 13-6
AUDIT BUFFERS command, 3-8
AUDIT DASDL option, 3-4, 3-11, 3-17
audit file number (AFN), 3-5
audit file sections, 13-7, 13-10
  COPYAUDIT, 3-8
  I/Os, 3-6
  optimal number, 3-7
  PRINTAUDIT, 3-8
  varying audit buffers for, 3-7, 13-6
audit files
  as diagnostic tools, 10-3
  backup, 7-4, 7-20
  COPYAUDIT program and, 3-5, 7-5
  current audit file, 3-5
  how long to keep, 7-22
  numbers, 3-5
  PRINTAUDIT program
    interval types, 10-14
    ordering contents of a view, 10-12
    selecting record types in a view, 10-8
    selection parameters and examples, 10-17
    session syntax, 10-11, 10-12
    timestamps, using, 10-16
  reconstruct recovery from, 8-18
  requesting a view of, 10-9
  tape encryption, 7-21
  types of records in, 10-7
  where to view, 10-3, 10-9
audit output bottleneck, 13-5
audit pack, managing sectors on, 7-22
audit reader support library
  running two versions of Enterprise Database Server, 17-9
audit record types, 10-8
audit trail, 3-3, 3-4
  contents of, 3-5
  creating, 4-2
AUDIT TRAIL DASDL options, 3-4, 3-12
  DASDL definition, 3-22
audit trail throughput, 13-5
audited database
  abort recovery, 8-4
    monitoring results of, 8-5
  activity during dump, 7-14
  advantages, 3-3
  halt/load recovery, 8-8
    monitoring results of, 8-10
  rebuild recovery, 8-23
  reconstruct recovery, 8-16
    from a backup dump, 8-17
    from audit file only, 8-18
    Quickfix, 8-19
    WFL jobs, 8-20
  recovery of, 8-2, 8-12
  rollback recovery, 8-25
  single transaction abort recovery, 8-3
automatic database recovery
  abort recovery, 8-4, 8-5
  halt/load recovery, 8-8, 8-10
  single transaction abort, 8-3
  types, 8-2
automatic software update, 21-12
  limitations and considerations, 21-17
availability of database during manual recovery, 8-13

B

backing up the dictionary, 18-7
backup (See also dump; dumping; offline dump; online dump)
  audit files, 7-20
  database, 7-2
    database activity during, 7-14
    dump, 7-3
    incremental, 7-8
    partial or full, 7-7
    recovery and, 8-2
    storage, 7-11
    summary and order of tasks, 7-5
    tasks for managing, 1-4
  database files, 7-2
  related database files, 7-2, 7-24
  selected database files, 7-3
batch
  application program, 5-2
    running, 5-3
  method of populating a database, 5-2
BEGIN-TRANSACTION operation
  restart data set and, 3-25
BLOCKSIZE DASDL option, 3-22
BLOCKSIZE file attribute, 13-2
Boolean data item, 2-17
BUFFERS DASDL option, 3-13, 3-15
BUILDREORG program
  services and tasks, 6-6
byte memory requirements, determining
  sample calculations, 16-5
  table for, 16-2

C

calculating memory requirements, 16-5
CANDE (See Command and Edit)
certifying database
  consistency and integrity, 9-4
  structures, 9-5
changing
  code file locations, 6-12
  database file locations, 6-12
  database structures, 11-2, 11-3
changing to a new software release, procedures for
  ADDS environment, 18-2
  products that do not use ADDS, 19-2
CHECKSUM DASDL option, 3-13, 3-14, 3-23
COBOL
  batch application program, 5-3
  source file, 5-3
code files, changing locations of, 6-12
code memory requirements, determining
  sample calculations, 16-5
  table for, 16-2
coexistence of Enterprise Database Server Extended Edition and Enterprise Database Server Standard Edition, 1-7
column
  as part of data set, 2-5
  synonym for data item, 2-3
Command and Edit (CANDE)
  COPY command, 7-5
  file creation and editing tool, 2-2
  message control, 16-7
  REMOVE command, 7-5
communications processor data link processors (CPDLPs), 16-7
compiler requirements, 15-6
compression, 7-11
configuration file
  DMUPDATE utility, 21-7
configuring terminals for SDF Plus-based products, 16-6
control file
  characteristics and purpose, 1-12
  creating, 4-2
    temporary control file, 6-11
  functions, 1-11, 6-9
  initializing manually, 4-6
  maintaining, 6-8
  services and tasks, 6-4
  tasks for managing, 6-10
CONTROL FILE DASDL options, 3-12
  DASDL definition, 3-24
control of database
  control file functions, 1-11
  services and tasks, 6-4
  tasks for managing, 1-3
controlled software update, 21-9
  limitations and considerations, 21-17
CONTROLPOINT DASDL option, 3-19
COPY command, 7-5, 7-23
COPY TO TAPE AND REMOVE DASDL option, 3-23, 7-20
COPYAUDIT program, 7-5, 7-20
  audit file sections, 3-8
  audit files and, 3-5
  results of running, 7-21
COPYDUMP command, 7-3, 7-5, 7-19
copying a dump, 7-19
count item data item, 2-17
creating upgraded ADDS environment, 18-2
current audit file, 3-5
current dictionary properties, recording, 18-6
cycle number, 7-11

D

DASDL (See Data and Structure Definition Language)
data
  set as index to, 2-9
  structures
    DASDL options that manage, 3-13
    data item, 2-15
    data set, 2-5
    global data item, 2-19
    set, 2-9
    subset, 2-13
    subset as index to, 2-13
  tasks for managing recovery of, 1-4
  update
    how Enterprise Database Server enables, 2-4
data access
  controlled by Accessroutines, 3-16
  people who need it, 1-12
  tasks for managing, 1-3
Data and Structure Definition Language (DASDL)
  $ options, 3-9, 3-11, 3-12
    DASDL definition, 3-11

  Accessroutines
    DASDL definition, 3-21
  ADDRESSCHECK option, 3-11, 3-16, 3-17
  ALLOWEDCORE option, 3-19
  AREALENGTH option, 3-22
  AREAS option, 3-22
  AUDIT option, 3-4, 3-11, 3-16, 3-17, 3-19
  AUDIT TRAIL options, 3-4, 3-12, 7-20
    DASDL definition, 3-22
  auditing of database and, 3-3
  BLOCKSIZE option, 3-22
  BUFFERS option, 3-13, 3-15
  categories of options, 3-12
  CHECKSUM option, 3-13, 3-14, 3-23
  compiler
    services and tasks, 6-4
  CONTROL FILE options, 3-12, 3-24
    DASDL definition, 3-24
  CONTROLPOINT option, 3-19
  COPY TO TAPE AND REMOVE option, 3-22, 3-23, 7-20
  data set definition, 2-6
  DATARECOVERY code file
    DASDL definition, 3-21
  DEFAULTS options, 3-12
    DASDL definition, 3-13
  definition
    compiling, 4-2, 4-3, 4-4, 4-8
    how to write, 2-2
    what is defined, 2-2
    who can create, 2-2
  description, updating, 19-4
  DIGITCHECK option, 3-13, 3-15
  DMCONTROL option, 3-11, 4-6
  DMSUPPORT library
    DASDL definition, 3-21
  dollar options, 3-9, 3-12
    DASDL definition, 3-11
  DUPLICATED ON PACK option, 3-22, 3-23
  DUPLICATED option, 7-20
  EMPLOYEEDB database
    DASDL source file, A-1
  Enterprise Database Server data definition language, 1-2
  file name definition, 3-21
  generating a new database, 4-2
  global data item definition, 2-19
  group data item definition, 2-18
  INDEPENDENTTRANS option, 3-11, 3-16, 3-17, 3-19
  INITIALIZE option, 3-12
  INITIALVALUE option, 3-13, 3-15
  KEYCOMPARE option, 3-11, 3-16, 3-17, 3-19
  limitations, 16-8
  LOCK TO MODIFY DETAILS option, 3-13, 3-15
  LOCKEDFILE option, 3-11
  logical remap capabilities, 1-4
  optional database functions, 3-11
    setting and resetting, 3-10
  options
    $, 3-9, 3-11, 3-12
    ADDRESSCHECK, 3-11, 3-16, 3-17
    ALLOWEDCORE, 3-19
    AREALENGTH, 3-22
    AREAS, 3-22
    AUDIT, 3-4, 3-11, 3-16, 3-17, 3-19
    audit backup, 7-20
    AUDIT TRAIL, 3-4, 3-12, 3-22
    BLOCKSIZE, 3-22
    BUFFERS, 3-13, 3-15
    CHECKSUM, 3-13, 3-14, 3-23
    CONTROL FILE, 3-12, 3-24
    CONTROLPOINT, 3-19
    COPY TO TAPE AND REMOVE, 3-22, 3-23, 7-20
    DEFAULTS, 3-12, 3-13
    DIGITCHECK, 3-13, 3-15
    DMCONTROL, 3-11, 4-6
    dollar, 3-9, 3-11, 3-12
    DUPLICATED, 7-20
    DUPLICATED ON PACK, 3-22, 3-23
    INDEPENDENTTRANS, 3-11, 3-16, 3-17, 3-19
    INITIALIZE, 3-12
    INITIALVALUE, 3-13, 3-15
    KEYCOMPARE, 3-11, 3-16, 3-17, 3-19
    LOCK TO MODIFY DETAILS, 3-13, 3-15
    LOCKEDFILE, 3-11
    OPTIONS, 3-12, 3-16, 3-19
    OVERLAYGOAL, 3-20
    PACK, 3-13, 3-15, 3-22, 3-23, 3-24, 3-25
    PARAMETERS, 3-12
    QUICKCOPY TO, 7-20
    RDS-ID, 3-25, 3-26
    RDS-LOCATOR, 3-25, 3-26
    RDS-MIX-NO, 3-25, 3-26
    RDS-PROG, 3-25, 3-26
    RDS-PROGRAM, 3-25, 3-26
    RDS-USER-INFO, 3-25, 3-26
    REAPPLYCOMPLETED, 3-11, 3-16, 3-18, 3-19
    REBLOCK, 3-13, 3-14
    REBLOCKFACTOR, 3-13, 3-14
    RESTART DATA SET, 3-12, 3-25
    STATISTICS, 3-11, 3-16, 3-18, 3-19

    SYNCPOINT, 3-20
    SYNCWAIT, 3-20
    UPDATE, 3-12
    UPDATE EOF, 3-22, 3-23
    USERCODE, 3-24, 3-25
    VERIFY, 7-20
    ZIP, 3-11
  OPTIONS category options, 3-12
    DASDL definition, 3-16, 3-19
  OVERLAYGOAL option, 3-20
  PACK option, 3-13, 3-15, 3-22, 3-23, 3-24, 3-25
  PARAMETERS options, 3-12
  purpose, 2-1
  QUICKCOPY TO option, 7-20
  RDS-ID option, 3-25, 3-26
  RDS-LOCATOR option, 3-25, 3-26
  RDS-MIX-NO option, 3-25, 3-26
  RDS-PROG option, 3-25, 3-26
  RDS-PROGRAM option, 3-25, 3-26
  RDS-USER-INFO option, 3-25, 3-26
  REAPPLYCOMPLETED option, 3-11, 3-16, 3-18, 3-19
  REBLOCK option, 3-13, 3-14
  REBLOCKFACTOR option, 3-13, 3-14
  RECONSTRUCT program
    DASDL definition, 3-21
  RECOVERY code file
    DASDL definition, 3-21
  references to information about, 2-2
  REORGANIZATION program
    DASDL definition, 3-21
  RESTART DATA SET options, 3-12
    DASDL definition, 3-25
  set definition, 2-11
  single data item definition, 2-18
  STATISTICS option, 3-11, 3-16, 3-18, 3-19
  structures
    DASDL options that manage, 3-13
    data item, 2-15
    data set, 2-5
    global data item, 2-19
    set, 2-9
    subset, 2-13
  subset definition, 2-15
  SYNCPOINT option, 3-20
  SYNCWAIT option, 3-20
  syntax
    correcting, 4-2, 4-3, 4-4
    error messages, 4-6
  UPDATE EOF option, 3-22, 3-23
  UPDATE option, 3-12
  update, performing, 19-4
  USERCODE option, 3-24, 3-25
  VERIFY option, 7-20
  ZIP option, 3-11, 4-6
data changes
  integrity, 1-4
  tracking, 1-4
data communications data link processors (DCDLPs), 16-7
data definition language for Enterprise Database Server, 2-1
data dictionaries
  backing up, 18-7
  recording current properties of, 18-6
  returning to a previous level of, 20-3
  upgrading, 18-2, 18-5
data files, creating, 4-2
data integrity, choosing options to ensure, 3-10
data item
  DASDL definition, 2-18
  definition, 2-3, 2-15
  global, 2-19
  group, 2-18
  maintaining the structure of, 2-16
  numeric, 2-16
  purpose, 2-16
  single, 2-18
  synonym for column and data item, 2-3
  types of, 2-16
  updating, 2-16
data management software
  product packages, 15-3
  style identifiers, 15-3
data memory requirements, determining
  sample calculations, 16-5
  table for, 16-2
data security
  choosing options to ensure, 3-10
  tasks for managing, 1-4
data set
  capacity, 13-2
  DASDL definition, 2-6
  definition, 2-3, 2-5
  format, 2-5
    illustration, 2-5
  logically separating, 13-3
  maintaining the structure of, 2-6
  overriding DASDL DEFAULT options for, 3-14
  purpose, 2-5
  sections, 2-6, 13-11
    AA word, 2-8
    application programs and, 2-7

    dividing into, 13-4
    record distribution, 2-8
    requirements, 2-7
  synonym for table, 2-3
  updating, 2-6
database
  Accessroutines, 6-5
    controller of access to database, 3-16
    optional services of, 3-16
  activity during dump, 7-14
  administering, 6-2
  analyzing database structures, 9-6
  application program, backing up, 7-4
  application programmer access to, 1-12
  audit trail, 3-4
  AUDIT TRAIL DASDL options
    DASDL definition, 3-22
  auditing, 3-3
  availability during manual recovery, 8-13
  backup, 7-2
    audit files, 7-20
    database activity during, 7-14
    incremental, 7-8
    partial, 7-3, 7-7
    storage, 7-11
    summary and order of tasks, 7-5
    tasks for managing, 1-4
    to disk, 7-11
    to tape, 7-11
  batch application program, 5-2
  BUILDREORG program, 6-6
  certifying consistency and integrity, 9-4
  code files, changing locations of, 6-12
  consistency, certifying, 9-4
  continuity, 7-2
  control file
    characteristics and purpose, 1-12
    creating, 4-2, 6-11
    functions, 1-11, 6-9
    initializing manually, 4-6
    maintaining, 6-8
    tasks for managing, 6-10
  CONTROL FILE DASDL options
    DASDL definition, 3-24
  creating, 3-9
    flowchart, 4-8
  DASDL
    compilation flowchart, 4-8
    correcting syntax, 4-2, 4-3, 4-4
    data set definition, 2-6
    definition, compiling, 4-2, 4-3, 4-4
    global data item definition, 2-19
    group data item definition, 2-18
    sample source file, A-1
    set definition, 2-11
    single data item definition, 2-18
    subset definition, 2-15
    syntax error messages, 4-6
  DASDL compiler, 6-4
  DBA, 1-12
    administrative tasks, 6-2
    responsibilities, 1-13
  DBCERTIFICATION program
    options and actions, 9-4
  description file
    backing up, 7-4, 11-4
    creating, 4-2
    services and tasks, 6-5
  designing from a real-world data map, 1-14
  disabling, 9-3
  disk dump, 7-11
  DMRECOVERY program, 6-5
  DMSUPPORT library, 6-5
    backing up, 7-4
    creating, 4-2, 4-6
    creating manually, 4-7
  DMUTILITY program, services and tasks, 6-5
  dollar DASDL options, 3-9
  dump, 7-2
    backup, 7-3
    copying, 7-19
    duplicating, 7-19
    verifying, 7-18
  enabling, 9-3
  Enterprise Database Server
    backing up, 18-11, 19-4, 19-5
    upgrading in an ADDS environment, 18-11
  Enterprise Database Server services, 6-4
  file name
    DASDL definition, 3-21
  files
    changing locations of, 6-12
    default pack locations, 1-9
    file-equating, 6-7
    initializing, 6-7
    listing contents of, 9-3
    listing with the CANDE FILES command, 1-9
    writing contents of, 9-3
  functions, designating a library for, 6-7
  generating a new database, 4-2
  inquiry, 1-13
  integrity, certifying, 9-4
  maintenance, 6-3
    host system tasks, 6-6

    scheduling, 6-3
    tasks, 6-6
  model, making RMSUPPORT library title unique, 17-13
  nontailored database files
    services and tasks, 6-4
  offline dump, 7-14
    DMUTILITY tasks, 7-17
    examples of syntax for, 7-17
  online dump, 7-14
    DMUTILITY tasks, 7-15
    examples of syntax for, 7-16
    tasks prior to, 7-15
  optional functions
    choosing, 3-10
    dollar options, 3-11
    setting and resetting, 3-10
  OPTIONS DASDL options
    DASDL definition, 3-16, 3-19
  options, defining, 3-3
  people involved in accessing, 1-12
  performance, tasks for managing, 1-3
  physical limitations, 16-1, 16-8
  populating, 5-2
  reasons to access, 1-13
  RECONSTRUCT program, 6-5
    backing up, 7-4
    creating, 4-2, 4-6, 4-7
  recovery, 8-2
    abort recovery, 8-4, 8-5
    automatic, 8-2
    halt/load recovery, 8-8, 8-10
    manual, 8-12
    rebuild recovery, 8-23
    reconstruct recovery, 8-16, 8-17, 8-18, 8-19, 8-20
    rollback recovery, 8-25
    single transaction abort, 8-3
    situations that required manual recovery, 8-12
    tasks for managing, 1-4
    unaudited database, 8-27
  reorganization
    planning for, 11-3
    reasons for, 11-2, 11-3
  REORGANIZATION program, 6-6
  report on files and rows, 9-3
  RESTART DATA SET DASDL options
    DASDL definition, 3-25
  returning to a previous level of Enterprise Database Server, 20-5
    Remote Database Backup, 20-6
  RMSUPPORT library, 6-5
    backing up, 7-4
  scalability, 1-4
  security, tasks for managing, 1-4
  software interaction for an open, 5-8
  structure definition and modification tasks, 1-3
  structures
    DASDL options that manage, 3-13
    initializing, 3-9
    overview, 2-3
    reasons for changing, 11-2
    types of change, 11-3
  tailored database files
    naming, 3-21
    services and tasks, 6-4
  tape dump, 7-11
  transaction
    definition and flowchart, 1-13
  update, 1-13
    planning for, 11-3
    reasons for, 11-2
    types of change, 11-3
  update levels, 6-9
  upgrading in an ADDS environment
    Enterprise Database Server, 18-11
    Open Distributed Transaction Processing, 17-12
  user access to, 1-12
  workspace size
    choosing options for, 3-10
database administrator (DBA), 1-12
  administrative tasks, 6-2
database availability goal, 1-5, 11-4
database management system (DBMS), Enterprise Database Server as, 1-2
Database Operations Center, 1-8
database specifics chart, B-1
DATARECOVERY code file
  DASDL definition, 3-21
DBA (See database administrator)
dbatools Analyzer program, 9-2
dbatools Monitor program, 9-2
DBCERTIFICATION program, 9-2
  options and actions, 9-4
DBDATA file type, 1-11
DBMS (See database management system)
default
  definition, 3-11
default job queue and nonusercoded databases, 18-15, 19-8
default pack locations for files, 1-9
DEFAULTS DASDL options, 3-11, 3-12, 3-13
  overriding, 3-14
defining

  data, 2-1
density, 7-11
description file, 1-10
  backing up, 7-4, 11-4
  characteristics and purpose, 1-12
  creating, 4-2
  services and tasks, 6-5
DESCRIPTION/EMPLOYEEDB file, 1-10
  characteristics and purpose, 1-12, 4-7
designating a library for a function, 6-7
designing a database, 1-14
diagnosing problems using audit files, 10-3
dictionaries
  backing up, 18-7
  recording current properties of, 18-6
  returning to a previous level of, 20-3
  upgrading, 18-2, 18-5
DIGITCHECK DASDL option, 3-13, 3-15
disabling the database, 9-3
discontinuing a program, 6-16
disk dump, 7-3, 7-11
disk subsystem, increasing the throughput of, 13-7
dividing a data set into sections, 13-4
DKTABLE, 2-7
DMCONTROL DASDL option, 3-11, 4-6
DMCONTROL program, 6-10
DMINQ interface, 1-7
DMRECOVERY program
  halt/load recovery and, 8-8
  services and tasks, 6-5
DMSUPPORT library, 1-10
  backing up, 7-4
  characteristics and purpose, 1-12
  creating, 4-2
    after DASDL compilation, 4-6
    manually, 4-7
  DASDL definition, 3-21
  during a software update with DMUPDATE utility, 5-7
  services and tasks, 6-5
DMSUPPORT/EMPLOYEEDB library, 1-10
  characteristics and purpose, 1-12, 4-7
DMUPDATE process (See software update with DMUPDATE utility)
DMUPDATE support library
  running two versions of Enterprise Database Server, 17-10
DMUPDATE utility (See software update with DMUPDATE utility)
DMUTILITY
  tape encryption, 7-5
DMUTILITY program
  backing up database with, 7-2
  CANCEL statement, 6-10
  COPYDUMP command, 7-3, 7-5, 7-19
  DUMP command, 7-3, 7-5, 7-10
  DUPLICATEDUMP command, 7-3, 7-5, 7-19
  offline dump, 7-17
  online dump, 7-15
  QUIESCE command, 7-11
  RESUME command, 7-11
  services and tasks, 6-5
  using to back up Enterprise Database Server databases, 18-11, 19-4, 19-5
  VERIFYDUMP command, 7-5, 7-18
dollar DASDL options, 3-9, 3-12
dump (See also backup; dumping; offline dump; online dump)
  backup, 7-3
  copying, 7-19
  database, 7-2
    activity during, 7-14
  duplicating, 7-19
  incremental, 7-8
  partial or full, 7-7
  storage, 7-11
  to disk, 7-11
  to tape, 7-11
  verifying, 7-18
DUMP command, 7-3
  DMUTILITY, 7-10
dump option
  ACCUMULATED, 7-9
  INCREMENTAL, 7-9
dumping (See also backup; dump; offline dump; online dump)
  audit files, 7-20
  database files, 7-2, 7-7
    database activity during, 7-14
    incrementally, 7-8
    storage, 7-11
  selected database files, 7-3
  to disk, 7-11
  to tape, 7-11
duplicate audit trail, 3-5
duplicate set, 12-5
DUPLICATED ON PACK DASDL option, 3-23
DUPLICATEDUMP command, 7-3, 7-5, 7-19
duplicating a dump, 7-19

E

EMPLOYEEDB database, 3-2
  administering, 6-2
  as audited database, 3-3
  audit trail, 3-4
  AUDIT TRAIL DASDL options
    DASDL definition, 3-22
  batch application program, 5-2
  CONTROL FILE DASDL options
    DASDL definition, 3-24
  DASDL source file, A-1
  DBDATA type file list, 1-11
  DEFAULTS DASDL options, 3-14
  definition, A-3
  DESCRIPTION/EMPLOYEEDB file, 1-10
  EMPLOYEEDB/CONTROL file, 1-11
    characteristics and purpose, 1-12, 4-7
    functions, 1-11
  EMPLOYEEDB/UPDATE source file, 5-3
  facts, 3-2
  file name DASDL definition, 3-21
  generating when new, 4-2
  maintenance, 6-3
  OPTIONS DASDL options
    DASDL definition, 3-16, 3-19
  populating, 5-2
  real-world data mapped to Enterprise Database Server structures, 1-14
  RECONSTRUCT/EMPLOYEEDB program, 1-10
  RESTART DATA SET DASDL options
    DASDL definition, 3-25
  secondary pack, 1-10
  structures, A-2
  tailored database files
    characteristics and purpose, 1-11, 4-7
    list, 1-10
    naming, 3-21
enabling the database, 9-3
END-TRANSACTION operation
  restart data set and, 3-25
Enterprise Database Server (See also Enterprise Database Server Extended Edition, Enterprise Database Server Standard Edition)
  auditing the database, 3-3
  COPYAUDIT program, 3-5
  DASDL as data definition language for, 2-1
  DASDL options
    choosing, 3-10
    dollar options, 3-11
    setting and resetting, 3-10
  DASDL options, choosing, 3-3
  database management overview, 1-3
  database services, 6-4
  databases
    backing up, 18-11, 19-4, 19-5
    listing files with the CANDE FILES command, 1-9
    physical limitations, 16-1
    upgrading in an ADDS environment, 18-11
  definition, 1-2
  enabling a data update, 2-4
  errors, 6-15
    DUPLICATES, 8-7
    during abort recovery, 8-7
    during halt/load recovery, 8-11
    error messages, 8-8, 8-12
    LIMITERROR, 8-7
    results of, 6-15
  exceptions, 6-15
    where to find, 6-16
  generating a new database, 4-2
  keys file, 15-4
  list of database files, 1-10
  memory requirements
    code, 16-3
    database, 16-4
  optional functions
    choosing, 3-10
    dollar options, 3-11
    setting and resetting, 3-10
  returning to a previous level of, 20-5
  running two versions, 17-9, 17-10
    tailored database files, 6-4
  standard software files, 1-9
    services and tasks, 6-4
  structures
    data item, 2-15
    data set, 2-5
    global data item, 2-19
    managing, 3-13
    overview, 2-3
    real-world data mapped to, 1-14
    set, 2-9
    subset, 2-13
  tailored database files
    characteristics and purpose, 1-11, 4-7
    EMPLOYEEDB database list of, 1-10
    listing with the CANDE FILES command, 1-10
    naming, 3-21
  tasks, 1-2

Enterprise Database Server (continued)
    unaudited database, 3-3
    upgrading, 16-8, 18-11
Enterprise Database Server Extended Edition (See also Enterprise Database Server)
    application programs, 1-7
    audit file sections, 3-6, 13-6, 13-7
    data set sections, 2-6, 2-7, 2-8
    Enterprise Database Server Standard Edition compatibility with, 1-7
    EXTENDED attribute, 12-3, 12-6
    features, 1-6
    goals, 1-5
    license, 1-7
    moving to, 1-7
    online set garbage collection, 11-4
    purpose, 1-5
    record serial number (RSN), 12-6
    scenarios, 13-2
    SECTIONS option, 13-11
    set sections, 2-11
    systems that benefit from, 1-6
    TranStamp locking, 12-1, 12-2, 12-3, 13-11
Enterprise Database Server SE (See Enterprise Database Server Standard Edition)
Enterprise Database Server Standard Edition (See also Enterprise Database Server)
    AA word as tiebreaker, 12-5
    traditional locking, 12-2
    traditional sets, 2-12
Enterprise Database Server XE (See Enterprise Database Server Extended Edition)
environment, data management
    determining memory requirements for, 16-2
    sample calculations of memory requirements, 16-5
errors, 6-15
    handling during manual recovery, 8-15
    I/O
        during abort recovery, 8-7
        during halt/load recovery, 8-11
        handling, 6-16
        read errors, 6-17
        reconstruct recovery WFL jobs, 8-20
        write errors, 6-18
    results of, 6-15
evaluating memory requirements, 16-5
example memory calculations, 16-5
examples
    backing up related database files, 7-24
    certifying database structures, 9-5
    changing
        code file locations, 6-12
        database file locations, 6-12
    checking DASDL syntax, 4-3
    compiling
        batch application program, 5-3
        DASDL source file, 4-3
    copying a dump, 7-19
    correcting DASDL syntax, 4-4
    creating DMSUPPORT library and RECONSTRUCT program manually, 4-7
    creating temporary, 6-11
    date item types, 2-16
    designating a library for a function, 6-7
    discontinuing a program, 6-16
    duplicating a dump, 7-19
    file-equating a file, 6-7
    finding a mix number, 6-13
    host system messages, 6-14
    initializing
        control file manually, 4-6
        database and structures, 6-8
    keeping track of a processing job, 6-13
    listing database files, 1-9
    manual COPYAUDIT operations, 7-23
    PRINTAUDIT program
        multiple line request, 10-12
        session syntax, 10-10, 10-11
    QUICKCOPY operations, 7-23
    real-world data map, 1-14
    rebuild recovery, 8-24
    reconstruct recovery, 8-19
        from audit file only, 8-18
        from backup dump, 8-18
        from tape, 8-17
    row recovery, 8-17
    rollback recovery, 8-26
    running an application program, 5-3
    source file application program, 5-3
    turning off designation of a library for a function, 6-7
    verifying a dump, 7-18
    WFL job
        backing up the database with, 7-12
        reconstruct recovery, 8-20
exceptions, 6-15
    results of, 6-15, 6-16
exclude list clause, 7-10
existing dump
    copying, 7-19
    duplicating, 7-19

existing dump (continued)
    verifying, 7-18
EXTENDED attribute, 12-3, 12-6, 13-11

F

field
    definition, 2-4
    synonym for data item, 2-3
field data item, 2-17
file attributes, data set capacity and, 13-2
file-equating a file, 6-7
files
    audit files, 3-5
    changing locations of, 6-12
    DBDATA file type list for sample database, 1-11
    file name DASDL definition, 3-21
    file-equating, 6-7
    initializing, 6-7
    listing
        CANDE FILES command, 1-9, 1-10
        DM utility LIST command, 9-3
    old versions of, 17-4, 19-3
    printing
        DM utility WRITE command, 9-3
    timestamps, 6-9
filler data item, 2-18
final report of manual recovery, 8-15
flowcharts
    abort recovery process, 8-5
    compiling DASDL definition, 4-8
    creating a database, 4-8
    halt/load recovery process, 8-9
frozen libraries, thawing, 15-4
full database dump, 7-7

G

GARBAGE COLLECT command, 11-4
garbage collection, online set, 11-4, 13-11
generating a new database, 4-2
global data item
    DASDL definition, 2-19
    definition, 2-3, 2-19
    maintaining, 2-20
    updating, 2-20
group data item, 2-17
    DASDL definition, 2-18

H

halt/load recovery, 8-8
    flowchart of process, 8-9
    messages, 8-10, 8-12
    monitoring results of, 8-10
    remedies for failed, 8-11
hexadecimal format, PRINTAUDIT, 10-2, 10-4
host system
    database tasks, 6-6
    how it uses records, 2-4
    messages, 6-14
hot software update (See software update with DMUPDATE utility)

I

I/O
    improving efficiency of, 13-6
    rate affected by sectioned audit files, 3-6
I/O errors
    during abort recovery, 8-7
    during halt/load recovery, 8-11
    handling, 6-16
    read errors, 6-17
    reconstruct recovery WFL jobs for, 8-20
    write errors, 6-18
IDC (Interactive Datacomm Configurator), 16-7
incremental database dump, 7-8
INCREMENTAL dump option, 7-9
independence of application programs from data changes, 1-4
INDEPENDENTTRANS DASDL option, 1-7, 3-11, 3-17, 11-4
    automatic database recovery and, 8-2
INDEPENDENTTRANS option for Open Distributed Transaction Processing, 17-12
index
    set as, 2-9
    subset as, 2-13
    synonym for set or subset, 2-3
inhibiting messages, 16-7
INITIALIZE DASDL option, 3-9, 3-12
initializing
    control file, 4-6
    database files, 6-7
    structures, 3-9

INITIALVALUE DASDL option, 3-13, 3-15
inquiry, 1-13
Installation Center program
    using to return to a previous release, 20-2
    using to upgrade a Remote Database Backup environment, 19-6
    using to upgrade software in an ADDS environment, 18-2
installation environment, determining, 17-2
installation process
    current user, 15-2
    examples, 15-5
    first-time installation steps, 17-3
    general requirements for, 15-6
    new user, 15-2
    overview, 15-5
    preparing for, 15-2
installing
    ADDS, 17-5
    data management environment, 17-2
    Interim Corrections and Supplemental Support Packages, 21-1
    new software release, procedures for an ADDS environment, 18-2
    products that do not use ADDS, 19-2
Interactive Datacomm Configurator (IDC), 16-7
Interim Corrections and Supplemental Support Packages
    installing with database open, 21-1
interval types for PRINTAUDIT program, 10-14

J

job, keeping track of, 6-13

K

keeping track of a processing job, 6-13
KEYCOMPARE DASDL option, 3-11, 3-17
keys file, 15-4
kilobyte memory requirements, determining
    sample calculations, 16-5
    table for, 16-2

L

language to define data, 2-1
level of software
    returning to a previous level of
        ADDS, 20-3
        Enterprise Database Server, 20-5
        Remote Database Backup, 20-6
    upgrading to a new level
        ADDS environment, 18-2
        non-ADDS environment, 19-2
libraries, thawing frozen, 15-4
library maintenance commands, 7-6, 7-20
    using to back up Enterprise Database Server databases, 18-11, 19-4, 19-5
library, equating to a function, 6-7
LIBS (Library Task Entries) system command, 17-4, 19-3
license, Enterprise Database Server Extended Edition, 1-7
limitations, physical (See physical limitations for)
LIMITERROR error, 8-7
line support processor (LSP), 16-7
linear scalability goal, 1-5
listing the SL system command associations, C-1
loading software, 17-4, 19-2
    overview, 15-5
    procedures for
        ADDS environment, 18-2
        non-ADDS environment, 19-2
    using the SI or Installation Center program
        to upgrade software in a non-ADDS environment, 19-2
        to upgrade software in an ADDS environment, 18-2
LOCK TO MODIFY DETAILS DASDL option, 3-13, 3-15
LOCKEDFILE DASDL option, 3-11
locking records, 2-12
    traditional locking, 12-2
    TranStamp locking, 12-1, 12-2, 12-3, 13-11
logical remap, 1-4
logically separating a data set, 13-3
LSP (line support processor), 16-7

M

maintenance tasks, 6-6
manual database recovery

manual database recovery (continued)
    database availability during, 8-13
    final report, 8-15
    NOZIP option, 8-14
    reasons to delay, 8-14
    rebuild recovery, 8-23
    reconstruct recovery, 8-16
        from a backup dump, 8-17
        from audit file only, 8-18
        Quickfix, 8-19
        WFL jobs, 8-20
    restarting, 8-15
    rollback recovery, 8-25
    situations requiring, 8-12
    types, 8-12
manual operations
    creating
        DMSUPPORT library, 4-7
        RECONSTRUCT program, 4-7
    initializing control file, 4-6
MARC screen, 6-13
MAXFILESPERTAPE phrase, 7-23
memory requirements, determining
    sample calculations, 16-5
    table for, 16-2
message control in CANDE, 16-7
messages
    abort recovery, 8-6, 8-8
    halt/load recovery, 8-10, 8-12
    host system, 6-14
    syntax error messages for DASDL, 4-6
migrating
    dictionaries, 18-5
    software
        procedures for a non-ADDS environment, 19-2
        procedures for an ADDS environment, 18-2
mix number
    finding, 6-13
    for discontinuing a program, 6-16
MIXNUMBER task attribute, 6-13
model of database, 17-13
modified software, 1-7
monitoring, 9-2
monitoring the database, 9-2
    DBCERTIFICATION program options and actions, 9-4
    general tasks, 9-3
moving to a new software release, procedures for
    ADDS environment, 18-2
    products that do not use ADDS, 19-2
multiple sets, 13-9
multiterabyte capacity goal, 1-5

N

NDLII (Network Definition Language II), 16-7
Network Definition Language II (NDLII), 16-7
network support processor (NSP), 16-7
new software release, installing
    ADDS environment, 18-2
    non-ADDS environment, 19-2
NFTINFOFILE file, 17-8
NOCOMPARE option, 7-10
noncompression, 7-11
nonduplicate set, 12-5
nontailored database files, 6-4
nonusercoded databases
    default job queue attributes and, 18-15, 19-8
    job queue for all Remote Database Backup tasks, 17-7
    modifying the RDB support library for, 17-8
NOZIP option, 8-14
NSP (network support processor), 16-7
numeric data item, 2-16

O

OBJECT/EMPLOYEEDB/UPDATE file, 5-3
offline dump
    activity during, 7-14
    backup WFL job, 7-13
    DMUTILITY tasks, 7-17
    examples of syntax for, 7-17
old software release, returning to, procedure for (See previous software release, returning to)
old versions of files, 17-4, 19-3
online dump
    activity during, 7-14
    backup WFL job, 7-12
    DMUTILITY tasks, 7-15
    examples of syntax for, 7-16
    tasks prior to, 7-15
online method of populating a database, 5-2
online set garbage collection, 11-4, 13-11
open database software interaction, 5-8
Open Distributed Transaction Processing
    in an ADDS environment, upgrading Enterprise Database Server databases, 17-12

Open Distributed Transaction Processing (continued)
    software, procedures for using, 17-12
optimizing data access, 3-10
optional Enterprise Database Server database functions, 3-10
    dollar options, 3-11
    setting and resetting, 3-10
OPTIONS DASDL options, 3-11, 3-12
    DASDL definition, 3-16, 3-19
options, DASDL
    $, 3-9, 3-11, 3-12
    ADDRESSCHECK, 3-11, 3-16, 3-17
    ALLOWEDCORE, 3-19
    AREALENGTH, 3-22
    AREAS, 3-22
    AUDIT, 3-4, 3-11, 3-16, 3-17, 3-19
    AUDIT TRAIL, 3-4, 3-12, 3-22, 7-20
    BLOCKSIZE, 3-22
    BUFFERS, 3-13, 3-15
    CHECKSUM, 3-13, 3-14, 3-23
    CONTROL FILE, 3-12, 3-24
    CONTROLPOINT, 3-19
    COPY TO TAPE AND REMOVE, 3-22, 3-23, 7-20
    DEFAULTS, 3-12, 3-13
    DIGITCHECK, 3-13, 3-15
    DMCONTROL, 3-11, 4-6
    dollar, 3-9, 3-11, 3-12
    DUPLICATED, 7-20
    DUPLICATED ON PACK, 3-22, 3-23
    INDEPENDENTTRANS, 3-11, 3-16, 3-17, 3-19
    INITIALIZE, 3-12
    INITIALVALUE, 3-13, 3-15
    KEYCOMPARE, 3-11, 3-16, 3-17, 3-19
    LOCK TO MODIFY DETAILS, 3-13, 3-15
    LOCKEDFILE, 3-11
    OPTIONS, 3-12, 3-16, 3-19
    OVERLAYGOAL, 3-20
    PACK, 3-13, 3-15, 3-22, 3-23, 3-24, 3-25
    PARAMETERS, 3-12
    QUICKCOPY TO, 7-20
    RDS-ID, 3-25, 3-26
    RDS-LOCATOR, 3-25, 3-26
    RDS-MIX-NO, 3-25, 3-26
    RDS-PROG, 3-25, 3-26
    RDS-PROGRAM, 3-25, 3-26
    RDS-USER-INFO, 3-25, 3-26
    REAPPLYCOMPLETED, 3-11, 3-16, 3-18, 3-19
    REBLOCK, 3-13, 3-14
    REBLOCKFACTOR, 3-13, 3-14
    RESTART DATA SET, 3-12
    STATISTICS, 3-11, 3-16, 3-18, 3-19
    SYNCPOINT, 3-20
    SYNCWAIT, 3-20
    UPDATE, 3-12
    UPDATE EOF, 3-22, 3-23
    USERCODE, 3-24, 3-25
    VERIFY, 7-20
    ZIP, 3-11
ordering contents for PRINTAUDIT view, 10-12
OVERLAYGOAL DASDL option, 3-20
overriding DASDL DEFAULT options, 3-14
overview, installation process, 15-5
overwriting existing software, 18-2, 18-6

P

PACK DASDL option, 3-13, 3-15, 3-23, 3-25
pack locations for files, default, 1-9
PARAMETERS DASDL options, 3-12
    DASDL definition, 3-16
partial database dump, 7-7
partial database recovery
    Quickfix, 8-19
    reconstruct recovery, 8-16
        from audit file only, 8-18
        from tape, 8-16
    WFL jobs, 8-20
performance of database, tasks for managing, 1-3
physical files
    creating, 4-2
    data set as, 2-5
physical limitations
    for databases, 16-1
    Enterprise Database Server, 16-8
physical requirements for SDF Plus, 16-6
populating a database, 5-2
POPULATION option, 13-2
preparing for the installation process, 15-2
previous software release, returning to
    ADDS, 20-3
    Enterprise Database Server, 20-5
    Remote Database Backup, 20-6
PRINTAUDIT program
    alphanumeric format, 10-2, 10-4
    audit file sections, 3-8
    audit file view
        contents, 10-4
        format, 10-4
        interval types, 10-14
        introductory lines, 10-5

PRINTAUDIT program (continued)
    audit file view (continued)
        limiting, 10-7
        ordering contents of, 10-12
        record heading information, 10-7
        requesting, 10-9
        selecting record types, 10-8
        selection parameters and examples, 10-17
        timestamps, 10-16
        types of records, 10-7
        where to view, 10-3, 10-9
    hexadecimal format, 10-2, 10-4
    session syntax, 10-10
        ending a session, 10-12
        entering a request, 10-11
        multiple line request, 10-12
privileges, assigning to the SYSTEM/ADDS/UTILITIES program, 15-5
processing job, keeping track of, 6-13
products with a screen interface, SDF Plus physical requirements, 16-6
properties, recording for current dictionary, 18-6

Q

QUICKCOPY command, 7-23
QUICKCOPY TO DASDL option, 7-20
Quickfix row recovery, 8-19
QUIESCE command, DMUTILITY, 7-11

R

RDS-ID DASDL option, 3-26
RDS-LOCATOR DASDL option, 3-26
RDS-MIX-NO DASDL option, 3-26
RDS-PROG DASDL option, 3-26
RDS-PROGRAM DASDL option, 3-26
RDS-USER-INFO DASDL option, 3-26
read errors, Accessroutines handling of, 6-17
real-world data map, 1-14
    mapped to Enterprise Database Server structures, 1-14
REAPPLYCOMPLETED DASDL option, 3-11, 3-18, 13-6, 17-12
REBLOCK DASDL option, 3-13, 3-14
REBLOCKFACTOR DASDL option, 3-13, 3-14
rebuild recovery, 8-23
    syntax examples, 8-24
RECONSTRUCT program, 1-10
    backing up, 7-4
    characteristics and purpose, 1-12
    creating, 4-2
        after DASDL compilation, 4-6
        manually, 4-7
    DASDL definition, 3-21
    services and tasks, 6-5
reconstruct recovery, 8-16
    from a backup dump, 8-17
    from audit file only, 8-18
    Quickfix, 8-19
    syntax examples, 8-17
    WFL jobs, 8-20
RECONSTRUCT/EMPLOYEEDB program, 1-10
    characteristics and purpose, 1-12, 4-7
record
    definition, 2-3
    heading information in audit file view, 10-7
    how the system treats, 2-4
    illustration of, 2-4
    synonym for row, 2-3
    types in audit files, 10-7
record locking, 2-12
    traditional locking, 12-2
    TranStamp locking, 12-1, 12-2, 12-3, 13-11
record serial number (RSN), 12-4, 13-11
    application programs, 12-6
    as tiebreaker, 12-5
recovery
    abort recovery, 8-4
        flowchart, 8-5
        monitoring results of, 8-5
    automatic, types of, 8-2
    data and database, 1-4
    database availability during manual, 8-13
    database backup and, 8-2
    final report from, 8-15
    halt/load recovery, 8-8
        flowchart of process, 8-9
        monitoring results of, 8-10
    manual
        error-handling during, 8-15
        types of, 8-12
    meaning of, 8-2
    reasons to delay, 8-14
    rebuild recovery, 8-23
    reconstruct recovery, 8-16
        from a backup dump, 8-17
        from audit file only, 8-18
        Quickfix, 8-19
        syntax examples, 8-17, 8-18, 8-19

recovery (continued)
    reconstruct recovery (continued)
        WFL jobs, 8-20
    restarting, 8-15
    rollback recovery, 8-25
    single transaction abort, 8-3
    situations that required manual recovery, 8-12
    tasks for managing, 1-4
    types, 8-2
    unaudited database, 8-27
RECOVERY code file DASDL definition, 3-21
related database files, backing up, 7-2, 7-24
release compatibility and support policy, 14-1
release level
    returning to a previous level of
        ADDS, 20-3
        Enterprise Database Server, 20-5
        Remote Database Backup, 20-6
    upgrading to a new level
        ADDS environment, 18-2
        in a non-ADDS environment, 19-2
Remote Database Backup
    facilitating NFT task under AFS mode, 17-7, 18-18, 19-11
    memory requirements
        on the primary host, 16-3
        on the secondary host, 16-3
    providing a queue, 17-7, 18-15
    recompiling system software, 17-7
    returning to a previous level of, 20-6
    running a second version, 17-11
    support library, modifying for a nonusercoded database, 17-8, 18-16, 19-9
    upgrading
        environment for, 18-13, 19-5
        secondary database, 18-17, 19-10
        under ABW mode, 18-14, 19-7
    using with a nonusercoded database, 17-7
REMOVE command, 7-5
reorganization of the database
    planning for, 11-3
    reasons for, 11-2
    types of change, 11-3
REORGANIZATION program
    DASDL definition, 3-21
    services and tasks, 6-6
report on files and rows, 9-3
requirements
    for installation, 15-6
    memory
        data management, 16-3
        database, 16-4
        sample site calculations, 16-5
    SDF Plus, 16-6
resetting optional Enterprise Database Server database functions, 3-10
resources needed for database options, 3-10
RESTART DATA SET DASDL option, 3-12
    DASDL definition, 3-25
restarting recovery operations, 8-6, 8-10, 8-15
RESUME command, DMUTILITY, 7-11
RMSUPPORT library
    backing up, 7-4, 20-5
    services and tasks, 6-5
RO MESSAGE command, 16-7
rollback recovery, 8-25
    syntax examples, 8-26
root table
    sectioned sets, 2-13
    traditional sets, 2-12
round-robin algorithm, 3-6
row
    as part of data set, 2-5
    synonym for record, 2-3
row recovery
    Quickfix, 8-19
    WFL jobs, 8-20
RSN (See record serial number)
running a second version of Remote Database Backup, 17-11
running two versions of Enterprise Database Server
    audit reader support library, 17-9
    DMUPDATE support library, 17-10

S

safety of database, tasks for managing, 1-3
sample EMPLOYEEDB database (See EMPLOYEEDB database)
sample memory calculations, 16-5
scalability of Enterprise Database Server databases, 1-4
scenario
    audit trail throughput problem, 13-5
    general transaction processing environment, 13-10
    reaching data set capacity limits, 13-2
    set contention problem, 13-8
scheduling database maintenance, 6-3
SCRATCHPOOL option, 7-11
Screen Design Facility Plus (SDF Plus)
    libraries, 17-4, 19-3

Screen Design Facility Plus (SDF Plus) (continued)
    memory requirements per user, 16-4
    physical requirements, 16-6
    requirements, overview, 15-6
    terminal and terminal emulator support, 16-7
    verifying installation, 17-4, 19-3
screen-based products, SDF Plus requirements for, 16-6
SDF Plus (See Screen Design Facility Plus)
SECADMIN privileges for SYSTEM/ADDS/UTILITIES program, 15-5
secondary database, upgrading, 18-17, 19-10
secondary family database pack files, 1-10
sectioned
    audit file, 13-7
    data set, 2-6
    set, 2-11
SECTIONS option, 13-11
security requirements for the SYSTEM/ADDS/UTILITIES program, 15-5
security, tasks for managing, 1-4
selection parameters in PRINTAUDIT program, 10-17
separating a data set, 13-9
serial number, 7-11
serial number reporting, 7-10
set
    contention, 2-12, 13-8
    DASDL definition, 2-11
    definition, 2-3, 2-9
    Enterprise Database Server Extended Edition, 2-13
    how it works, 2-10, 2-14
    illustration, 2-10
    maintaining the structure of, 2-11
    online garbage collection, 11-4, 13-11
    overriding DASDL DEFAULT options for, 3-14
    purpose, 2-10
    sections, 2-11, 2-13, 13-10
        requirements, 2-12
        solution to set contention, 13-9
    synonym for index, 2-3
    traditional, 2-12
    updating, 2-11
setting optional Enterprise Database Server database functions, 3-10
SI program (See Simple Installation (SI) program)
Simple Installation (SI) program
    using to restore to previous version of Enterprise Database Server software, 20-5
    using to upgrade a Remote Database Backup environment, 19-6
    using to upgrade software in an ADDS environment, 18-2
single audit trail, 3-5
single data item DASDL definition, 2-18
single transaction abort recovery, 8-3
situations requiring manual database recovery, 8-12
SL (Support Library) system command, 6-7
SL system command associations, C-1
software
    enhanced, 1-7
    installation requirements, 15-6
    interaction for an open database, 5-8
    loading, 17-4, 19-2
    modified, 1-7
    overview of Enterprise Database Server files, 1-3
    supported, 1-8
    unsupported, 1-8
software components
    DMUPDATE utility, 21-4
    DMUPDATESUPPORT system library, 21-4
software release
    returning to a previous level of
        ADDS, 20-3
        Enterprise Database Server, 20-5
        Remote Database Backup, 20-6
    upgrading to a new level
        in a non-ADDS environment, 19-2
        in an ADDS environment, 18-2
software update with DMUPDATE utility, 21-1
    assisted, 21-11
    automatic, 21-12
    controlled, 21-9
    customizing configuration file, 21-7
    DMUPDATE support library, 17-10
    limitations and considerations, 21-16
    planning for, 21-3
    software components, 21-4
    steps to perform, 21-15
    update files, 21-14
    update types, 17-11, 21-3
    using DMSUPPORT library, 5-7
standard Enterprise Database Server software files
    list, 1-9

standard Enterprise Database Server software files (continued)
    services and tasks, 6-4
standard software files, 1-9
STATISTICS DASDL option, 3-11, 3-18
storage locations, choosing options for, 3-10
structures
    analyzing, 9-6
    certifying, 9-4
    DASDL options that manage, 3-13
    data item, 2-15
    data set, 2-5
    global data item, 2-19
    initializing, 3-9
    overview of Enterprise Database Server database, 2-3
    reasons for changing, 11-2
    set, 2-9
    subset, 2-13
    types of, 2-4
    types of change, 11-3
subset
    DASDL definition, 2-15
    definition, 2-3, 2-13
    illustration, 2-14
    maintaining the structure of, 2-15
    purpose, 2-14
    synonym for index, 2-3
    updating, 2-15
SUMLOG file, 6-15
Support Library (SL) system command, 6-7
    associations, C-1
support policy and release compatibility overview, 14-1
SYNCPOINT DASDL option, 3-20
SYNCWAIT DASDL option, 3-20
syntax error messages for DASDL, 4-6
system files, naming, 3-21
SYSTEM/ACCESSROUTINES, 16-3
    DASDL definition, 3-21
SYSTEM/ADDS/UTILITIES program, 18-3, 18-7
    security requirements, 15-5
SYSTEM/COPYAUDIT program
    audit files and, 3-5
SYSTEM/DMCONTROL program
    action after DASDL compilation, 4-6
SYSTEM/DMDATARECOVERY code file
    DASDL definition, 3-21
SYSTEM/DMRECOVERY program
    halt/load recovery flowchart of process, 8-9
SYSTEM/DMSUPPORT library
    DASDL definition, 3-21
SYSTEM/DMUTILITY program
    action after DASDL compilation, 4-6
SYSTEM/RDBSERVER program, 17-7
SYSTEM/RDBSUPPORT program, 17-7
SYSTEM/RECONSTRUCT program
    DASDL definition, 3-21
SYSTEM/RECOVERY code file
    DASDL definition, 3-21
SYSTEM/REORGANIZATION program
    DASDL definition, 3-21

T

table, as synonym for data set, 2-3
tailored database files
    backing up, 7-4
    characteristics and purpose, 1-11, 4-7
    list for EMPLOYEEDB database, 1-10
    listing with the CANDE FILES command, 1-10
    naming, 3-21
    services and tasks, 6-4
tape
    compression, 7-11
    cycle number, 7-11
    density, 7-11
    dump, 7-3, 7-11
    noncompression, 7-11
    SCRATCHPOOL option, 7-11
    serial number, 7-11
    version number, 7-11
    workers, 7-11
tape drives
    CTS9840, 7-10
    DLT, 7-10
tape encryption
    audit files, 7-21
    DMUTILITY, 7-5
task values
    abort recovery results, 8-6
tasks, Enterprise Database Server database management, 1-3
temporary control file, 6-11
terminal and terminal emulator support by SDF Plus, 16-7
terminal configuration for SDF Plus-based products, 16-6
terminating a program abnormally, 6-16
THAW system command, 15-4
thawing frozen libraries, 15-4
tiebreaker
    AA word, 12-5

tiebreaker (continued)
    RSN, 12-5
timestamps
    for files, 6-9
    in PRINTAUDIT program, 10-16
TPS (Transaction Processing System), 1-8
traditional locking, 12-2
transaction
    abort recovery and, 8-4
    definition and flowchart, 1-13
Transaction Processing System (TPS), 1-8
TranStamp locking, 12-1, 12-2, 13-11
    components, 12-1
    results, 12-3

U

unaudited database
    activity during dump, 7-15
    advantages and disadvantages, 3-4
    recovery, 8-27
    supported by Enterprise Database Server, 3-3
unexpected events, handling, 6-14
unsupported software, 1-8
UPDATE DASDL option, 3-9, 3-12
UPDATE EOF DASDL option, 3-23
update levels of database, 6-9
update types, software update with DMUPDATE utility, 21-3
updating
    database, 1-13
        planning for, 11-3
        reasons for, 11-2
        types of change, 11-3
    how Enterprise Database Server enables, 2-4
upgrading
    ADDS dictionaries, 18-2
    dictionaries, 18-5
    Enterprise Database Server databases
        in an ADDS environment, 18-11
        to use Open Distributed Transaction Processing product, 17-12
    Enterprise Database Server Extended Edition, 1-7
    Remote Database Backup environment, 18-13, 19-5
    software
        in a non-ADDS environment, 19-2
        in an ADDS environment, 18-2
    software, procedures for in an ADDS environment, 16-8
USERCODE DASDL option, 3-25

V

VERIFY DASDL option, 7-20
VERIFYDUMP command, 7-5, 7-18
verifying a dump, 7-18
version number, 7-11
virtual sector size, version 2 (VSS-2)
    planning ahead, 16-6
Visible DBS command
    AUDIT BUFFERS, 3-8
    GARBAGE COLLECT, 11-4

W

WFL (See Work Flow Language)
Work Flow Language (WFL)
    command, running application program, 5-3
    jobs
        backing up the database with, 7-12
        example database backup, 7-12
        reconstruct recovery jobs, 8-20
workers, 7-11
workspace size, choosing options for, 3-10
write errors, Accessroutines handling of, 6-18

Z

ZIP DASDL option, 3-11, 4-6

Special Characters

*SYSTEM/SDFPLUS/ARCHIVEMANAGER, 19-3
*SYSTEM/SDFPLUS/COMMANAGER, 19-3
*SYSTEM/SDFPLUS/DICTMANAGER, 19-3
*SYSTEM/SDFPLUS/FORMSPROCESSOR, 19-3
*SYSTEM/SDFPLUS/FORMSSUPPORT, 19-3



© 2008 Unisys Corporation. All rights reserved. 3850 8198 001