DataFlex Connectivity Kit For ODBC User's Guide. Version 2.2




DataFlex Connectivity Kit for ODBC User's Guide, Version 2.2
Newsgroup: news://dataaccess.com/dac-public-newsgroups.connectivity-Kit_Support
Internet Address (URL): http://www.dataaccess.com
FTP Site: ftp://ftp.dataaccess.com
Part Number: 000610.UG

COPYRIGHT NOTICE

Copyright 2003 DATA ACCESS CORPORATION. All rights reserved. Windows, when used in this manual, refers to the Microsoft Windows operating system. No part of this publication may be copied or distributed, transmitted, transcribed, stored in a retrieval system, or translated into any human or computer language, in any form or by any means, electronic, mechanical, magnetic, manual, or otherwise, or disclosed to third parties without the express written permission of Data Access Corporation, Miami, Florida, USA.

DISCLAIMER

Data Access Corporation makes no representation or warranties, express or implied, with respect to this publication, or any Data Access Corporation product, including but not limited to warranties of merchantability or fitness for any particular purpose. Data Access Corporation reserves to itself the right to make changes, enhancements, revisions and alterations of any kind to this publication or the product(s) it covers without obligation to notify any person, institution or organization of such changes, enhancements, revisions and alterations.

TRADEMARKS

Windows NT, Windows 95 and Windows 98 are registered trademarks of Microsoft Corporation. DataFlex and Visual DataFlex are registered trademarks of Data Access Corporation. All other company, brand, and product names are registered trademarks or trademarks of their respective holders.

Table of Contents

Chapter 1 - Introduction ... 7
Chapter 2 - Overview of the Connectivity Kit for ODBC ... 9
    ODBC ... 9
    DataFlex ... 11
    DataFlex and ODBC ... 11
    What you Need ... 12
Chapter 3 - Installation ... 15
Chapter 4 - Converting Data to ODBC ... 19
    Database Builder Conversion ... 21
    CKMgr Conversion ... 25
Chapter 5 - Connecting to Existing ODBC Data ... 29
    Manual connect ... 29
    Connect using Database Builder ... 30
Chapter 6 - Structure caching ... 35
Chapter 7 - Intermediate File ... 37
    Header ... 39
    Table Keywords ... 41
    Column Keywords ... 46
    Index Keywords ... 50
Chapter 8 - Character formats (OEM or Ansi) ... 53
    Setting Character Formats ... 53
    Recommendations ... 54
    Converting Table Formats ... 54
Chapter 9 - ODBC Specific Commands and Techniques ... 57
    Attributes ... 57
    Commands ... 61
Chapter 10 - Record Identity ... 73
    What is a Record Identity? ... 73
    Defining the Record Identity ... 75
    Recnum in Programs ... 76
    RECNUM relationships ... 77
Chapter 11 - NULL Values and Defaults ... 79
    Null values ... 79
    Default values ... 84
    Configuring Null and Default values for conversion ... 86
    Recommendations ... 87
Chapter 12 - Transactions ... 89
    Concurrency ... 89
    Transactions and locking in the DataFlex language ... 92
    Transactions and locking using the command language ... 92
    Transactions and Data-Dictionaries ... 99
    Transactions and the DataFlex Connectivity Kit for ODBC ... 102
Chapter 13 - Database Builder ... 111
    Database Menu ... 111
    File menu ... 114
    Maintenance menu ... 120
Chapter 14 - Error Handling ... 123
    Driver Level Errors ... 123
    Database Level Errors ... 126
Appendix A - ODBC Escape sequences ... 133
    Literals ... 133
    Scalar functions ... 133
Appendix B - ODBC Data Sources ... 143
Appendix C - Configuration file ODBC_DRV.INT ... 147
Appendix D - Getting Support ... 156
    How to Get Technical Support ... 156
    How to Contact Data Access ... 156

Chapter 1 - Introduction

The DataFlex Connectivity Kit for ODBC is used to access ODBC databases from DataFlex programs. ODBC (Open DataBase Connectivity) is an industry standard that allows programs to access multiple database types without rewriting or re-linking the program. A variety of database management systems can be accessed through ODBC, including enterprise database systems such as Oracle, Sybase, and SQL Server, flat-file systems such as dBase and Paradox, and even non-database formats such as Excel and ASCII. Many existing DataFlex applications will be able to use these non-DataFlex database systems without change; other DataFlex applications require changes in order to perform well with these database systems.

This document describes the steps you need to take to convert existing DataFlex data to an ODBC data source, how to connect a DataFlex program to an existing ODBC data source, and how to adjust program code so that it takes advantage of the connectivity options of DataFlex.

Chapter 2 - Overview of the Connectivity Kit for ODBC

ODBC

ODBC, or Open DataBase Connectivity, defines a method of connecting to data sources that is open to as many applications and data sources as possible. To accomplish this, the application and the data source agree upon a common method of accessing the database. This agreement defines a complete set of API function calls and a complete SQL syntax set, which together form ODBC.

The database side of this open connectivity is provided by drivers, contained in Dynamic Link Libraries (DLLs). These drivers transform the ODBC API functions into functions supported by the particular data source being used. The ODBC SQL syntax is translated in a similar way into syntax accepted by the data source. The manufacturer of the database system usually produces these drivers, but there are many third-party vendors as well.

The ODBC architecture consists of four major components, described as follows.

Data source

A Data Source identifies the server and the database therein that will be accessed. A Data Source defines the ODBC driver to use for the connection. Depending on the driver, it also defines the location of the data, the server on which the data resides, the database in which the data resides, and so forth. Data Sources are created through the ODBC Administrator; the administrator can be started from the Control Panel or from the DataFlex Database Builder.

Driver

The driver is a DLL that sits between the driver manager and the data source. It processes ODBC function calls. It passes commands from the application to the data source, after possible translation. It also receives the results of the commands from the data source and passes them back to the application.

Driver Manager

This DLL is generally provided by Microsoft as part of the ODBC installation. It loads driver DLLs and directs function calls to them.

Application

A program that processes data.

[Figure: the four components of the ODBC architecture. The application calls the driver manager, which dispatches to one of several drivers, each of which connects to its own data source.]

It is important to understand that ODBC is designed to expose database capabilities, not supplement them. Thus, application writers should not expect that ODBC would suddenly transform a simple database into a fully featured relational database engine. Nor are driver writers expected to implement functionality not found in the underlying database.
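The dispatch role of the driver manager described above can be sketched as a small model. This is a conceptual toy only: all class names, DSN names, and return strings below are hypothetical, and real ODBC drivers are DLLs exposing the ODBC C API, not Python objects.

```python
# Toy model of the ODBC layering: an application talks to a driver manager,
# which routes each call to the driver registered for the requested data
# source. All names here are hypothetical illustrations.

class Driver:
    """Stands in for a driver DLL that serves one backend."""
    def __init__(self, backend):
        self.backend = backend

    def execute(self, statement):
        # A real driver would translate ODBC SQL into the backend's dialect
        # and ship it over the backend's native protocol.
        return f"[{self.backend}] executed: {statement}"

class DriverManager:
    """Stands in for the Microsoft-supplied driver manager DLL."""
    def __init__(self):
        self.drivers = {}            # data source name -> driver

    def register(self, dsn, driver):
        self.drivers[dsn] = driver

    def execute(self, dsn, statement):
        # The manager does no database work itself; it only dispatches.
        return self.drivers[dsn].execute(statement)

manager = DriverManager()
manager.register("SalesDSN", Driver("SQL Server"))
manager.register("LegacyDSN", Driver("dBase"))
print(manager.execute("SalesDSN", "SELECT * FROM orders"))
```

The point of the model is the indirection: the application only ever names a data source, and the manager picks the driver, which is why drivers can be swapped without touching application code.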

DataFlex

Data Access Corporation introduced DataFlex in 1981. It was the first fourth generation language (4GL) and RDBMS available for Local Area Networks (LANs). In 1991, Data Access Corporation introduced DataFlex revision 3.0, the first commercially available object-oriented 4GL and RDBMS in the industry. Today, DataFlex still comes with the closely integrated database, but also includes an open database API to access industry-leading databases like SQL Server, Pervasive.SQL and DB2. DataFlex offers a complete suite of products that let developers create real-world software solutions. It runs on DOS, LANs, Windows and all popular Unix platforms. Visual DataFlex is available for Windows. The DataFlex WebApp Server is available for Windows NT using the Microsoft Internet Information Server.

DataFlex 3.1 and higher and Visual DataFlex are API-enabled runtimes. All database access uses the same internal core functions in the API part of the runtime. Native DataFlex tables are accessed through a call to the API; the API then passes these requests to an internal DataFlex native driver. This driver is linked to the runtime. The runtime also has the ability to load other database drivers dynamically. For example, the DataFlex Connectivity Kit for ODBC is loaded dynamically. This approach lets the basic DataFlex code remain the same even when the backend data is within an ODBC data source. Installing a database and loading a driver into an existing DataFlex environment makes that environment capable of using the driver and accessing the backend supported by that driver. This flexible driver technique allows you to update drivers independently of the runtime, or vice versa.

DataFlex and ODBC

The Data Access Database API expects database drivers to support a certain set of attributes for a table. These attributes may or may not be available in the supported backend. If such an attribute is not available in the backend, it needs to be stored somewhere outside of the backend database. The API uses so-called intermediate files to store such information. The contents of an intermediate file are partly API-defined and partly driver-specific.

The DataFlex Connectivity Kit for ODBC will use the backend for its attributes whenever possible; the goal is to keep the intermediate file as small as possible. Some information can only be stored in the intermediate file, because ODBC does not support the attribute essential to DataFlex. The ODBC Connectivity Kit expects an intermediate file to be present for every table that is accessed through it.

A design goal of the ODBC client was that a DataFlex program should be able to switch the underlying database to ODBC without the need to adjust that program. This goal imposes a requirement on the tables that can be accessed through the driver: every table must have a record identity. If you use the conversion utilities supplied with the driver, record identities can be created for you automatically. If you want to use the driver to access existing ODBC tables, those tables must have a record identity. If a table lacks a record identity, it cannot be accessed through the driver.

What you Need

In order to use the DataFlex Connectivity Kit for ODBC you will need the following components:

A DataFlex program
An API-enabled DataFlex runtime that supports loading dynamic libraries
The DataFlex Connectivity Kit for ODBC
ODBC (version 3.0 or higher)
At least one ODBC driver
Depending on the database, a Database Management System and the appropriate means to communicate with it

A DataFlex Program

In order to access the data there must be a program; you, the end user, supply this program.

The DataFlex Runtime

Several DataFlex runtimes support the Connectivity Kit. For character mode applications, this is the DataFlex 3.1d (or higher) runtime using the Console Mode runtime operating on Windows. For Graphical User Interface (GUI) applications, this is Visual DataFlex 7 service pack 3 (or higher).

All these runtimes include the Data Access Database API. This API allows DataFlex to communicate with supported database systems with little or no regard to which specific database system is being used. The API has the ability to support several different database systems. This is accomplished by an agreed-upon common method of accessing a database. A database system is accessed through a dynamic library known as an API driver. Such a driver implements the common methods the API expects. The API-enabled runtime has the ability to load API drivers dynamically. This enables DataFlex to communicate with every database system for which an API driver exists. The DataFlex Connectivity Kit for ODBC is simply such an API driver.

The DataFlex Connectivity Kit for ODBC

The DataFlex Connectivity Kit for ODBC is the API driver for ODBC, supplied by Data Access Corporation. It uses the ODBC Call Level Interface (CLI). The DataFlex Connectivity Kit for ODBC translates DataFlex runtime commands into SQL statements. The statement is executed on the data source and the result is returned to the DataFlex runtime.

ODBC

ODBC alone cannot access database tables. ODBC only consists of the driver manager and some applications that enable the user to set up ODBC Data Sources. It is not the intention of this document to fully explain how to set up data sources for particular database systems; specific examples will be given in this document. For further information regarding installation, access and use of ODBC, refer to the instructions supplied by Microsoft or the manufacturer of your ODBC database driver.

An ODBC driver

For each database system to be accessed, an ODBC driver is needed. The database system manufacturer typically provides an ODBC driver. Third-party vendors have also made ODBC database drivers available for many different database systems.
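The translation of DataFlex runtime commands into SQL can be illustrated conceptually. The sketch below is not the Connectivity Kit's actual translation (which is undocumented here and backend-dependent); it only shows the general idea of rendering a record-oriented "find next record past a key value" operation as a parameterized SQL statement. The table and column names are hypothetical.

```python
# Conceptual sketch only: render a record-oriented "find greater than"
# lookup as parameterized SQL. The real SQL emitted by the Connectivity
# Kit is backend-dependent; names here are hypothetical.

def find_gt_sql(table, key_column, value):
    """Build a SELECT that fetches rows past a given key value, in key order."""
    sql = (f"SELECT * FROM {table} "
           f"WHERE {key_column} > ? "
           f"ORDER BY {key_column} ASC")
    return sql, (value,)

sql, params = find_gt_sql("customer", "customer_number", 1005)
print(sql)
print(params)
```

Using a parameter marker (`?`) rather than splicing the value into the statement mirrors how ODBC statements are typically prepared and executed with bound parameters.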

A database system

Some database systems require nothing other than the ODBC driver. Usually these are flat-file database systems. Some systems, however, require a Database Management System to be installed. Such systems typically include connectivity pieces to connect workstations to the DBMS.

Chapter 3 - Installation

You can install the DataFlex Connectivity Kit for ODBC on Windows platforms. The kit must be installed on a machine that already has an API-enabled runtime installed. The installation program will not install database client or server software; it is assumed that you already have a database environment installed. For more information on installing a database environment, refer to the database's manual.

To install the Connectivity Kit you need to run the dfodbc.exe installation program. This program will take you through the steps needed to install the Connectivity Kit in an existing DataFlex console mode and/or Visual DataFlex environment. For more information on installing DataFlex or Visual DataFlex, please refer to the respective Installation and Environment Guide.

1. Select Component

When you start the installation program, the first step is to select the components to install. You have several options:

Client for Visual DataFlex: Select this option if you want to install the Connectivity Kit in a Visual DataFlex (VDF) 4.0c or higher environment. The installation program prompts you for the VDF directory. This is the root directory of the VDF install.

Client for DataFlex Console Mode: Select this option if you want to install the Connectivity Kit in a DataFlex Console Mode 3.1c or higher environment. The installation program will prompt you for the Console Mode directory. This is the root directory of the Console Mode install.

Online Documentation: Select this option if you want the online documentation installed on your machine. It will be installed in the destination directory.

Create Program Group: Select this option if you want a program group created in the start menu. The program group enables you to open the documents or to uninstall the driver.

Development install: Select this option if you want the Connectivity Kit development packages installed in the DataFlex development environment. The packages will be installed in the Visual DataFlex and/or the DataFlex Console Mode development environment, depending on the selections made. This option will also install sample program source code. The sources of the conversion programs will be installed if both this option and the Tools & Utilities option are selected.

Tools & Utilities: Select this option if you want to install conversion tools and utilities. The OEM-Ansi converter will be installed if this option is selected. If you are installing for DataFlex Console Mode, the database conversion program, ckmgr, will also be installed.

2. Select Directories and Setup Installation Options

The next steps let you select the directories where you want the components installed. In most cases, the default directories should be accepted. In addition, you can set up a number of installation options.

3. Finish Installation

Finally, after entering the installation information, the actual installation starts. All files of the selected components will be installed in the directories you have specified.

4. Register the DataFlex Connectivity Kit for ODBC

After the components have been installed, you must register the Connectivity Kit for use of Embedded SQL. The registration must be done in every DataFlex environment where the kit is installed (VDF and Console Mode). You can choose to register later; in that case the Embedded SQL functionality of the Connectivity Kit will run in 31-day evaluation mode. Registering is only needed for Embedded SQL. All other parts of the DataFlex Connectivity Kit for ODBC do not require registration; they run with or without a valid registration.

Uninstall

You can uninstall the Connectivity Kit by using Add/Remove Programs in the Control Panel.

Chapter 4 - Converting Data to ODBC

The DataFlex Connectivity Kit for ODBC is used for two main purposes:

To convert data to ODBC
To attach to existing ODBC data

This chapter discusses the conversion.

Restrictions

Using the DataFlex Connectivity Kit for ODBC, you can connect to a variety of database systems. These systems all have their own rules on which constructions are legal and which data can be put into which column. Some database systems are more restrictive than DataFlex on the columns that can be placed in an index; others do not allow indexed columns to have an offset greater than the defined page size, and so forth. If a table contains a construction that is illegal in the target platform, the conversion will not succeed. There is no way to determine up front, through ODBC, whether a construction is legal in the target database system. Knowledge about the target database system can avoid a lot of frustration in this area and save a lot of time. If an illegal construction is present, the only solution is to change the data definition so that it is acceptable to the target database system. Usually when the data definition changes, the programs need to be adjusted too.

Next to illegal constructions, it is possible that certain columns have values that cannot be accepted by the target database. Many database systems do not accept two (2) digit year dates, for example. If you have such dates in your original DataFlex files, the data needs to be adjusted before it can be converted.

Record Identity

The conversion can automatically add a record identity column to the ODBC table. This column will be named DFRECNUM. The column will not be automatically maintained. For more information on this subject, see Chapter 10 - Record Identity.

Null & default values

The DataFlex Connectivity Kit for ODBC will use a configuration file (ODBC_DRV.INT) to determine the nullability and default value of columns created in the conversion process. If no configuration file is found, the default settings will be used. You can specify target settings per DataFlex type. For example, you can define that Numeric fields should be converted not to accept null values and to use a default value of 0. See Chapter 11 - NULL Values and Defaults for more information.

If more control over the target settings is desired, convert the definition only, adjust the nullability and default values to your specifications, and then copy the data from the original table to the new converted table. Most database systems only allow table definition adjustments to be made on empty tables. In this way a table that has column-specific defaults can be created. The table definition can be adjusted through Database Builder or the utilities of the target database system.

We have found that null values can degrade find performance considerably. It is recommended not to allow null values in indexed columns.

Table character format

DataFlex data is stored in OEM format. Non-DataFlex back ends may expect the data to be stored in Ansi format. When defining the conversion options you can define the table character format to be used in the converted table. For more information on this subject, see Chapter 8 - Character formats (OEM or Ansi).

Utilities

Every environment has its own conversion utility. The Console Mode runtime uses a utility called ckmgr.flx; type dfruncon ckmgr to start the utility. In Visual DataFlex, you should use Database Builder version 1.143 or higher.

Create an ODBC data source

Before you can convert anything to ODBC, you need to create an ODBC data source for your target database system. The data source must be created with the ODBC Administrator program.
Refer to the documentation of the driver you are using for specific details on how to set up a data source.

There are two types of Data Sources: machine Data Sources and file Data Sources. Both types contain similar information; they differ in the way the information is stored. Because of these differences, they are used in somewhat different manners.

Machine Data Sources are stored on the client system. Associated with the Data Source is all the information the ODBC Manager and database driver need to connect to the specified database. There are two Data Source subtypes: User and System Data Sources. A User Data Source can be used by one specific user of the machine; a System Data Source can be used by all users of the machine on which the Data Source is defined.

File Data Sources are stored in a file with the extension .dsn (in ASCII format). The file Data Source stores all the information the ODBC Manager and database driver need to connect to the specified database. The file can be manipulated like any other text file. See Appendix B - ODBC Data Sources for more information on Data Sources.

Database Builder Conversion

Before data can be converted, the driver needs to be loaded. If you selected to automatically load the driver during install, the driver will be loaded on startup. Otherwise, you can load the driver by choosing the Load database driver menu item in the Database menu of Database Builder. From the file dialog that displays, choose the ODBC_DRV.DLL file. After the driver has been loaded, the Database menu will be expanded with extra items; of these new items, choose Convert to ODBC or Convert to ODBC from script.

Convert to ODBC

After selecting Convert to ODBC, a list of all available tables is displayed. Select the tables you want to convert and click the Convert button. This activates the next dialog, where you can specify the conversion options.

Convert to ODBC from script

After selecting this menu item, you will be prompted to select a conversion script file. Conversion script files, by default, have an extension of CNV, but other extensions are allowed. A conversion script file is an ASCII file that specifies which files to convert. For each file you want to convert to ODBC, a line must be created in the conversion script file of the form:
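Since a file Data Source is a plain text file in INI style, it can be inspected or generated programmatically. The sketch below reads one with Python's standard configparser; the [ODBC] section name is the standard one for file DSNs, but the DRIVER, SERVER, and DATABASE values shown are hypothetical examples, and the keywords a real driver requires are defined by that driver's documentation.

```python
# Sketch: reading an INI-style file Data Source (.dsn) with the Python
# standard library. The [ODBC] section header is standard for file DSNs;
# the driver/server/database values below are hypothetical.
import configparser
from io import StringIO

sample_dsn = """\
[ODBC]
DRIVER=SQL Server
SERVER=myserver
DATABASE=sales
"""

parser = configparser.ConfigParser()
parser.read_file(StringIO(sample_dsn))   # a real file would use parser.read(path)

odbc = parser["ODBC"]
print(odbc["DRIVER"], odbc["SERVER"], odbc["DATABASE"])
```

Note that configparser treats keys case-insensitively, which matches the forgiving way ODBC connection keywords are usually written.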

<FileNumber>, <PrimaryFieldName>

where:

<FileNumber> is the filelist number of the file you want to convert to ODBC.

<PrimaryFieldName> is the name of the field that will be the record identifier. Such a field must be numeric and must have a main index that contains only this field. The index must already exist.

A conversion script file for the order entry sample application of Visual DataFlex looks like:

21, ID
25, NUMBER
30, ORDER_NUMBER

After selecting the file(s), you need to specify conversion options in the Convert to ODBC, set options panel. This panel is discussed below; in the convert-from-script process, the Recnum support checkbox is disabled.
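The script format above is simple enough to validate mechanically before running a conversion. The helper below is a minimal sketch and is not part of the Connectivity Kit; it only checks that each line has a numeric filelist number and a non-empty field name.

```python
# Sketch: parsing a CNV conversion script of "<FileNumber>, <PrimaryFieldName>"
# lines, as described above. Illustrative helper only; not part of the
# Connectivity Kit.

def parse_conversion_script(text):
    """Return a list of (file_number, primary_field_name) tuples."""
    entries = []
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue                              # skip blank lines
        number, _, field = line.partition(",")
        if not number.strip().isdigit() or not field.strip():
            raise ValueError(f"malformed script line: {raw!r}")
        entries.append((int(number), field.strip()))
    return entries

script = """\
21, ID
25, NUMBER
30, ORDER_NUMBER
"""
print(parse_conversion_script(script))
```

Running such a check first catches typos in the script before the conversion utility reports them mid-run.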

From the Convert to ODBC, set options dialog you can set up the conversion. In the fields, enter:

1. The data source name (DSN) of the data source you created in the ODBC Administrator.

2. The user ID. This is not needed for all target database systems. If you fill in the information here, it will serve as a default when the ODBC driver prompts for the login information. If, however, you also specify a password, the ODBC driver will not prompt for login information and the entered user ID and password will always be used (see the discussion of the password below).

3. The password of the user. This is not needed for all target database systems. If you fill in a password, you must also fill in the user identification. It is not recommended to supply the information here. If you do not supply the password during conversion, you will be prompted for the password every time the table(s) you convert is (are) opened.

4. The schema name within the database where the tables must be placed. This is only required if the database supports schemas and you want to use a different schema than the default.

5. Uncheck Delete original after conversion if you want the original table to keep existing after conversion. If any errors occur during conversion, the original table will not be removed regardless of the selection you make here. This option should be unchecked if you want to copy the data to the converted table at a later moment.

6. Check Recnum support if you want the automatic record identity column to be added to the ODBC table. When checked, this will add a column called DFRECNUM to the table definition. The new column will be placed in a new index. That index will automatically be set to be the primary index of the table. If you do not check Recnum support, the table will be converted as is. The option is not available when converting from script.

7. Check the Convert definition only checkbox if you do not want to copy the records of the original table over to the new table. This option can also be used to create an empty table in the target environment, manipulate its definition, and copy the data at a later time. If you want to copy the data at a later moment, you will have to uncheck Delete original after conversion.

8. Check the File DSN checkbox when the data source to convert to is a file data source.

9. Check the Run unattended checkbox if you do not want errors to pop up during conversion. If you select this option, a log file will be created in which the results of the conversion, including any error messages, will be written. The log file is called ODBC_DRV.LOG and will be placed in the current working directory; this is the data directory of the current workspace. If there is already a file with this name, the log information will be appended to the existing file.

10. Select the RECNUM Index conversion type. A RECNUM index is an index that contains RECNUM as a segment. Such indexes can be converted in two ways: the conversion process can replace RECNUM by the record identity, or it can create non-unique indexes. Be aware that using non-unique indexes can cause behavior differences between programs working on DataFlex data and ODBC data; the DataFlex database does not support non-unique indexes.

11. Select the table character format of the resulting table. Data can be stored in Ansi or OEM format. The first time the option panel is started, the default format will be read from the DEFAULT_TABLE_CHARACTER_FORMAT setting in the ODBC_DRV.INT configuration file. For more information on this subject, see Chapter 8 - Character formats (OEM or Ansi). For more information on the configuration file, see Appendix C - Configuration file ODBC_DRV.INT.

12. Click the OK button to start the conversion.

CKMgr Conversion

To start the conversion utility, type dfruncon ckmgr. After the utility is started, choose the Convert files to ODBC option from the File menu.

Convert to ODBC

After selecting Convert to ODBC, a list of all available tables is displayed. Select the tables you want to convert and click the Convert button. This activates the next dialog, where you can specify the conversion options.

From the Convert to ODBC, set options dialog you can set up the conversion. The fields and options are the same as in the Database Builder conversion described above: the data source name (DSN), the user ID, the password, the schema name, the Delete original after conversion checkbox, the Recnum support checkbox (which adds the DFRECNUM record identity column and its primary index, and is not available when converting from script), and the Convert definition only checkbox (check it if you do not want to copy the records of the original table over to the new table).
This option can also be used to create an empty table in the target environment, manipulate its definition and copy the data at a later time. If you want to copy the data at a later moment you will have to uncheck Delete original after conversion. Check the File DSN checkbox when the data source to convert to is a file data source. Check the Run unattended checkbox if you do not want errors to popup during conversion. If you select this option, a log file will be created in which results of the conversion including the error messages, if they occur, will be 26 User s Guide

DataFlex Connectivity Kit for ODBC written. The log file is called ODBC_DRV.LOG and will be placed in the current working directory. If there is already a file with this name, the log information will be appended to the existing file. Select the RECNUM Index conversion type. A RECNUM index is an index that contains RECNUM as a segment. Such indexes can be converted in two ways. The conversion process can replace RECNUM by the record identity, or it can create non-unique indexes. Be aware that using non-unique indexes can cause behavior differences between programs working on DataFlex data and ODBC data. The DataFlex database does not support non-unique indexes. Select the table character format of the resulting table. Data can be stored in Ansi or OEM format. The first time the option panel is started; the default format will be read from the DEFAULT_TABLE_CHARACTER_FORMAT setting in the ODBC_DRV.INT configuration file. For more information on this subject see Chapter 8 - Character formats (OEM or Ansi). For more information on the configuration file see Appendix C Configuration file ODBC_DRV.INT. Click the OK button to start the conversion. Source code for CKMGR If the Development and the Tools & Utilities option were chosen during install the source code for the CKMGR program will be installed in the src directory of the DataFlex environment. A subdirectory ckmgr has been created that contains all sources needed to compile the CKMGR program. Converting Data Version 2.2 27
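The effect of the Recnum support option can be pictured in SQL terms. The sketch below, in Python with SQLite rather than DataFlex, shows a table with the DFRECNUM record identity column and a unique index on it; the table name and the other column are hypothetical examples, not kit output.

```python
import sqlite3

# Illustrative sketch: what Recnum support conceptually adds to a converted
# table: a DFRECNUM record identity column plus a unique index on it.
# The table "customer" and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer ("
    "  DFRECNUM INTEGER NOT NULL,"   # automatic record identity column
    "  name     TEXT"
    ")"
)
# The new column is placed in its own unique index, which serves as the
# primary (record identity) index of the table.
conn.execute("CREATE UNIQUE INDEX customer_dfrecnum ON customer (DFRECNUM)")

conn.execute("INSERT INTO customer VALUES (1, 'Alpha')")
conn.execute("INSERT INTO customer VALUES (2, 'Beta')")
rows = conn.execute(
    "SELECT DFRECNUM, name FROM customer ORDER BY DFRECNUM"
).fetchall()
print(rows)  # [(1, 'Alpha'), (2, 'Beta')]
```

Because the index is unique, a second record with the same DFRECNUM value is rejected, which is what makes the column usable as a record identity.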

Chapter 5 - Connecting to Existing ODBC Data

The DataFlex Connectivity Kit for ODBC is also used for attaching to existing ODBC data. This chapter discusses that option. The connection process consists of three manual steps:

1. Creating an intermediate file.
2. Adding a filelist entry.
3. Generating an FD file.

Alternatively, you can use Database Builder to connect to existing ODBC data.

Manual connect

Creating an Intermediate File

To connect to existing ODBC data, you need to create an intermediate file. The intermediate file should at least set the intermediate file keywords DRIVER_NAME and SERVER_NAME. Normally you would also set the DATABASE_NAME keyword. The table to connect to must have at least one unique index defined. This index must have one segment; the segment must be a numeric column. For example, if we want to connect to the table Department, in the data source Company, we would create an intermediate file, called dept.int, with the following content:

DRIVER_NAME ODBC_DRV
SERVER_NAME DSN=Company
DATABASE_NAME Department

Creating a Filelist Entry

The next step is to place a reference to the created intermediate file in the filelist.

In a character mode environment:
Use DFAdmin (dfruncon DFAdmin)
Select the appropriate filelist
From the Edit menu choose New entry

In VDF:
Use Database Builder
Select the appropriate workspace
From the Filelist menu choose New entry

For the new entry you should set the root name to the name of the intermediate file you just created (dept.int). The user and logical names can be set to your specifications. Typically, in the above example, they would be set to Departments in the company and dept respectively.

Generating an FD File

The last step is to generate an FD file for the new filelist entry. The DataFlex compiler uses FD files to import table definitions. In a character mode environment, use DFAdmin; in VDF, use Database Builder to generate the FD file.

Connect using Database Builder

Before Database Builder can be used to connect to a data source, the Connectivity Kit needs to be loaded. If you selected to automatically load the driver during install, the driver will be loaded on startup. Otherwise, you can load the driver by choosing the Load database driver menu choice in the Database menu of Database Builder. From the file dialog that displays, choose the ODBC_DRV.DLL file. After the driver has been loaded, the Database menu will be expanded with extra items; of these new items, choose Connect to ODBC.

Connect to ODBC

After selecting the Connect To ODBC Table menu item, a list of available data sources is presented. Select a data source by clicking on the item in the list and then clicking on the OK button.

You can adjust the contents of the data source list by choosing the type of data sources you want to see. The list can display user or system data sources. To select a file data source, click on the File DSN button and select the data source.

When the data source has been selected, a list of available tables in the data source will be presented. It is possible that you need to log in to the data source; this depends on the database management system pointed to by the data source. Select a table in the same manner as selecting a data source: click on the desired table and then click on the OK button.

After the table has been selected, you will see the intermediate file definition screen. This screen will generate an intermediate file for the table you selected to connect to. If an intermediate file already exists with the proposed name, its settings will not be read in and the file will be overwritten. The screen has two tab pages: one for file level intermediate file settings and one for defining indexes.

The screen only helps in generating an intermediate file. It will not check whether the definition is correct. The indexes defined here do not have to exist in the ODBC table. In general, it will be faster if the indexes actually exist, but it is not required. Remember that a primary index is required and this index must have only one segment, which must be numeric. The primary index segment should uniquely identify a record.

You can define indexes by selecting the field you want added to the index and then clicking on the Segment button. This will add the field to the list of segments. Segments can be moved with the Up and Down buttons. You can delete a segment by clicking the Delete button.

In this case, you need to find out the definition of the DEPT table from Oracle in order to create the correct intermediate file. The Oracle definition has one unique index on the DEPTNO column. This is also a numeric column, so we use it as the primary index. The following intermediate file will be generated when you click on the OK button:

DRIVER_NAME ODBC_DRV
SERVER_NAME DSN=Oracle
DATABASE_NAME DEPT
PRIMARY_INDEX 1
INDEX_NUMBER 1
INDEX_NUMBER_SEGMENTS 1
INDEX_SEGMENT_FIELD 1
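Since an intermediate file is just a text file of keyword/value lines, it can also be produced by a script. The sketch below, in Python rather than DataFlex, writes the DEPT example from above; write_int_file is a hypothetical helper, not part of the Connectivity Kit.

```python
import os
import tempfile

def write_int_file(path, settings):
    """Write keyword/value pairs as CR/LF-separated lines (hypothetical helper)."""
    with open(path, "w", newline="") as f:   # newline="" keeps the literal \r\n
        for keyword, value in settings:
            f.write(f"{keyword} {value}\r\n")

# The keyword list mirrors the generated DEPT intermediate file shown above.
dept_settings = [
    ("DRIVER_NAME", "ODBC_DRV"),             # must be the first keyword
    ("SERVER_NAME", "DSN=Oracle"),
    ("DATABASE_NAME", "DEPT"),
    ("PRIMARY_INDEX", "1"),
    ("INDEX_NUMBER", "1"),
    ("INDEX_NUMBER_SEGMENTS", "1"),
    ("INDEX_SEGMENT_FIELD", "1"),
]

path = os.path.join(tempfile.gettempdir(), "dept.int")
write_int_file(path, dept_settings)
print(open(path, "rb").read().decode().splitlines()[0])  # DRIVER_NAME ODBC_DRV
```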

Chapter 6 - Structure caching

When opening a table through the Connectivity Kit, information on the table definition is assembled. The information is obtained from the meta database. Since this involves a number of database find operations, it may take some time to get all this information. In general, table definitions are very static, especially in a deploy environment. For this reason, structure caching was implemented. Structure caching will dramatically speed up the opening of tables. It stores the table definition in a disk file (*.cch) and reads this disk file instead of getting the information from the meta database.

Storing information in two places is always dangerous: it is possible for the cache to get out of sync with the actual table definition. The structure cache mechanism was designed to prevent getting out of sync as much as possible. Since environments and requirements vary between installations, structure cache behavior can be set up through the Connectivity Kit configuration file.

What is structure caching

Structure caching will store all information (table, column and index) of a table in a cache file (*.cch) when the table is first opened through the Connectivity Kit. The next time the table is opened, this cache file will be used to get the structure information instead of using the meta database.

Location of cache files

By default, the structure cache files will be placed in the same directory as the intermediate file for the table. You can specify a cache path in the Connectivity Kit configuration file. If this path is defined, it must point to a valid directory; all cache files will then be placed in that directory.

Cache paths can be used for easy maintenance or when using multiple Connectivity Kits. Other Connectivity Kits also support a structure cache mechanism. In a mixed environment, it may be needed to give each Connectivity Kit its own cache directory.
Lifetime of cache files

Cache files can get out of sync. There are a number of reasons why this happens. Most of the time, the Connectivity Kit will automatically detect that a cache file is out of sync and regenerate it. The reasons why a cache file can get out of sync are:

The intermediate file changes.
A restructure operation is done through the DataFlex API.
The table definition changes without using the Connectivity Kit to make the definition change.
The Connectivity Kit uses a new cache format.

Most of the cases above are handled automatically. The cache mechanism compares the timestamps of the cache file and the intermediate file. If the intermediate file is newer than the cache file, the cache file will be regenerated. You can switch off the timestamp check in the Connectivity Kit configuration file.

When a restructure operation is done, the cache file will be deleted. Upon the next open of the table, the cache file will be regenerated. Every cache file contains a signature. This signature defines the format of the cache file. When a cache file is opened but the signature does not comply with the expected format, the cache file will be regenerated.

The only situation that is not automatically detected and handled is when table definition changes are made outside the Connectivity Kit. In that case, the cache files of the changed tables should be deleted. When the tables are opened the next time, new cache files will be generated. It is always allowed to delete cache files; they will be regenerated automatically.
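The detection rules above (timestamp comparison, optional timestamp check, signature check, missing cache file) can be sketched as a small decision function. The sketch below is in Python, not DataFlex, and the signature string and function name are hypothetical; the kit's actual cache format is internal.

```python
import os
import tempfile

EXPECTED_SIGNATURE = "CCH1"   # hypothetical cache-format signature

def cache_is_stale(int_path, cch_path, signature, timestamp_check=True):
    """Decide whether a structure cache file (*.cch) must be regenerated."""
    if not os.path.exists(cch_path):
        return True        # no cache yet (or it was deleted): regenerate
    if signature != EXPECTED_SIGNATURE:
        return True        # cache file uses another format: regenerate
    if timestamp_check and os.path.getmtime(int_path) > os.path.getmtime(cch_path):
        return True        # intermediate file is newer than the cache file
    return False

# Small demonstration with two empty files and forced modification times.
d = tempfile.mkdtemp()
int_path = os.path.join(d, "dept.int")
cch_path = os.path.join(d, "dept.cch")
open(int_path, "w").close()
open(cch_path, "w").close()
os.utime(int_path, (1000, 1000))
os.utime(cch_path, (2000, 2000))
print(cache_is_stale(int_path, cch_path, EXPECTED_SIGNATURE))  # False: cache is newer
os.utime(int_path, (3000, 3000))                               # the .int file changes
print(cache_is_stale(int_path, cch_path, EXPECTED_SIGNATURE))  # True: regenerate
```

Note that the case the kit cannot detect (a table definition changed behind its back) is also invisible to this sketch: neither the timestamps nor the signature change, which is why such cache files must be deleted by hand.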

Chapter 7 - Intermediate File

A DataFlex driver in general must support certain functionality. Not all of that functionality is available for all target database formats. The intermediate file is the location where this type of information is stored. The contents of an intermediate file can be divided into three sections:

A section that allows the Data Access API to determine the database driver to use
A section that contains information needed before a table can be opened
A section that contains information needed after a table has been opened

The general format of the intermediate file consists of lines of text separated by carriage return and line feed (CR/LF) characters. Each line may be up to 255 characters long and consists of a keyword and value pair or a comment. Keywords may be upper, lower or mixed case. Comments are preceded by the semicolon (;) character. Inline comments are not supported. White space, defined as blank spaces and tab characters, is not significant and may be used for legibility. The specific meaning and usage of intermediate file keywords are entirely dependent upon the specific database driver, with the exception of the DRIVER_NAME keyword.

Every table that is accessed through the DataFlex Connectivity Kit for ODBC must have an intermediate file associated with it. The intermediate file specifies the location of the table by specifying a server, a database on the server and the table therein to connect to.

Order of Keywords

Intermediate file keywords must be placed in the following order: a header, columns and indexes. The upcoming paragraphs discuss all supported intermediate file keywords, the values they can be set to and the associated attribute, if any. The keywords will be presented in the following format:

<Keyword> (<Usage>)
Value: <Possible values>
Associated attribute: <Attribute_name> (<Type>)

Where:

<Keyword> (<Usage>): The keyword to set in the intermediate file and whether it is required. Since most keywords are optional, the usage is only mentioned if a keyword is required; all others are optional. Optional keywords can be omitted; required keywords must be set.
<Possible values>: A list of values or a description of possible values for the keyword.
<Attribute_name> (<Type>): The name of the attribute associated with the keyword and the type of that attribute.

Associated Attributes

Some keywords have a so-called associated attribute. The behavior that is set up by such a keyword can also be set up by using the attribute. Attributes can be set with the Set_Attribute command; their value can be queried with the Get_Attribute command. Setting up behavior through the intermediate file creates a global, persistent setting. Using the Set_Attribute command results in a local setting, only valid until the next Set_Attribute command. It depends on your specific needs which of the two ways should be used in a specific case.

The Primary_Index_Trigger keyword, for example, has an associated attribute DF_FILE_PRIMARY_INDEX_TRIGGER of type Boolean. There are two ways to switch the attribute on:

Primary_index_trigger YES   ; Intermediate file line

Set Attribute DF_FILE_PRIMARY_INDEX_TRIGGER of ;
    MyTable.File_number To True

Alternatively, you can use the following to switch the attribute off:

Primary_index_trigger NO   ; Intermediate file line

Set Attribute DF_FILE_PRIMARY_INDEX_TRIGGER of ;
    MyTable.File_number To False

Header

The keywords for the intermediate file header must be placed at the beginning of the intermediate file in the order they are discussed here. If an intermediate file does not have this information at the beginning of the file, the file and the associated table will fail to open.

Driver_Name (Required)
Value: ODBC_DRV
Associated attribute: DF_FILE_DRIVER (String)

The Driver_Name keyword is used by the Data Access API to determine the driver that must be used to open the table that is associated with the intermediate file. The driver will be loaded if necessary. Normally, this keyword has been parsed before the ODBC Client is called. The ODBC Client will then start parsing the rest of the intermediate file information. This keyword must be the first keyword in any intermediate file.

Server_Name (Required)
Value: Connection string
Associated attribute: DF_FILE_LOGIN (String)

The Server_Name keyword must be set to a connection string. The connection string identifies the data source to connect to. The string can contain user information, including a password, but that is not required. The string is made up of a number of Keyword=Value pairs separated by semicolons (;).

CONNECTION KEYWORD      DESCRIPTION OF THE VALUE
DSN                     Name of the data source to connect to.
UID                     A user ID, if needed by the data source.
PWD                     The password corresponding to the user ID.
Configuration keyword   A backend-specific keyword. For a list of configuration
                        keywords and their descriptions, see your database's manual.

For example, if we want to connect to the table Department, in the data source Company, we would create an intermediate file, called dept.int, with the following content:

DRIVER_NAME ODBC_DRV
SERVER_NAME DSN=Company
DATABASE_NAME Department

Database_Name
Value: The name of the table to connect to
Associated attribute: None

The name of the table associated with the intermediate file. If this keyword is not set, the intermediate file name (without the .int extension) will be used. Note: the Database_Name keyword was created and designed before actually connecting to SQL databases. This is why it has the old DataFlex meaning of table. In a future revision this may be adjusted to a more SQL oriented keyword.

Schema_Name
Value: The name of the database owner
Associated attribute: DF_FILE_OWNER (String)

The name of the schema the table belongs to. A schema is a collection of names or objects. A schema can contain tables, views and triggers. Schemas provide a logical classification of objects in the database. If the keyword is not set, the user ID used to log in to the data source is used as the schema name.
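The line format described above (one keyword/value pair per CR/LF line, semicolon comments, insignificant whitespace, case-insensitive keywords) and the Keyword=Value shape of the SERVER_NAME connection string can both be sketched as a small reader. The sketch is in Python, not DataFlex, and parse_int_lines and split_connection_string are hypothetical helpers, not Connectivity Kit APIs.

```python
# Sketch of a reader for the intermediate file format described above.
def parse_int_lines(text):
    """Return (KEYWORD, value) pairs; skips blank lines and ; comments."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()                   # whitespace is not significant
        if not line or line.startswith(";"):  # comments start with a semicolon
            continue
        keyword, _, value = line.partition(" ")
        pairs.append((keyword.upper(), value.strip()))
    return pairs

def split_connection_string(value):
    """Split 'DSN=Company;UID=scott' into Keyword=Value pairs."""
    return dict(part.split("=", 1) for part in value.split(";") if part)

sample = (
    "DRIVER_NAME ODBC_DRV\r\n"
    "; comment lines start with a semicolon\r\n"
    "SERVER_NAME DSN=Company;UID=scott\r\n"
    "DATABASE_NAME Department\r\n"
)
pairs = parse_int_lines(sample)
print(pairs[0])                               # ('DRIVER_NAME', 'ODBC_DRV')
server = dict(pairs)["SERVER_NAME"]
print(split_connection_string(server))        # {'DSN': 'Company', 'UID': 'scott'}
```

Consistent with the format description, this reader does not support inline comments: a semicolon only introduces a comment at the start of a line.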

Table Keywords

The following keywords set attributes at table level. They must be set directly after the intermediate file header and before the column keywords (if any).

Dummy_Update_Column

Desktop databases usually do not support positioned updates. In these cases a different update mechanism, called SQLSetPos, that allows records to be locked is used. If the backend does not support exclusive locks on records, the Connectivity Kit will perform a dummy update to lock the record. This mechanism ensures that records are locked when they are found while in a transaction. Since most ODBC drivers do not support table locking, there is a difference between DataFlex locking and ODBC locking.

The dummy update mechanism updates a record directly after it has been found. The record identity column is set to the value just obtained from the find. This happens in the find logic. If the record identity column is not updatable (it could be an auto increment column, for example), you can set up an alternative column by using the intermediate file keyword DUMMY_UPDATE_COLUMN. The Dummy_Update_Column keyword should be set to the number of the column you want to "dummy update". Column numbers start at 1.

A lot of data sources use a so-called optimistic locking strategy. DataFlex programs cannot handle this strategy. Most environments allow the user to change the locking strategy from optimistic to pessimistic. In order for DataFlex programs to correctly lock data, the locking strategy must be set to pessimistic locking! For Access, one needs to set up pessimistic locking as described in the following article: http://support.microsoft.com/support/kb/articles/q225/9/26.asp.

At this moment there is no facility to determine how the backend in use supports locking. There are several ways this can be done depending on the backend in use. Possible behaviors are:

Positioned updates. If the backend supports positioned updates, locking records is supported; no dummy updates are required.
SQLSetPos + exclusive lock. Some backends support the SQLSetPos logic with the possibility to lock a record exclusively. No dummy updates are required.
SQLSetPos + dummy update. If the backend does not support exclusive locks on records, a dummy update will be done directly after a record has been found while in a transaction. In this case, the DUMMY_UPDATE_COLUMN intermediate file setting is used to determine the column to use in the dummy update operation.
None. Some backends are read only and do not support locking at all.

Tested environments are:

Environment        Update behavior
DB2 UDB v7.1       Positioned updates
MS SQL 7.0         Positioned updates
Sybase ASA 7       Positioned updates
Oracle 8.0         Positioned updates
Oracle 8i          Positioned updates
MS Access 2000     SQLSetPos + dummy update

Get_RID_After_Create
Value: YES, NO
Associated attribute: DF_FILE_GET_RID_AFTER_CREATE (Boolean)

This keyword handles the behavior of the Connectivity Kit when the Primary_Index_Trigger keyword is set to YES. After records are created in a table with a triggered primary index, the assigned record identity is moved to the record buffer used in DataFlex. Setting this keyword to NO switches off moving the new identity to the record buffer. This eliminates a client/server communication roundtrip, thus speeding up performance when creating records.

Note: Be aware that setting this keyword to NO can result in unwanted or erroneous behavior. Switching off the move behavior can speed up bulk creation of records. This keyword will rarely be used; normally you will only use this attribute from within program logic that creates records in bulk. In such cases the attribute is switched off, the records are created, and then the attribute is switched back on.
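The shape of the dummy update described above (find a record inside a transaction, then immediately update its identity column to the value just found, purely to acquire a write lock) can be illustrated as follows. The sketch is in Python with SQLite, not DataFlex; SQLite locks at database rather than row level, so this only demonstrates the statement shape, and the DEPT table content is a hypothetical example.

```python
import sqlite3

# isolation_level=None gives manual transaction control, so an explicit BEGIN works.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT)")
conn.execute("INSERT INTO dept VALUES (10, 'ACCOUNTING')")

conn.execute("BEGIN")  # a DataFlex transaction starts
row = conn.execute("SELECT deptno, dname FROM dept WHERE deptno = 10").fetchone()
# Dummy update: set the record identity column to the value just found.
# The data does not change; on a pessimistic backend this write-locks the row.
conn.execute("UPDATE dept SET deptno = ? WHERE deptno = ?", (row[0], row[0]))
conn.execute("COMMIT")

print(conn.execute("SELECT * FROM dept").fetchall())  # [(10, 'ACCOUNTING')]
```

If the identity column itself is not updatable (an auto increment column, say), DUMMY_UPDATE_COLUMN names another column, and the same set-to-its-own-value update is issued against that column instead.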

Max_Rows_Fetched
Value: 0..
Associated attribute: DF_FILE_MAX_ROWS_FETCHED (Positive integer)

The number of records that result sets of find operations on this table should be limited to. The default value is 0 (zero), which means that result sets will not be limited. All other positive integer values will limit the result set to that value.

ODBC uses SQL as its language to manipulate data. SQL is a set oriented language: every statement works on sets and results in a set. A DataFlex find command will be translated into its SQL counterpart, which is sent to the database server. The SQL statement will result in a set of rows that satisfy the condition of the statement for the find command. This result set can have zero, one or more rows. In some cases, a result set will contain all rows of a table. Limiting the result set can improve performance. Unfortunately, there is no rule that can be followed when using this attribute. The best setting for the attribute depends on the program logic and may vary between different functional areas in one program. Normally, the best settings to use when experimenting with values for this attribute are 0, 1 and the number of records that are shown on the screen (in lists).

Primary_Index
Value: 1.. highest index number defined for the table
Associated attribute: DF_FILE_RECORD_IDENTITY (Integer)

The number of the identity index. The DataFlex API requires every table to have a record identity. This is a column in the table that uniquely identifies a row in the table. This column should be numeric and an index should be defined for it. For more information on record identity, see Chapter 10 - Record Identity.

NOTE: The DF_FILE_RECORD_IDENTITY attribute uses column numbers. The Primary_Index intermediate file keyword uses index numbers. When setting up the primary index in the intermediate file, the record identity field is set indirectly.
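The effect of Max_Rows_Fetched described above can be pictured as a cap on the SQL statement a find is translated into. The sketch below is in Python, not DataFlex; the helper, the LIMIT clause and the table are illustrative only, since the kit's actual generated SQL is backend-specific.

```python
# Sketch: how a row cap like Max_Rows_Fetched changes a find's SQL statement.
# build_find_sql is a hypothetical helper; 0 means "do not limit", as above.
def build_find_sql(table, order_column, max_rows_fetched=0):
    sql = f"SELECT * FROM {table} ORDER BY {order_column}"
    if max_rows_fetched > 0:
        sql += f" LIMIT {max_rows_fetched}"
    return sql

print(build_find_sql("dept", "deptno"))      # 0: result set is not limited
print(build_find_sql("dept", "deptno", 1))   # 1: typical single-record find
print(build_find_sql("dept", "deptno", 20))  # e.g. the number of rows in a list
```

The three calls mirror the suggested experimental values: 0, 1 and the number of records shown on screen.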

Primary_Index_Trigger
Value: YES, NO
Associated attribute: DF_FILE_PRIMARY_INDEX_TRIGGER (Boolean)

Indicates that the record identity column is filled automatically by the backend when creating records. After a record has been created, the Connectivity Kit will retrieve the assigned number into the column by performing a select max(recordidentitycolumn) from table statement. The default value for the keyword is NO.

Refind_After_Save
Value: YES, NO
Associated attribute: DF_FILE_REFIND_AFTER_SAVE (Boolean)

If triggers are defined on a table that fill one or more columns in a row that is created or updated, you may use this setting if you want these new column values to be available directly after the save operation. Setting this keyword to YES ensures that the record buffer is up to date. Setting this keyword to NO, the default, switches re-fetching the row off. This eliminates a client/server communication roundtrip, thus speeding up performance when creating records.

System_File
Value: YES, NO
Associated attribute: DF_FILE_IS_SYSTEM_FILE (Boolean)

Indicates if the table is a DataFlex system table. System tables in DataFlex are treated in a special way: they should contain one record and one record only. When a system table is opened, the first (and only) row in the table will be placed in the table's record buffer. DataFlex determines this attribute by the maximum number of records; in ODBC we cannot set a maximum number of rows. Be aware that the number of records in the table does not need to be one. This is determined by program logic, not by the Connectivity Kit. If there is more than one record, the record that will be placed in the record buffer is the first one according to the record identifier sort order.

Table_Character_Format
Value: ANSI, OEM
Associated attribute: DF_FILE_TABLE_CHARACTER_FORMAT (String)

The format of the data in the table. This format can be set to Ansi or to OEM. DataFlex programs use data in OEM format. Most Windows applications expect data to be in Ansi format. If you want to access data from DataFlex and some other Windows tool, it may be required to store the data in Ansi format.

Be aware that setting this value by editing the intermediate file will only change the way data is presented and new data is stored. Existing data will not be converted from OEM to Ansi or vice versa; you need to run a conversion utility to convert existing data. If you are creating new tables (when converting, by using Database Builder or doing a structure operation), the setting of the driver configuration keyword DEFAULT_TABLE_CHARACTER_FORMAT will be used to determine the initial setting of the attribute. For more information on character formats see Chapter 8 - Character formats (OEM or Ansi).

Use_Dummy_Zero_Date
Value: YES, NO
Associated attribute: DF_FILE_USE_DUMMY_ZERO_DATE (Boolean)

Indicates if dummy zero dates must be used for date columns that do not allow null values. The setting has no effect on date columns that allow null values. When set to YES, the Connectivity Kit will translate the DataFlex zero date to the dummy zero date value 0001-01-01. This avoids problems that can arise with sorting on indexes that use the date columns. For more information see Chapter 11 - NULL Values and Defaults.
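The Use_Dummy_Zero_Date translation above is a simple two-way mapping: a DataFlex zero date becomes 0001-01-01 in a NOT NULL date column, and that dummy value is presented back as a zero date. The sketch below is in Python, not DataFlex, and representing the DataFlex zero date as None is an illustrative convention, not the kit's internal representation.

```python
import datetime

DUMMY_ZERO_DATE = datetime.date(1, 1, 1)   # the dummy value 0001-01-01

def to_backend(df_date):
    """Store: a zero date (modeled here as None) becomes the dummy value."""
    return DUMMY_ZERO_DATE if df_date is None else df_date

def from_backend(column_value):
    """Fetch: the dummy value is presented back as a zero date."""
    return None if column_value == DUMMY_ZERO_DATE else column_value

print(to_backend(None))                        # 0001-01-01
print(from_backend(datetime.date(1, 1, 1)))    # None (zero date again)
print(to_backend(datetime.date(2001, 5, 31)))  # 2001-05-31 (real dates pass through)
```

Because 0001-01-01 sorts before every real date, indexes containing the column keep zero-date rows first, which is the sorting behavior the setting is meant to preserve.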

Chapter 7 Column Keywords The following keywords set attributes on column level. They must be set directly after the table keywords and before the index keywords (if any). Column keywords are grouped per column. If you want to set a column keyword, the column must be identified first (Field_Number), and then you can set one or more column keywords for that particular column. The keywords will apply to the last Field_Number that was specified. Most of the keywords discussed will be automatically set by the Connectivity Kit and will never be added to the intermediate file. The length of a column for example, is defined in ODBC and normally this is not changed on the DataFlex side. In some situations, you may want to change the attribute on the DataFlex side only. In those cases, you should use the keywords or associated attributes to setup the desired value. Field_Index Intermediate File Value Associated attribute 0.. highest index number defined for the table DF_FIELD_INDEX (Integer) This keyword will set the main index attribute for a column. If this attribute is not set in the intermediate file, the DF_FIELD_INDEX attribute will be set to the first index the column appears in as a segment. See the DataFlex documentation for more information on main indexes. Field_Length Value Associated attribute 1.. Maximum length for the type of the column DF_FIELD_LENGTH (Integer) The Connectivity Kit will get the lengths defined in ODBC. If you want to define a different length in DataFlex from the length in ODBC, you should use this setting. It defines the length of the column (together with Field_Precision). This keyword is normally used to make sure that two relating columns have the same length. This can be a problem when relating between tables of different back ends. For example, DataFlex will report the length of a numeric field as a multiple of 2. ODBC supports lengths for numeric data that is not a 46 User s Guide

DataFlex Connectivity Kit for ODBC multiple of 2. If you want to relate between an ODBC numeric column of length 3 and a DataFlex field of length 4, you can adjust the field length of the ODBC column in DataFlex only. Field_Name Value Associated attribute The name of the column as reported by DataFlex DF_FIELD_NAME (String) The name of a column as reported by the Connectivity Kit. The keyword is generally used to setup overlap column names. It can also be used to setup a different name for a DataFlex environment from the name in ODBC. There are column names that are legal in the data source but not in DataFlex. If you want to access a table with such a column, use the Field_Name keyword to set it to a legal DataFlex name. An example of this type of name would be File_Number. Field_Number Value Associated attribute 0.. Maximum number of columns in a table. DF_FIELD_NUMBER (Integer) The Field_Number keyword defines a column keyword group. All subsequent column keyword settings will apply to the column with the number specified. The column number will change when another Field_Number keyword is used. It is not possible to separate settings under multiple column groups. Every new column number will overwrite any preexisting settings. Intermediate File Field_Overlap_Start, Field_Overlap_End Value Associated attribute 1.. Number of columns in the table DF_FIELD_LENGTH (Integer), DF_FIELD_OFFSET (Integer) The Field_Overlap_Start and Field_Overlap_End keywords are used to define an overlap column. An overlap column is a logical column that overlaps Version 2.2 47

multiple underlying real columns. The keywords are set to the numbers of the columns that are overlapped. If we want to define a column at number 5 called My_Overlap, starting at column 3 and ending at column 4 (inclusive), we need to place the following lines in the intermediate file:

    Field_Number 5
    Field_Name My_Overlap
    Field_Overlap_Start 3
    Field_Overlap_End 4

The lines above insert a column in the table definition. Column number 5 is a logical column; the original column number 5 and higher shift one position. Overlaps defined in this way always overlap complete columns.

Please note that the overlap column is inserted when the column definition is finished: at a new field group header, at an index group header, or at the end of the intermediate file. Therefore, the following intermediate file definition generates two subsequent overlap columns at positions 5 and 6; the original column number 5 and higher will have shifted two positions.

    Field_Number 5
    Field_Name Eventually_Field_6
    Field_Overlap_Start 2
    Field_Overlap_End 3
    Field_Number 5
    Field_Name Eventually_Field_5
    Field_Overlap_Start 1
    Field_Overlap_End 3

Field_Overlap_Offset_Start, Field_Overlap_Offset_End
    Value: 1..length of a row in the table
    Associated attributes: DF_FIELD_LENGTH (Integer), DF_FIELD_OFFSET (Integer)

The Field_Overlap_Offset_Start and Field_Overlap_Offset_End keywords are also used to define an overlap column. These keywords are set to the offsets in the row of the columns that are overlapped, so they can be used to create so-called underlaps. The Field_Overlap_Offset keywords are not supported in Structure

operations. If you perform a restructure operation on a table with underlaps, they will be forced to complete overlaps by the Structure_End logic.

Please note that every back end has its own rules for offsets and field lengths. Using the same settings in DataFlex and ODBC can result in different overlaps, so be sure to check that the overlap definition is correct. See the discussion of the Field_Overlap_Start and Field_Overlap_End keywords for more information on inserting overlaps and the consequences for other columns' numbers.

Field_Precision
    Value: 1..maximum precision for the type of the column
    Associated attribute: DF_FIELD_PRECISION (Integer)

The Connectivity Kit gets the precision defined in ODBC. If you want to define a precision in DataFlex that differs from the precision in ODBC, use this setting. Together with Field_Length, it defines the length of the column. This keyword is normally used to make sure that two relating columns have the same length, which can be a problem when relating tables of different back ends. For example, DataFlex reports the length of a numeric field as a multiple of 2, while ODBC supports lengths for numeric data that are not a multiple of 2. If you want to relate an ODBC numeric column of length 3 to a DataFlex field of length 4, you can adjust the field length of the ODBC column on the DataFlex side only.

Field_Related_File, Field_Related_Field
    Value: 1..maximum number of tables supported by DataFlex, and 1..number of columns in the related-to table
    Associated attributes: DF_FIELD_RELATED_FILE (Integer), DF_FIELD_RELATED_FIELD (Integer)

DataFlex relations are set up through the DataFlex filelist. In the filelist, all tables are assigned a number. This number is used within DataFlex and the DataFlex API to identify the table; columns are also identified by their number.
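Putting the relation keywords together: the fragment below is a hypothetical example (all numbers are invented) that relates column 3 of the current table to column 1 of the table with filelist number 5.

```
Field_Number 3
Field_Related_File 5
Field_Related_Field 1
```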

By setting these two keywords, a relationship can be defined. The Connectivity Kit does not use the foreign and primary key definitions created in the data source.

Field_Store_Time
    Value: .. 86400
    Associated attribute: DF_FIELD_STORE_TIME (Integer)

This keyword sets the time portion for a datetime type column; for more information on the datetime type, see your database's documentation. The integer represents the number of seconds in the day. A value of -1 indicates that the server time will be used; any other negative value indicates that the client's system time will be used. The time portion is stored upon save every time the column is changed.

Datetime columns contain a date portion and a time portion. The Connectivity Kit reports the column as being of the DF_DATE type, so in a DataFlex program the column can be manipulated using the DataFlex date logic. The time portion can be manipulated using this attribute.

Index Keywords

The following keywords set attributes at index level. They must be placed directly after the column keywords. Index keywords are grouped per index: to set an index keyword, the index must first be identified (Index_Number); after that, one or more index keywords can be set for that particular index. The keywords apply to the last Index_Number that was specified.

The ODBC Connectivity Kit does not support automatic reading of index information from the back end, so you must define indexes in the intermediate file. You should define intermediate file indexes for all indexes that exist in the data source. In addition, you can define indexes for DataFlex only; this is useful for an index that is used only sporadically.

The Structure_End logic favors back-end indexes: it tries to create as many indexes in the data source as possible. Note that if you have defined indexes in DataFlex only because they are seldom used, a Structure_End on the table

will create these indexes in the data source.

Index_Number
    Value: 1..
    Associated attribute: none

The Index_Number keyword opens an index keyword group. All subsequent index keyword settings apply to the index with the number specified, until another Index_Number keyword is used. It is not possible to split the settings for one index over multiple index groups. Using the same index number twice (or more) will result in errors in the table definition.

Index_Name
    Value: the name of the index in the data source
    Associated attribute: DF_INDEX_NAME (String)

ODBC identifies indexes by name; DataFlex identifies indexes by number. The name of an index can be set through the Index_Name intermediate file keyword. When the table is created, this name is used for the index.

Index_Number_Segments
    Value: 1..
    Associated attribute: DF_INDEX_NUMBER_SEGMENTS (Integer)

The Index_Number_Segments keyword defines the number of segments in the index.

Index_Segment_Field
    Value: 1..number of fields
    Associated attribute: DF_INDEX_SEGMENT_FIELD (Integer)

The number of the column in the current segment. There must be an Index_Segment_Field line for every segment in the index.
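For illustration, a complete index group that combines Index_Number, Index_Name and the segment keywords could look like the fragment below (the index number, index name and column number are invented for the example):

```
Index_Number 2
Index_Name CUSTOMER_NAME_IX
Index_Number_Segments 1
Index_Segment_Field 3
```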

For example, if we want to define index 4 as having three segments, where segment 1 is column 5, segment 2 is column 3 and segment 3 is column 2, this would look like:

    Index_Number 4
    Index_Number_Segments 3
    Index_Segment_Field 5
    Index_Segment_Field 3
    Index_Segment_Field 2

Index_Segment_Direction
    Value: ASCENDING, DESCENDING
    Associated attribute: DF_INDEX_SEGMENT_DIRECTION (Integer)

The direction of a certain segment. If you want to set the direction, this must be done directly after the Index_Segment_Field keyword. The keyword can be set to either ASCENDING or DESCENDING; ASCENDING is the default value. Therefore, if we wanted the above index to be descending on the last two segments, the definition would look like:

    Index_Number 4
    Index_Number_Segments 3
    Index_Segment_Field 5
    Index_Segment_Field 3
    Index_Segment_Direction DESCENDING
    Index_Segment_Field 2
    Index_Segment_Direction DESCENDING

Chapter 8 - Character Formats (OEM or ANSI)

You can choose to store your data in either OEM or ANSI format. Native DataFlex tables store their data in OEM format, but this may not be an appropriate format for other databases. The Connectivity Kit can read or write data in either OEM or ANSI format and handle all the required translations automatically. The native character format of your data is invisible to your application (i.e. no programming changes are required). However, you must identify the character format of each table so the Connectivity Kit can handle the translations as needed. This is accomplished with settings in your .int files.

What format do I want for my tables?

Native DataFlex tables are stored in OEM format. If you are moving your data to another database and you expect that DataFlex will be the only application using this data, you could simply store all of your data in OEM format. There are two reasons why you may not want to do this: 1) you may be hooking into an existing table and must use the character format that already exists, or 2) you may want your data to be accessible by other tools and applications that expect the data to be in a specific character format. To simplify the decision, just assume that you want to store your data in the format that is the standard for the selected database.

Setting Character Formats

Determining the character format of a table

Each table's .int file should contain information about how the data is stored. You will have a line in this file that looks like:

    TABLE_CHARACTER_FORMAT OEM <or> ANSI

If your .int file does not contain this setting, the file was created with an earlier version of the Connectivity Kit and has not yet been modified by the newer Connectivity Kit. When the setting is missing, the format is assumed to be OEM. TABLE_CHARACTER_FORMAT tells the driver (and you) what the current character format for a table is.
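Because TABLE_CHARACTER_FORMAT has an associated attribute (DF_FILE_TABLE_CHARACTER_FORMAT, described in Chapter 9), a program can also inspect the format at runtime. The sketch below is illustrative only; the table name Customer is an assumption.

```
Use ODBC_DRV

Procedure ShowCharacterFormat
    Local String sFormat

    Open Customer
    // Query the character format the driver reports for this table
    Get_Attribute DF_FILE_TABLE_CHARACTER_FORMAT Of Customer.File_Number To sFormat
    Showln "Character format of Customer: " sFormat
End_Procedure // ShowCharacterFormat
```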
You must be very careful if you change this value! Changing the value does not change the data, so the driver can end up making improper translations. If you need to change the

character format of a table, you should use a conversion utility. You can safely change this value if the table is empty; you might do this if you wish to set a table's character format before adding or converting data. Normally, you will never need to do this. Instead, you can set a default character format for all new tables as described below.

Defining the default character format for new tables

When creating new tables, Database Builder must know what character format to use for the table definition. It does this by looking at a setting in your configuration file, ODBC_DRV.INT, called DEFAULT_TABLE_CHARACTER_FORMAT:

    DEFAULT_TABLE_CHARACTER_FORMAT OEM <or> ANSI

This setting determines what format is used when defining and creating a new table: it determines whether the table's TABLE_CHARACTER_FORMAT will be set to OEM or ANSI. It also ensures that table conversions save the data in the proper character format. If your configuration file does not contain the DEFAULT_TABLE_CHARACTER_FORMAT setting, or the file is missing, the default is OEM. We advise that you explicitly set this value in your configuration file.

Note that when these settings are missing, the character formats default to OEM. This is done for backward compatibility and does not represent a recommendation.

Recommendations

1. We recommend that all of the tables in any database use the same character format.
2. Existing tables created by pre-2.1 versions of the driver are most likely using OEM format. These will continue to work properly. If you are using ANSI as your default character format, we advise that you convert the format of these tables to ANSI using a conversion utility (see Converting Table Formats).

Converting Table Formats

The conversion utility OEMAnsi is a command line utility that converts data from OEM to ANSI format or vice versa. It also adjusts the table

definition of the converted tables to the new format. For Visual DataFlex 7 Service Pack 3, a wizard was created to set up the conversion process.

OEMAnsi - command line utility

The OEMAnsi command line utility is a DataFlex program that converts data from OEM to ANSI and vice versa. The sources of this conversion program are installed if you choose both Development Install and Tools & Utilities in the installation program. If you do not select Development Install, only the utility itself is installed.

OEMAnsi runs in both Visual DataFlex and DataFlex Console Mode environments. To start the console version, type "dfruncon oemansi <options>" on the command line. To start the Visual DataFlex version, create a shortcut icon and fill in the desired command line in the shortcut properties. The command line utility supports the following options:

-a
    All tables in the current filelist.
-c<TF>
    Target format for conversion. Values of <TF> can be:
        OEM    - Convert tables currently in ANSI to OEM format.
        ANSI   - Convert tables currently in OEM to ANSI format.
        TOGGLE - Toggle the current table format.
-h
    Help information on usage.
-l-
    Do not log; by default logging is on.
-l<LogName>
    Log to file; if no LogName is passed, 'OEMAnsi.log' is used.
-o
    Convert each table in one big transaction. By default, every row is converted in its own transaction. The conversion process only updates data that actually needs to be converted: OEM and ANSI differ below ASCII value 32 and above ASCII value 126; from ASCII value 32 to 126 they are equal. Converting a table may therefore result in no updates at all, a few updates, or an update of all rows in that table, depending on the actual data. If you choose to use one big transaction, the conversion will be faster. Make sure the transaction log of the database is big enough

    to accommodate the transaction size. If you choose individual transactions for each row, make sure a backup of all the tables to convert is available: any error that occurs during the conversion process will leave the table in an undefined state.
-t<TName>
    Table with physical name <TName>.
-w<WSName>
    All tables in the filelist of workspace <WSName>.

VDFOEMAnsi wizard

The VDFOEMAnsi wizard is a user-friendly interface to the same functionality as the command line utility. It allows the user to select the method of conversion (workspace, filelist and/or individual table) and to select a target format. The results are presented in a more Windows-oriented fashion.

To start the VDFOEMAnsi wizard you need Visual DataFlex 7 Service Pack 3 installed. In this environment you can start the wizard by double-clicking the VDFOEMAnsi.vd7 file in the Visual DataFlex Lib directory. The sources of this wizard program are installed if you choose both Development Install and Tools & Utilities in the installation program. If you do not select Development Install, only the utility itself is installed.

Chapter 9 - ODBC Specific Commands and Techniques

The DataFlex Connectivity Kit for ODBC contains a package called ODBC_DRV.PKG. The package contains ODBC specific definitions and commands. You can use these in DataFlex source code to make your application more ODBC specific; note that this may make the program unusable for other database types. In addition to the commands defined in ODBC_DRV.PKG, you can use Embedded SQL. How to use Embedded SQL is described in the Embedded SQL User's Guide.

Attributes

The DataFlex API defines a number of attributes that give information about a table and its definition. In addition to those attributes, some ODBC specific attributes have been added. Attributes that concern the Connectivity Kit can be defined at table, column and index level. Most attributes have the same behavior as described in the DataFlex documentation. We will discuss only the attributes that differ from the standard DataFlex behavior or are defined specifically for ODBC; for more information on other attributes, refer to the DataFlex documentation.

Most attributes defined specifically for ODBC can also be set through intermediate file keywords. Refer to Chapter 7 - Intermediate File for a description of those attributes.

ATTRIBUTE                          INTERMEDIATE FILE KEYWORD
DF_FILE_GET_RID_AFTER_CREATE       Get_RID_After_Create
DF_FILE_MAX_ROWS_FETCHED           Max_Rows_Fetched
DF_FILE_PRIMARY_INDEX_TRIGGER      Primary_Index_Trigger
DF_FILE_REFIND_AFTER_SAVE          Refind_After_Save
DF_FILE_TABLE_CHARACTER_FORMAT     Table_Character_Format
DF_FILE_USE_DUMMY_ZERO_DATE        Use_Dummy_Zero_Date
DF_FIELD_DEFAULT_VALUE             (none)

DF_FIELD_IS_NULL                   (none)
DF_FIELD_NULL_ALLOWED              (none)
DF_FIELD_STORE_TIME                Field_Store_Time
DF_INDEX_NAME                      Index_Name

Attribute usage

The attributes that are set for the Connectivity Kit are attributes at table, column, index and index segment level.

Structure versus non-structure

Most of the extra attributes can be set within or outside of a structure operation. If set within a structure operation (within a Structure_Start..Structure_End command pair), the setting is permanent and the intermediate file is adjusted accordingly. If set outside of a structure operation, the setting exists for the duration of the program or until it is reset.

The DF_FILE_USE_DUMMY_ZERO_DATE, DF_FIELD_NULL_ALLOWED, DF_FIELD_DEFAULT_VALUE and DF_INDEX_NAME attributes can only be set inside a structure operation. The DF_FIELD_IS_NULL attribute can only be set outside a structure operation. All others mentioned above can be set inside or outside of a structure operation.

DF_FILE_GET_RID_AFTER_CREATE (Boolean)

This attribute can also be set through the intermediate file keyword Get_RID_After_Create. Refer to Chapter 7 - Intermediate File for more information.

DF_FILE_MAX_ROWS_FETCHED (Integer)

This attribute can also be set through the intermediate file keyword Max_Rows_Fetched. Refer to Chapter 7 - Intermediate File for more information.
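As a sketch of the structure rule above, a permanent change is wrapped in a Structure_Start..Structure_End pair. The table name, index number and index name below are invented, and the exact Structure_Start/Structure_End arguments may differ per DataFlex revision; check the DataFlex documentation.

```
Use ODBC_DRV

Procedure RenameFirstIndex
    Open OrderHea
    // Changes made between Structure_Start and Structure_End are permanent
    // and are written back to the intermediate file.
    Structure_Start OrderHea.File_Number "ODBC_DRV"
    Set_Attribute DF_INDEX_NAME Of OrderHea.File_Number 1 To "ORDERHEA_BY_NUMBER"
    Structure_End OrderHea.File_Number
End_Procedure // RenameFirstIndex
```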

DF_FILE_PRIMARY_INDEX_TRIGGER (Boolean)

This attribute can also be set through the intermediate file keyword Primary_Index_Trigger. Refer to Chapter 7 - Intermediate File for more information.

DF_FILE_REFIND_AFTER_SAVE (Boolean)

This attribute can also be set through the intermediate file keyword Refind_After_Save. Refer to Chapter 7 - Intermediate File for more information.

DF_FILE_TABLE_CHARACTER_FORMAT (String)

This attribute can also be set through the intermediate file keyword Table_Character_Format. Refer to Chapter 7 - Intermediate File for more information.

DF_FILE_USE_DUMMY_ZERO_DATE (Boolean)

This attribute can also be set through the intermediate file keyword Use_Dummy_Zero_Date. Refer to Chapter 7 - Intermediate File for more information.

DF_FIELD_DEFAULT_VALUE (String)

This attribute sets up the database default value for a column. You can set data source defaults on a column in three different formats: literal, ODBC escape sequence or back-end string. A literal default is set by using a string in single quotes; it should be a valid literal value for the SQL type of the column. An ODBC escape sequence is enclosed in curly brackets {}. Finally, a back-end string is enclosed in square brackets [].

Typical examples of setting default values are:

    Set_Attribute DF_FIELD_DEFAULT_VALUE imyfile imycharfield ;
        To "'Unknown'"
    Set_Attribute DF_FIELD_DEFAULT_VALUE imyfile imydatefield ;
        To "{fn current_date()}"
    Set_Attribute DF_FIELD_DEFAULT_VALUE imyfile imycharfield ;
        To "[convert(char(30), CURRENT_USER)]"

The default will be used when creating records. So if we have a table MyTable with columns A, B and C and use the code below, we end up with a new row in MyTable having default values (if any) for columns B and C.

    Clear MyTable
    Lock
    Move "Not a default value" To MyTable.A
    Saverecord MyTable
    Unlock

See Chapter 11 - NULL Values and Defaults and your database's documentation for more information.

DF_FIELD_IS_NULL (Boolean)

This attribute indicates whether a column has the null value: it is true when the field has the null value and false otherwise. A null value is a special marker that is used when the value of a column is unknown. You can set this attribute to force a column to have the null value. See Chapter 11 - NULL Values and Defaults and your database's documentation for more information.

DF_FIELD_NULL_ALLOWED (Boolean)

This attribute indicates whether a column allows null values to be stored: it is true when the column allows null values and false otherwise. A null value is a special marker that is used when the value of a column is unknown. See Chapter 11 - NULL Values and Defaults and your database's documentation for more information.

DF_FIELD_STORE_TIME (Integer)

This attribute can also be set through the intermediate file keyword Field_Store_Time. Refer to Chapter 7 - Intermediate File for more information.

DF_INDEX_NAME (String)

This attribute can also be set through the intermediate file keyword Index_Name. Refer to Chapter 7 - Intermediate File for more information.
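As an illustration of DF_FIELD_IS_NULL (which is set outside a structure operation), the sketch below tests whether column 2 of a found record holds NULL and then forces it to NULL. The table name and column number are invented for the example.

```
Use ODBC_DRV

Procedure ForceNull
    Local Integer bNull

    Open Customer
    Find Gt Customer By Index.1
    If (Found) Begin
        // True if the column of the current record holds the NULL marker
        Get_Attribute DF_FIELD_IS_NULL Of Customer.File_Number 2 To bNull
        Showln "Column 2 NULL state: " bNull
        // Force the column to NULL and save the change
        Reread
        Set_Attribute DF_FIELD_IS_NULL Of Customer.File_Number 2 To True
        Saverecord Customer
        Unlock
    End
End_Procedure // ForceNull
```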

Commands

The following commands have been added.

CLI_Set_Driver_Attribute / CLI_Get_Driver_Attribute

The CLI_Set_Driver_Attribute and CLI_Get_Driver_Attribute commands can be used to set and get API driver level attributes. The API driver attributes can also be set through the Connectivity Kit's configuration file; the commands give a DataFlex programmer more control over the driver level attributes. Setting the attributes does not change the Connectivity Kit configuration file.

The commands have the same syntax:

    CLI_Set_Driver_Attribute <DriverId> <AttrId> To <svar>
    CLI_Get_Driver_Attribute <DriverId> <AttrId> To <svar>

Where:
    <DriverId>  is the identification of the driver.
    <AttrId>    is the attribute identification.

The attribute identification can be:

DRVR_DEFAULT_NULLABLE_ASCII
    Boolean that indicates if ASCII columns allow NULL values when new columns are created. See also the DF_FIELD_NULL_ALLOWED field attribute.

DRVR_DEFAULT_NULLABLE_NUMERIC
    Boolean that indicates if Numeric columns allow NULL values when new columns are created. See also the DF_FIELD_NULL_ALLOWED field attribute.

DRVR_DEFAULT_NULLABLE_DATE
    Boolean that indicates if Date columns allow NULL values when new columns are created. See also the

    DF_FIELD_NULL_ALLOWED field attribute.

DRVR_DEFAULT_NULLABLE_TEXT
    Boolean that indicates if Text columns allow NULL values when new columns are created. See also the DF_FIELD_NULL_ALLOWED field attribute.

DRVR_DEFAULT_NULLABLE_BINARY
    Boolean that indicates if Binary columns allow NULL values when new columns are created. See also the DF_FIELD_NULL_ALLOWED field attribute.

DRVR_DEFAULT_DEFAULT_ASCII
    String that stores the default value of ASCII columns when new columns are created. See also the DF_FIELD_DEFAULT_VALUE attribute.

DRVR_DEFAULT_DEFAULT_NUMERIC
    String that stores the default value of Numeric columns when new columns are created. See also the DF_FIELD_DEFAULT_VALUE attribute.

DRVR_DEFAULT_DEFAULT_DATE
    String that stores the default value of Date columns when new columns are created. See also the DF_FIELD_DEFAULT_VALUE attribute.

DRVR_DEFAULT_DEFAULT_TEXT
    String that stores the default value of Text columns when new columns are created. See also the DF_FIELD_DEFAULT_VALUE attribute.

DRVR_DEFAULT_DEFAULT_BINARY
    String that stores the default value of Binary columns when new columns are created. See also the DF_FIELD_DEFAULT_VALUE attribute.

DRVR_MAX_ACTIVE_STATEMENTS
    Integer that holds the maximum number of statements allowed for a connection.

DRVR_ERROR_DEBUG_MODE
    Boolean that indicates if the Connectivity Kit should pop up errors in a message box before passing them to DataFlex.

DRVR_DRIVER_DECIMAL_SEPARATOR
    The decimal separator used by the ODBC driver in use.

DRVR_DRIVER_THOUSANDS_SEPARATOR
    The thousands separator used by the ODBC driver in use.

DRVR_DRIVER_DATE_FORMAT
    The date format used by the ODBC driver in use.

DRVR_DRIVER_DATE_SEPARATOR
    The date separator used by the ODBC driver in use.

DRVR_USE_CACHE
    Boolean that indicates if structure caching should be used.

DRVR_REPORT_CACHE_ERRORS
    Boolean that indicates if read errors on structure cache files should be reported.

DRVR_CACHE_PATH
    The path where structure cache files are stored.

DRVR_USE_CACHE_EXPIRATION
    Boolean that indicates if cache expiration must be checked.

DRVR_DEFAULT_TABLE_CHARACTER_FORMAT
    String that stores the default table character format.

DRVR_DUMMY_ZERO_DATE_VALUE
    The value used as dummy

    zero date. This attribute should only be set if the lowest possible date of the database in use is not 0001-01-01; it should then be set to the lowest possible date of the database in use.

DRVR_DEFAULT_USE_DUMMY_ZERO_DATE
    The default for the table attribute to use dummy zero dates. This default is used when creating new tables.

For more information on API driver level attributes, see Appendix C - Configuration file ODBC_DRV.INT.

ODBC_SetConstraint

This command was added to give the programmer more control over the Select statement generated by the driver. It effectively turns off the driver logic that generates the where clause of a select statement and replaces it with what is passed in the command. Since you are overwriting internal logic, this command has the potential to generate unexpected results when misused. That is why the command must be used in one defined way: you must set up the clause, find records in the result set by the index that you want the result ordered by, in forward or backward direction (GT, LT), and then remove the clause.

For example, if we want to use this functionality in the Order Entry sample of VDF to find all order lines for the MODEMS item, we could do it the following way:

    Use ODBC_DRV
    String Clause

    Move "ITEM_ID = 'MODEMS'" To Clause
    ODBC_SetConstraint OrderDtl.File_number Clause
    Repeat
        Find Gt OrderDtl By Index.1
        [Found] Showln "Order detail for a modem is: " ;
            OrderDtl.Order_Number ", " OrderDtl.Detail_number
    Until [Not Found]
    ODBC_SetConstraint OrderDtl.File_number

This feature can speed up reporting in particular. Sometimes looking for related information is more efficient if the where clause is replaced by a ChildColumn = ParentValue construction. The select statements generated by the driver select the desired record and all records that follow according to the index used. By giving the driver a specific where clause based on specific knowledge of the database, you can limit the size of the result sets and thereby increase performance.

The ODBC_SetConstraint command resets the DF_FILE_MAX_ROWS_FETCHED attribute when issued. If the command is issued with a clause, the attribute is set to zero (0). If it is called with no clause (an empty string), the attribute is reset to its original value.

Please note that you need to use SQL syntax in the where clause that you specify. If you want to select on dates, you must use the YYYY-MM-DD format that SQL uses. Normally, this is most easily accomplished as follows:

    Integer OrgFmt OrgSep

    Get_attribute DF_DATE_FORMAT To OrgFmt
    Get_attribute DF_DATE_SEPARATOR To OrgSep
    Set_attribute DF_DATE_FORMAT To DF_DATE_MILITARY
    Set_attribute DF_DATE_SEPARATOR To (ASCII("-"))
    ODBC_SetConstraint MyFile.File_number ("MYDATECOLUMN = '" + String(MyDate) + "'")
    Set_attribute DF_DATE_FORMAT To OrgFmt
    Set_attribute DF_DATE_SEPARATOR To OrgSep

Also note that string constants are placed in single quotes ('').

Of Special Note

If this command is used on a table, you cannot use other finds on that table until ending the ODBC_SetConstraint: as long as the constraint is in force, it overwrites the find logic of the driver. If, before ending the ODBC_SetConstraint, we changed the example above to find on OrderDtl by some other index, or in a different direction (not GT) on the same index, the result would be unpredictable.

    Use ODBC_DRV
    String Clause

    Move "ITEM_ID = 'MODEMS'" To Clause

    ODBC_SetConstraint OrderDtl.File_number Clause
    Repeat
        Find Gt OrderDtl By Index.1
        [Found] Showln "Order detail for a modem is: " ;
            OrderDtl.Order_Number ", " OrderDtl.Detail_number
        Find Lt OrderDtl By Index.1 // The result of this command is unpredictable
    Until [Not Found]
    ODBC_SetConstraint OrderDtl.File_number

You cannot change the buffer between finds to re-seed the find logic; doing that will result in the first record being found again. If we adjust the previous code to the example below, the repeat loop turns into an endless loop, since the same record is found repeatedly.

    Use ODBC_DRV
    String Clause

    Move "ITEM_ID = 'MODEMS'" To Clause
    ODBC_SetConstraint OrderDtl.File_number Clause
    Repeat
        Find Gt OrderDtl By Index.1
        [Found] Showln "Order detail for a modem is: " ;
            OrderDtl.Order_Number ", " OrderDtl.Detail_number
        Move SomeValue To OrderDtl.Order_number
    Until [Not Found]
    ODBC_SetConstraint OrderDtl.File_number

You can use find, save and delete commands on other tables where no ODBC_SetConstraint is in force. You could, for example, expand the code above to show the names of the customers that bought modems; to achieve this you would need to find an OrderHeader and a Customer record.

You can end looping through an ODBC_SetConstraint set any time you like. If we wanted to find only two records in the sample above, we could add a counter, increment it for every found record and break the loop when the counter reaches two.

    Use ODBC_DRV
    String Clause
    Integer Counter

    Move 0 To Counter
    Move "ITEM_ID = 'MODEMS'" To Clause
    ODBC_SetConstraint OrderDtl.File_number Clause
    Repeat
        Find Gt OrderDtl By Index.1
        [Found] Showln "Order detail for a modem is: " ;
            OrderDtl.Order_Number ", " OrderDtl.Detail_number
        Increment Counter
        If (Counter >= 2) Break
    Until [Not Found]
    ODBC_SetConstraint OrderDtl.File_number

You must use SQL syntax in the string that is passed. Familiarize yourself with SQL; there are many good books available.

If you want to write portable applications that support multiple database drivers, you can use the general form of the ODBC_SetConstraint command, CLI_SetConstraint. CLI_SetConstraint performs the same functionality; in fact, it is called from the ODBC_SetConstraint command. The CLI_SetConstraint command works exactly as described for ODBC_SetConstraint, but takes an extra parameter: the identification of the driver that you want to set the constraint for. If we changed the code above to be more generic, it would look like the sample below.

    Use ODBC_DRV
    String Clause
    String DriverId

    Move ODBC_DRV_ID To DriverId
    Move "ITEM_ID = 'MODEMS'" To Clause
    CLI_SetConstraint OrderDtl.File_number Clause DriverId
    Repeat
        Find Gt OrderDtl By Index.1
        [Found] Showln "Order detail for a modem is: " ;
            OrderDtl.Order_Number ", " OrderDtl.Detail_number
    Until [Not Found]
    CLI_SetConstraint OrderDtl.File_number DriverId

Disclaimer

Data Access has done a lot of research into performance issues and design differences between the native DataFlex database and an ODBC Data Source. We found that the ODBC_SetConstraint and CLI_SetConstraint commands can speed up a connection to an ODBC Data Source considerably. While we are looking into ways to improve the internal driver logic to get the kind of result these commands can give, this is not a trivial task and the outcome of further research is uncertain. In the meantime, Data Access wants to offer this solution to the DataFlex community. You should be aware that misuse of these commands can give unexpected results. It is the developer's responsibility to verify that the results are as expected. All risks associated with the use of these commands are the developer's; Data Access Corporation disclaims any and all liability with respect to the use of the ODBC_SetConstraint and CLI_SetConstraint commands.

ODBCAdministrator

This command starts the ODBC Administrator. The window handle that is passed must be a valid window handle.

    Use ODBC_DRV

    Procedure StartODBCAdministrator
        Local Integer WndHandle

        Get Window_Handle To WndHandle
        ODBCAdministrator WndHandle
    End_Procedure // StartODBCAdministrator

ODBCDSNName

Get a data source name.

    Use ODBC_DRV

    Procedure ShowDSNs
        Local String DSNName
        Local Integer Count
        Local Integer NumDSN

        ODBCEnumerateDataSources NumDSN
        For Count From 1 To NumDSN
            ODBCDSNName Count To DSNName
            Showln "Data source " Count ": " DSNName

DataFlex Connectivity Kit for ODBC Loop End_Procedure // ShowDSNs This command will get the Data Source name as specified in the ODBCAdministrator. This command must be preceded by an ODBCEnumerateDataSources command. The enumerate command will store the data source names in memory for further use. ODBCEnumerateDataSources Get the number of data sources and store the data source names in memory for further use. Use ODBC_DRV Procedure ShowDSNs Local String DSNName Local Integer Count Local Integer NumDSN ODBCEnumerateDataSources NumDSN For Count From 1 To NumDSN ODBCDSNName Count To DSNName Showln Data source Count : DSNName Loop End_Procedure // ShowDSNs This command will enumerate the data sources as specified in the ODBC manager. The names of the data sources are stored in memory for further use. The names can be queried by the ODBCDSNName command. ODBCEnumerateFields Get the number of fields of the specified table and store the field names in memory for further use. Use ODBC_DRV Commands and Techniques Procedure ShowFields String DSNName String TableName Local String FieldName Local Integer Count Local Integer NumFields ODBCEnumerateFields DSNName TableName To NumFields Version 2.2 69

Chapter 9 For Count From 1 To NumFields ODBCFieldName Count To FieldName Showln Field Count : FieldName Loop End_Procedure // ShowFields This command will enumerate the fields in the specified table. The names of the fields are stored in memory for further use. The names can be queried by the ODBCFieldName command. ODBCEnumerateTables Get the number of tables of the specified data source and store the table names and the schema the tables belong to in memory for further use. Use ODBC_DRV Procedure ShowTables String DSNName Local String TableName Local Integer Count Local Integer NumTables Commands and Techniques ODBCEnumerateTables DSNName To NumTables For Count From 1 To NumTables ODBCTableName Count To TableName Showln Table Count : TableName Loop End_Procedure // ShowTables This command will enumerate the tables in the specified data source. The names of the tables are stored in memory for further use. The names can be queried by the ODBCTableName command, the schema can be queried by the ODBCSchemaName command. ODBCFieldName Get the name of the field. Use ODBC_DRV Procedure ShowFields String DSNName String TableName Local String FieldName Local Integer Count Local Integer NumFields 70 User s Guide

DataFlex Connectivity Kit for ODBC ODBCEnumerateFields DSNName TableName To NumFields For Count From 1 To NumFields ODBCFieldName Count To FieldName Showln Field Count : FieldName Loop End_Procedure // ShowFields This command will get the field name of the table. This command must be preceded by an ODBCEnumerateFields command. The enumerate command will store the field names in memory for further use. ODBCManager This command has been deprecated. It is replaced by the ODBCAdministrator command. See the description of the ODBCAdministrator command for details. ODBCSchemaName Get the schema name of the specified table. Use ODBC_DRV Procedure ShowSchemas String DSNName Local String SchemaName Local Integer Count Local Integer NumTables ODBCEnumerateTables DSNName To NumTables For Count From 1 To NumTables ODBCSchemaName Count To SchemaName Showln Schema Count : SchemaName Loop End_Procedure // ShowSchemas Commands and Techniques This command will get the schema name of a table. This command must be preceded by an ODBCEnumerateTables command. The enumerate command will store the schema names in memory for further use. ODBCTableName Get the name of the specified table. Use ODBC_DRV Version 2.2 71

Chapter 9 Procedure ShowTables String DSNName Local String TableName Local Integer Count Local Integer NumTables ODBCEnumerateTables DSNName To NumTables For Count From 1 To NumTables ODBCTableName Count To TableName Showln Table Count : TableName Loop End_Procedure // ShowTables This command will get the table name. This command must be preceded by an ODBCEnumerateTables command. The enumerate command will store the table names in memory for further use. Commands and Techniques 72 User s Guide

Chapter 10 - Record Identity

The Data Access Database API allows DataFlex programs to use non-DataFlex databases. In order to function, the API imposes some requirements on the structure of the data it connects to. The API demands that every table it connects to contain a so-called record identity column. If a table does not have a record identity, the API cannot connect to that table.

What is a Record Identity?

A record identity is required for historic reasons. A short introduction to record numbers explains that history.

DataFlex Record Numbers

The DataFlex database stores its information in disk files. Every table is stored in several disk files: one file contains the actual data, the other files contain the indexes, a DataFlex compiler include file (so the table can be used in programs) and a column name file.

DataFlex data is accessed according to a defined sort order called an index. One order always exists, and that is the storage order. This order can be used from within a DataFlex environment and is referred to as the record number (RECNUM) order. The record number is a consecutive positive number starting at 1 that indicates a record's relative position in the table. It can be used to search the table for a specific record. Since record numbers indicate a record's relative position in the table, they can be used in conjunction with the record size to determine the record's position in the disk file. This involves minimal disk access and results in a fast operation.

The record number can be used within the DataFlex environment as if it were a field in the table. In reality, it is a logical column; no disk space is allocated to store record numbers. Record numbers are assigned when records are created; DataFlex programmers cannot control the record number for a new record. Data Access has strongly discouraged using record numbers as meaningful information. Older DataFlex systems used to misuse this logical column as, for example, a customer number or order number.

DataFlex record numbers have the following characteristics:

- Uniquely identify a row
- Automatically available
- Automatically maintained
- Unchangeable for a given row
- Reusable
- Offer faster access than other indexes
- Sequential, consecutive numbers ranging from 1 to 16 million

The DataFlex runtime and packages rely on the existence of record numbers. The most common way to keep track of the current record is to store the record number in some type of integer variable (be it a property, local or global). If that current record needs to be re-found, this is done by issuing a Find Eq By Recnum command, and the record buffer is refreshed with the desired record.

API Requirements

The API requires every table it accesses to have a record-number-like column called the record identity. This column can be a logical column or can actually exist in the table. It should have the following characteristics:

- Uniquely identifies a row
- Numeric
- Ranges from 1 to 2^31 - 1 (2,147,483,647)
- Is available as a unique index

The API does not require the record identity to:

- Be automatically available
- Be automatically maintained
- Be unchangeable for a given row
- Be reusable
- Offer faster access than other indexes
- Be a sequential, consecutive number ranging from 1 to 16 million

All tables accessed by the Data Access Database API must fulfill the four required characteristics. Since the API does not require that the record identity be automatically maintained, there may be drivers that require additional programming to maintain the identity. Also be aware that other, non-DataFlex applications creating data in tables accessed by DataFlex need to make sure that the record identity meets the minimum requirements of the API.

Theoretically, it is possible to have a non-numeric record identity. As long as no commands and/or classes are used that store record identifiers in integer variables, this will work perfectly. In practice, however, it is nearly impossible to write a program that does not use such classes or commands. For practical reasons, it is therefore safe to state the following:

Note: The Data Access Database API requires every table it accesses to have a numeric record identity column. This column must uniquely identify rows in the table, and a unique index must be defined on the column.

Other Databases

Other database types do not have a record number column, or use some other form of identifying the rows in a table. Some databases use record-number-like functionality, others use so-called ROWIDs, some use timestamps, and others do not use any type of logical identifier (at least not one accessible by client programs).

ODBC

Since ODBC is an open specification, it is unknown whether a given data source supports record identity columns, ROWIDs, timestamps, or none of the mentioned identifiers. When converting a DataFlex table to ODBC, an extra column can be appended to the original definition. This column is called DFRECNUM and is a numeric column. The program that used the original DataFlex data may need to be adjusted.

When connecting to existing ODBC data (as opposed to converting), the table to connect to must have a column that can be used as the record identity. If such a column does not exist, it must be appended before the table can be used from within a DataFlex environment.
Defining the Record Identity

The record identity of a table should be set up through the intermediate file by using the Primary_Index keyword. Note that in order to specify an identity column you need to use an index number; this is done with future enhancements in mind. You should set the keyword to the number of the unique index that has the record identity column as its only segment. For example, if we have a column AnumericColumn for which an index named AsampleIndex is defined, we could create the following intermediate file entries:

    Primary_Index 3

    Index_Number 3
    Index_Name AsampleIndex
    INDEX_NUMBER_SEGMENTS 1
    INDEX_SEGMENT_FIELD 4

Recnum in Programs

As mentioned above, the DataFlex record number column can be accessed as if it exists. DataFlex programmers can manipulate the RECNUM column like any other column, except for its value when creating a record. A common technique is to store the record number, do some operations, then re-find the record and manipulate the buffer. This is done by:

    Move MyTable.Recnum To LocalVariable
    Move LocalVariable To MyTable.Recnum
    Find Eq MyTable By Recnum

The above piece of code will still work when using the DataFlex Connectivity Kit for ODBC. The Connectivity Kit automatically translates every reference to RECNUM into a reference to the record identity column.

DataFlex handles moving a value into the record number column in a special way. In addition to moving the value into the (logical) column, the record buffer's status is set to INACTIVE. This feature is used to create copies of existing records. The Connectivity Kit fully supports this behavior. If, for example, we have a table called MyTable with a record identity column called MyRID, the following two commands will both place 10 into the MyRID column:

    Move 10 To MyTable.Recnum
    Move 10 To MyTable.MyRID

The first command will also set the MyTable record buffer to inactive; the second command will not change the status of the record buffer.

RECNUM Relationships

Some older DataFlex environments would misuse the RECNUM field as meaningful information. Others would even point relationships to a RECNUM field. This is known as a RECNUM relation. The DataFlex Connectivity Kit for ODBC supports converting tables that have a RECNUM relationship. When converting with RECNUM support, the record identity is filled with the value of the RECNUM of the DataFlex record that is converted.

Chapter 11 - NULL Values and Defaults

Null and default values are closely related. Usually one will define a default value for most columns that do not allow null values.

Null values

SQL represents the fact that some piece of information is missing by a special marker called a null. A column that contains a null indicates that its real value is not known. Informally you can think of such a column as containing a null, or as being null. One could argue that the term "null value" is nonsense, because the whole point about nulls is that they are not values. We are aware of this argument, but the term is commonly used within the database community, so we will use it in this discussion.

How nulls are actually represented in the system is implementation dependent. That implementation must (obviously) be such that the system can distinguish between null and non-null values.

Null values complicate internal database logic by creating a three-valued logic, as opposed to the normal two-valued (binary) logic. In normal logic operations, the result is either true or false. Using null values adds a third outcome: unknown. This complicates logic considerably. It may also confuse end users if they see unexpected results. More importantly, we have found that allowing null values in index columns has a negative impact on performance. We therefore advise against allowing null values in any column that appears in an index. If you do not allow null values in a column, make sure you define an appropriate default for that column. Defaults can be set up per (DataFlex) type.

Whether a column allows null values depends on the table definition. This definition can be adjusted by using the DF_FIELD_NULL_ALLOWED attribute. When converting, a configuration file is used that sets up nullability per type to convert. These defaults can be adjusted in the driver configuration file odbc_drv.int.

Using null values in a program

DataFlex has no equivalent for the SQL concept of null values in a database. The Connectivity Kit adds a column attribute, DF_FIELD_IS_NULL, of type boolean that can be used to check whether a column is null, or to set it to null.
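As a sketch of how the attribute might be used, the fragment below checks and sets DF_FIELD_IS_NULL around a find and a save. The table MyTable, its column names, and the field number 2 are hypothetical; only the attribute name and the Get_Attribute/Set_Attribute commands come from this guide.

```dataflex
// Assumed: MyTable has field 2 (DateCol), which allows null values.
Local Integer bIsNull

// After a find, check whether DateCol came back as null.
Find Eq MyTable By Index.1
Get_Attribute DF_FIELD_IS_NULL Of MyTable.File_Number 2 To bIsNull
If (bIsNull) ;
    Showln "DateCol contains the null value"

// To save a record with DateCol explicitly null, set the
// attribute before saving.
Clear MyTable
Set_Attribute DF_FIELD_IS_NULL Of MyTable.File_Number 2 To (True)
Saverecord MyTable
```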

If a row is found that contains a column with the null value, you can still get the value of the column: it will be set to empty or zero (0), depending on the column's type. The only way to know that the column contains the null value is by checking the DF_FIELD_IS_NULL attribute of the column. If a program wants to save a null value in a record, it needs to set the DF_FIELD_IS_NULL attribute for the column before saving.

When records are created, only the columns that have been assigned a value will be assigned an explicit value. All others will get the default value defined for the column in the table definition. If no default is defined, the database system will use null.

Sort Order of Null Values

When a column contains null values, a problem arises when we sort on that column. A comparison with a null value returns the unknown result, so there is no way to determine the collating sequence of null values. This has been solved by always collating nulls in a standard way. The way null values are sorted is known as null collation. Nulls can be collated in the following ways:

- End: always at the end, regardless of ascending or descending order
- High: at the end when sorting in ascending order, at the beginning when sorting in descending order
- Low: at the beginning when sorting in ascending order, at the end when sorting in descending order
- Start: always at the beginning, regardless of ascending or descending order

Check your database's documentation to see what null collation is used.

Please note that the collation of null values can cause behavior differences between a program working on DataFlex data and one working on ODBC data. Since DataFlex does not support the concept of null values, these are represented as empty or 0 (depending on the type). These values may collate differently in ODBC. Finding the first, or last, record may give different results between a DataFlex table and its ODBC equivalent. This only happens if null values are allowed in the segments that make up the index in use.

Dates and Null Values

Almost all possible values of the DataFlex database types can be mapped to legal ODBC values. The only exception is the zero date value in DataFlex.

There is no sensible date value in ODBC to map this value to other than null. Pre-2.1 revisions of the DataFlex Connectivity Kit for ODBC would therefore convert date columns to allow null values, with a default value of null. This, however, could introduce a compatibility issue when converting existing DataFlex data to ODBC. When a date column was part of an index segment, finding on that index could give a different result when using the DataFlex data than when using the ODBC data. This is caused by the way the database accessed through ODBC collates null values.

Let us assume a DataFlex table containing an index with three segments: NumCol, DateCol and Recnum. Also assume the table has been filled with the following data, shown sorted by the index in question:

    NumCol  DateCol     Recnum
    1       0           1
    1       01/01/2000  2
    1       02/02/2000  3
    1       03/03/2000  4
    2       0           5
    2       02/02/2000  6
    2       04/04/2000  7
    3       0           8
    3       05/05/2000  9

Now let's look at the same data after it has been converted to a database, accessed through ODBC, that uses a null collation of High. The date column is converted to allow null values and the zero date value is mapped to null. The table below is again sorted by the index in question:

    NumCol  DateCol     Recnum
    1       01/01/2000  2
    1       02/02/2000  3
    1       03/03/2000  4
    1       null        1
    2       02/02/2000  6
    2       04/04/2000  7
    2       null        5
    3       05/05/2000  9
    3       null        8

Now let's take a look at some DataFlex code and the results (the numbers indicate the Recnum of the record that is found):

    Code                            DataFlex  ODBC
    Clear MyTable
    Move 3 To MyTable.NumCol
    Find Lt MyTable By Index.N      7         9

The different results described above are technically correct. When converting an existing DataFlex application, handling the find differences outlined above would require code changes. The DataFlex Connectivity Kit for ODBC offers a way to handle this without any code changes: the dummy zero date value.

Dummy Zero Date

When the dummy zero date is used, the DataFlex Connectivity Kit for ODBC uses the lowest possible ODBC date value (0001-01-01) as the value to map the DataFlex zero date to. The Connectivity Kit handles this internally: the DataFlex program still uses the zero date value, and the Connectivity Kit automatically converts the dummy zero date value whenever needed.

If the database in use has a different lowest possible date value, this can be configured. Check your database documentation for the lowest possible date value and set the dummy zero date to this value. The dummy zero date value defaults to 0001-01-01. It can be specified through the DUMMY_ZERO_DATE_VALUE driver configuration keyword and should be set in military format (yyyy-mm-dd). For more information on the driver configuration file, see Appendix C - Configuration file ODBC_DRV.INT.

The dummy zero date logic can be configured per table by the DF_FILE_USE_DUMMY_ZERO_DATE table attribute. If the attribute is set to true for a table, all date columns that do not allow null values will apply the dummy zero date logic.

If the dummy zero date logic is used, there is no longer a collating difference between DataFlex and ODBC, because there are no null values in the ODBC table. If there is a need to access the data from outside the DataFlex Connectivity Kit for ODBC, the dummy zero date value must be handled in a special way.

Next to solving the collating problem, the dummy zero date logic also speeds up indexed find operations. In pre-2.1 Connectivity Kit versions, date columns allowed null values, and finding on an index containing segments that allow null values is slow. Since the dummy zero date logic does not create columns that allow null values, finding is as fast as finding using other indexes.

The introduction of the dummy zero date concept changed a number of default settings used at conversion. In pre-2.1 versions, date columns would be converted to allow null values and to have a default value of null. Since 2.1, date columns are converted not to allow null values and to have a default value of 0001-01-01. These defaults can be adjusted in the driver configuration file odbc_drv.int.

Connectivity Kit Independent Code for Dummy Zero Dates

The dummy zero date concept is also supported in other DataFlex Connectivity Kits, but the dummy zero date value is not the same for all Connectivity Kits. When writing embedded SQL code, the programmer needs to filter the dummy zero date values from result sets, and to do so the value must be known. For this reason the driver-level attribute DRVR_DUMMY_ZERO_DATE_VALUE was created.
A backend-independent way to handle this would be:

    Function CreateZeroDateQuery String sCKId Returns String
        String sZeroDateValue

        CLI_Get_Attribute sCKId DRVR_DUMMY_ZERO_DATE_VALUE ;
            To sZeroDateValue
        Function_Return ("Select * From MyTable Where MyDate = '" ;
            + sZeroDateValue + "'")
    End_Function // CreateZeroDateQuery

For a detailed discussion of the CLI_Get_Driver_Attribute command, see Chapter 9 - ODBC Specific Commands and Techniques.
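On the configuration side, a sketch of an odbc_drv.int entry that overrides the dummy zero date for a backend whose lowest legal date is not 0001-01-01 could look like the fragment below. The keyword-followed-by-value layout mirrors the other intermediate file entries shown in this guide, and the 1753-01-01 value (the minimum of SQL Server's datetime type) is only an example; check your own backend's documentation for its lowest supported date.

```
; odbc_drv.int - driver configuration file (fragment)
; Map the DataFlex zero date to the backend's lowest legal date,
; in military (yyyy-mm-dd) format.
DUMMY_ZERO_DATE_VALUE 1753-01-01
```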

Converting from Nullable Columns to Dummy Zero Date

Data created with a pre-2.1 Connectivity Kit allows null values for date columns. There may be a need to convert such data to use the dummy zero date logic in order to avoid the sort order or performance problems. The conversion can be done in Database Builder or through a DataFlex program.

In Database Builder, the Parameter tab page allows you to switch the DF_FILE_USE_DUMMY_ZERO_DATE setting for a table, and the Field tab page allows you to switch the nullability of a column. To convert pre-2.1 data to use dummy zero dates, check the "Use dummy zero date" checkbox on the Parameter tab and, for every date column in the table, uncheck the "Nullable" column in the Field tab.

In console mode, a program must be written that converts the tables. The essence of the conversion logic would be:

    Procedure ConvertTable Integer iTable
        Local Integer iNumFields
        Local Integer iField
        Local Integer iType

        Structure_Start iTable
        Set_Attribute DF_FILE_USE_DUMMY_ZERO_DATE Of iTable ;
            To (True)
        Get_Attribute DF_FILE_NUMBER_FIELDS Of iTable To iNumFields
        For iField From 1 To iNumFields
            Get_Attribute DF_FIELD_TYPE Of iTable iField To iType
            If (iType = DF_DATE) ;
                Set_Attribute DF_FIELD_NULL_ALLOWED Of ;
                    iTable iField To (False)
        Loop
        Structure_End iTable
    End_Procedure // ConvertTable

Default values

SQL allows column definitions to include a default value. This default is used when records are created and no value for the column has been supplied. SQL defaults are different from DataFlex data dictionary default values. The DataFlex default values, as implemented in the data dictionary class, are used at data entry time: while the user is entering data, the default is displayed on screen. SQL defaults are used at save time: data entry has been completed, and the user will not see the default value until the record in question is reselected. Both approaches have their merits. The data entry time approach has the advantage of direct feedback; the save time approach has the advantage of data integrity guarantees.

You can set up SQL defaults on a column in three different formats: a literal, an ODBC escape sequence, or a backend string. Since defaults are used when records are created and no value for the column has been specified, they are usually set to a value that indicates the data is unknown, or to a function that generates the desired information. Defaults are also used to make sure that columns that do not allow null values always have a valid value.

- Literal values must be enclosed in single quotes. You can use any literal value that is legal in the data source for the type and length of the column.
- ODBC escape sequences must be enclosed in curly brackets {}. The main reason to use ODBC escape sequences rather than backend strings is that ODBC escape sequences are supported by other drivers. ODBC defines escape sequences for a number of language elements; two escape sequence types can be used when setting up default values: the date, time, timestamp and datetime interval literals, and the scalar functions. For a detailed discussion of ODBC escape sequences, see Appendix A.
- Backend strings must be enclosed in square brackets []. They are used for any value that is not a literal in quotes and not an ODBC escape sequence; usually they are used to set up a default scalar function.

The default value of a column depends on the table definition. This definition can be adjusted by using the DF_FIELD_DEFAULT_VALUE attribute.
When converting, a configuration file is used that sets up defaults per type to convert. These defaults can be adjusted in the driver configuration file odbc_drv.int.

Typical examples of setting default values are:

    Set_Attribute DF_FIELD_DEFAULT_VALUE Of iMyFile iMyCharField ;
        To "'Unknown'"
    Set_Attribute DF_FIELD_DEFAULT_VALUE Of iMyFile iMyDateField ;
        To "{fn current_date()}"
    Set_Attribute DF_FIELD_DEFAULT_VALUE Of iMyFile iMyCharField ;
        To "[convert(char(30), CURRENT_USER)]"

The default will be used when creating records. So if we have a table MyTable with columns A, B and C and use the code below, we end up with a new row in MyTable having the default values (if any) for columns B and C:

    Clear MyTable
    Lock
    Move "Not a default value" To MyTable.A
    Saverecord MyTable
    Unlock

Check your database's documentation to see if it supports SQL defaults.

Configuring Null and Default Values for Conversion

When data accessible by DataFlex is converted to ODBC, a problem arises: DataFlex does not support nulls and defaults at the database level. It is important to set up the null and default behavior correctly when converting. Since allowing nulls and default values are part of a table definition, it can be expensive to adjust these settings after conversion. For that reason the Data Access Connectivity Kit for ODBC allows the user to set up conversion defaults per type. These defaults are set up in an ASCII file called odbc_drv.int. Use of the configuration file is not required; in case no configuration is present, the Connectivity Kit will use the following scheme:

    DATAFLEX TYPE  NULL ALLOWED  DEFAULT
    Ascii          No            (empty)
    Numeric        No            0
    Date           No            0001-01-01
    Text           No            (empty)
    Binary         No            0

Next to the defaults per type, the default table setting to use dummy zero dates can be set up through the configuration file by setting DEFAULT_USE_DUMMY_ZERO_DATE. The default set for this configuration file setting will be used when creating a new table through a structure operation (when converting, for example). For a detailed discussion of the Connectivity Kit configuration file, odbc_drv.int, see Appendix C - Configuration file ODBC_DRV.INT.

Recommendations

It is strongly recommended never to allow null values in any type of column. Sometimes it is required to use null values for a certain column; if so, make sure that column is never used as an index segment. The use of null values in index segments is supported, but it may give unexpected results. When converting existing DataFlex data to ODBC, we strongly advise using the defaults for conversion: that is, do not allow null values in any column and use dummy zero dates for every date column.

Chapter 12 - Transactions DataFlex Connectivity Kit for ODBC 12 This chapter discusses the way transactions and locking are handled by DataFlex and the DataFlex Connectivity Kit for ODBC. The chapter starts with a small introduction to concurrency issues, discuss the DataFlex programming language aspect then it will go into details of the supported database formats. Concurrency Concurrency is the ability of multiple users to access and modify data simultaneously and is of vital importance in any database environment. The database should handle multiple users accessing the database simultaneously in a correct way. The means to achieve this are transactions and locking. Transactions A transaction is a unit of work that is done as a single atomic operation. Transactions either succeed or fail as a whole. For example consider a transaction that transfers money from one bank account to another. This involves withdrawing money from one account and depositing the money in the other account. It is important that both actions occur, it is unacceptable for one action to succeed and the other to fail. A database that supports transactions is able to guarantee that either both steps succeed or both steps fail. The transaction support of a database system must have the ACID (Atomicity, Consistency, Isolation and Durability) properties. Atomicity. A transaction must be an atomic unit of work. Consistency. When completed a transaction should leave the database in a consistent state. Internal data structures such as indexes must be correct at the end of the transaction. Isolation. Modifications made by the current transaction must be isolated from modifications made by other concurrent transactions. A transaction should not be able to see intermediate results of another transaction. Durability. After a transaction has completed its effects are permanent. Transactions are started by a program. The program then manipulates data in the database. 
Eventually the transaction is committed, or rolled back. A Transactions Version 2.2 89

Chapter 12 commit will make all changes made by the transaction permanent. A rollback will remove all changes made by the transaction just like the transaction was never started. Most databases support all of the transaction properties. The Isolation property usually can be set up to be less restrictive. So called Isolation Levels have been defined. The lowest level will make intermediate results available to other transactions, the highest level will not. There is an inverse relation between Isolation Level and concurrency. The higher the Isolation Level, the lower the concurrency. Transactions Locking A lock guarantees exclusive access to the object on which it is applied. All databases that support multi user access use some form of locking. Locking is used to enforce the Isolation transaction property. Although there are a number of variants on implementation level basically databases support the following lock granularities: row locking, page locking and table locking. Usually a database supports one or more of the lock granularities or uses a mechanism that will automatically select the best locking granularity for the task at hand. There is an inverse relation between lock granularity and concurrency. The smaller the granularity, the bigger the concurrency. If for example we have a table with 10 rows made up of 5 pages, each page having 2 rows, 10 users can simultaneously lock a row, 5 users can simultaneously lock a page and only one user can lock the table. The use of locks can introduce a side effect known as a deadlock. A deadlock is a situation where two, or more, transactions are waiting on each other s locks to be released. Transaction A waits for B and B waits for A. In a deadlock situation transactions wait indefinitely. This is an unacceptable situation. The database system must either prevent deadlocks from happening or detect deadlocks when they happened and resolve that situation. 
The scenario below gives a sample of a deadlock situation with two transactions. The sample uses record locking. In this case, the lock must be part of the find operation: one can only lock a record after it has been found. To indicate this special type of find we use XFind.

    TRANSACTION A     TIME     TRANSACTION B
    XFind R1          T1       -
    -                 T2       XFind R2
    XFind R2          T3       -
    Wait              T4       XFind R1
    Wait                       Wait

If the above situation occurs, the two transactions wait for each other indefinitely. This is an unacceptable situation, so the DBMS either needs to prevent deadlocks from happening or detect them when they happen and resolve the situation. There are deadlock avoidance strategies; the native DataFlex database uses one of these. Most database systems, however, use a deadlock detection and resolution strategy. A deadlock can be resolved by choosing a victim transaction from the list of deadlocked transactions. The victim is then stopped by issuing a rollback. This frees the locks claimed by the victim transaction, allowing the other transactions to obtain those locks and continue.

It is important to understand that a deadlock is not a programming error. It is a condition that can occur in certain environments, and it is the application programmer's responsibility to handle deadlocks in a proper way. Application programmers typically handle deadlock errors in one of two ways: report an error to the user and let the user re-enter the information, or automatically retry the transaction a designated number of times. How big the chance of a deadlock is depends on the application and the database backend; usually this chance is very small. Nevertheless, application programmers should handle deadlock no matter how small the chance of it ever occurring.
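The automatic-retry approach can be sketched in DataFlex as follows. This is an illustrative sketch only, not product code: the retry limit, the error number and the SaveOrderRows procedure are assumptions, and the exact handling of the Err indicator should be verified against your runtime revision.

```dataflex
// Illustrative sketch: retry a failed transaction up to 3 times.
// SaveOrderRows is a hypothetical procedure performing the saves.
Integer iAttempt
Integer iDone
Move 0 To iAttempt
Move 0 To iDone
While ((iDone = 0) And (iAttempt < 3))
    Increment iAttempt
    Move False To Err
    Begin_Transaction
        Send SaveOrderRows
    End_Transaction
    // If the transaction was rolled back, Err is set; otherwise stop.
    If (Not(Err)) Move 1 To iDone
Loop
If (Err) Error 999 "Transaction failed after 3 attempts"
```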

Transactions and locking in the DataFlex language

A DataFlex program alters data in tables using one of two methods. Either it uses data-dictionaries (DDOs) or their predecessor, data-sets (DSOs), or it is coded manually using commands; the latter method is usually referred to as procedural or 2.3 style.

All table changes in DDOs are handled through two messages, Request_save and Request_delete. Using DDOs greatly simplifies transaction handling: when you use DDOs, transaction support is built in (in fact it is required). There are no migration or conversion issues with DDOs or DSOs. An application written using data-sets prior to DataFlex 3.1 will work without any required change.

Manual methods use the traditional commands Save, Saverecord, Delete, Lock, Reread and Unlock. Three additional commands were added in DataFlex 3.1 and VDF4: Begin_Transaction, End_Transaction and Abort_Transaction. Depending on the application's coding logic, changes may be required to make an application work properly with transactions. Even if a legacy application works properly with transactions, you may want to make changes in your code to take full advantage of the transaction handling.

If you have an application that uses DDOs or DSOs, refer to the section titled Transactions and Data-Dictionaries. If you have applications that do not use DDOs or DSOs, review the following section.

Transactions and locking using the command language

The DataFlex programming language supports a number of commands to define transactions and use locking.
These commands are:

    Begin_Transaction - define the beginning of a transaction
    End_Transaction   - commit a transaction
    Abort_Transaction - rollback a transaction
    Lock              - lock the database
    Reread            - lock the database and refresh all active buffers
    Unlock            - unlock the database

The Lock, Reread and Unlock commands have been part of the DataFlex language for a long time. Up until DataFlex revision 3.1 there was no proper support for transactions. In DataFlex 3.1, support for transactions was added in the form of the three transaction commands. VDF and WebApp have always supported transactions.

Begin_Transaction
Marks the beginning of an explicit transaction. Every database action between this command and its accompanying End_Transaction command is the transaction. Defining a transaction within a transaction is not considered an error: the inner transaction is embraced by the outer one, making both transactions act as if they were one. Begin_Transaction and End_Transaction must be in the same scope; it is not allowed to place the End_Transaction command in a different scope. Begin_Transaction locks the database; see Lock for a description of locking the database.

End_Transaction
Marks the end of an explicit transaction. Every database action between this command and its accompanying Begin_Transaction command is the transaction. End_Transaction unlocks the database; see Unlock for a description of unlocking the database.

Abort_Transaction
Rolls back the transaction. The Abort_Transaction command does not jump to the End_Transaction command automatically; the flow of the program continues on the next line. It is the programmer's responsibility to ensure control jumps out of the transaction after issuing this command. If the programmer does not take care of the control flow, errors like DFERR_EDIT_WITHOUT_REREAD (4155) can occur. Abort_Transaction unlocks the database; see Unlock for a description of unlocking the database. Usually there is little need to use the Abort_Transaction command: transactions are automatically aborted if errors occur (see below), and the flow jumps to the End_Transaction command in that case.

Lock
Locks the database. Every table open in the application that is not read-only or an alias is locked. The database API calls the driver of each individual table to lock that table. Tables are locked in the order of the number that they are opened in; normally this is the same order as defined in the filelist. If the lock attempt generates error DFERR_LOCK_TIMEOUT (4106), the API automatically retries the lock. The number of retries can be set by using the Set_Transaction_Retry command. A database driver may or may not generate the DFERR_LOCK_TIMEOUT error. Some database systems use row locking. Row locking differs from table locking not only in granularity but also in timing: you can lock a table at any time, but you can only lock a row after it has been found. Drivers that connect to row-locking databases usually do not implement the lock function but instead lock as finding occurs in a locked state. Lock and Unlock are legacy commands. If you want your application to take full advantage of transaction processing, we recommend that you replace these commands with Begin_Transaction and End_Transaction.

Reread
A Reread is actually two actions: Reread locks the database and then performs a refind. If the Reread command gets no arguments, every active record is re-found. If the Reread command has arguments, only the active records of the passed buffers are re-found. There is no difference in how the lock is done between the Reread and Lock commands; see Lock for a description of locking the database.

Unlock
Unlocks the database. Every table, page or row that has been locked is unlocked. The database API calls the driver of each individual table to unlock that table. Tables are unlocked in the order of the number that they are opened in; normally this is the same order as defined in the filelist. Lock and Unlock are legacy commands. If you want your application to take full advantage of transaction processing, we recommend that you replace these commands with Begin_Transaction and End_Transaction.

Transactions support the ACID properties; this makes programming a system that supports transactions simpler. For example, suppose our application has a restriction that we can only order items that are in stock. In a non-transaction system this condition would have to be checked before trying to save the order. In a transaction system, however, one can start the transaction and then start saving the rows that make up the order. If somewhere along the way the condition is not met, issue an error, which returns the database to the state it had before the transaction started. If the condition is met for every row, issue an End_Transaction, thus committing the transaction.

Using the transaction commands to define transactions is referred to as doing explicit transactions. Since the transaction commands were not supported in DataFlex prior to revision 3.1, programmers would have been forced to adjust all their programs in order to use the new revision's transaction logic. To avoid this and offer a smooth transition into version 3.1, DataFlex supports so-called implicit transactions. An implicit transaction is started by the Lock or Reread command and ends with the Unlock command.

Errors in a transaction

Because transactions must support Atomicity, errors that occur during a transaction force a rollback of the transaction. When the error occurs during an explicit transaction, control jumps to the line following the End_Transaction command. This behavior results in minimal use of the Abort_Transaction command: transactions are usually rolled back because some error occurred, and since the occurrence of an error results in a rollback, there is no need to explicitly use Abort_Transaction in such a case. The only exception to this rule is the DFERR_LOCK_TIMEOUT (4106) error. If this error occurs, the program checks the retry setting and, if a retry must be done, jumps to the beginning of the (implicit or explicit) transaction. The number of retries can be set by using the Set_Transaction_Retry command.
If the object-oriented programming style is used, the message Verify_Retry is sent to the object where the transaction is defined. This function can return zero, in which case the transaction is retried; a non-zero return value stops the retry. If the logic loops through all its retries and still gets a timeout error, the timeout error is treated as any other error would be. Implicit transactions are not aborted when an error occurs. Instead the flow of the program continues on the next line. Since implicit transactions were designed for compatibility reasons, the behavior is the same as it was before transactions were introduced to DataFlex. Some database drivers may not be able to support implicit transactions.
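A minimal Verify_Retry handler might look like the sketch below. The message name and the meaning of the return value come from the text above; the decision logic itself is an assumption and would normally involve your own UI or retry-count check.

```dataflex
// Illustrative sketch: always allow the runtime to retry a
// timed-out transaction (the retry count is governed by the
// Set_Transaction_Retry setting).
Function Verify_Retry Returns Integer
    // Return 0 to let the runtime retry the transaction;
    // return a non-zero value to stop retrying.
    Function_Return 0
End_Function // Verify_Retry
```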

There is a difference in how some DataFlex revisions handle errors in implicit transactions. DataFlex 3.1d, DataFlex 3.2, Visual DataFlex 6.0, 6.2 and 7.0 abort the implicit transaction if an error occurs. The flow of the program, however, does not jump out of the implicit transaction but continues on the next line. This behavior change was made to accommodate non-DataFlex databases. It does affect the way existing applications using DataFlex data work when errors occur in a locked state, and this behavior is usually not desired. If, for example, an implicit transaction tries to save 3 records and an error occurs while trying to save the second record, the transaction is rolled back but control does not jump to the end of the implicit transaction: the program still tries to save the third record. The rollback has removed the lock, so the DFERR_EDIT_WITHOUT_REREAD (4155) error occurs.

In DataFlex 3.2 and Visual DataFlex 7.0 this can be avoided by setting the DF_TRANABORT_ONERROR global attribute; in the other revisions mentioned above there is no way to change this behavior. Setting this attribute to False causes the implicit transaction not to be rolled back when an error occurs. To adjust the setting the programmer should add the line:

    Set_attribute DF_TRANABORT_ONERROR To (False)

In upcoming revisions of DataFlex the DF_TRANABORT_ONERROR attribute will be supported, and its default value will change from True to False. This ensures that new revisions are compatible with pre-transaction code. In Visual DataFlex 7.0 the attribute DF_TRANABORT_ONERROR is not defined in fmac; if you want to set it in a VDF7 application you have to define it yourself. The internal attribute number for the attribute is 22. To define it, add #REPLACE DF_TRANABORT_ONERROR CI22 to your program code. In DF3.2 the attribute has been defined in fmac; you can use it without having to define it first.
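Putting the two lines from the text above together, a VDF7 program that wants pre-3.1 compatible behavior would contain:

```dataflex
// VDF7 only: DF_TRANABORT_ONERROR is not defined in fmac, so
// define it first (internal attribute number 22), then set it.
#REPLACE DF_TRANABORT_ONERROR CI22
Set_attribute DF_TRANABORT_ONERROR To (False)
```

In DF3.2 only the Set_attribute line is needed, since the attribute is already defined in fmac.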
Please note that explicit transactions are always aborted when an error occurs, no matter how the DF_TRANABORT_ONERROR attribute is set. Data-sets and Data-Dictionaries automatically use explicit transactions, so setting the DF_TRANABORT_ONERROR attribute has no effect on Data-Set / Data-Dictionary based applications.

Due to the differences between runtimes, the behavior of existing applications may change when converted to a newer version. Let's summarize how different runtime versions behave compared to pre-DataFlex 3.1 runtimes.

Important: These differences only apply to applications that are not using data-dictionaries or data-sets. If you are using DDOs or DSOs, all runtimes exhibit the same, correct behavior.

Runtime: DF3.1, DF3.1b, DF3.1c
Behavior: Exactly the same as pre-3.1 runtimes. If an error occurs during an implicit transaction, the transaction is not aborted. The program continues in a locked state.

Runtime: DF3.1d
Behavior: Different from pre-3.1 runtimes. If an error occurs during an implicit transaction, the transaction is aborted. The program continues in an unlocked state.

Runtime: DF3.2
Behavior: Different from pre-3.1 runtimes. If an error occurs during an implicit transaction, the transaction is aborted. The program continues in an unlocked state. In DF3.2 the programmer can set the runtime to work compatibly with pre-3.1 runtimes in the following way:
    Set_attribute DF_TRANABORT_ONERROR To (False)

Runtime: VDF4, VDF5
Behavior: Exactly the same as pre-3.1 runtimes. If an error occurs during an implicit transaction, the transaction is not aborted. The program continues in a locked state.

Runtime: VDF6
Behavior: Different from pre-3.1 runtimes. If an error occurs during an implicit transaction, the transaction is aborted. The program continues in an unlocked state.

Runtime: VDF7 service pack 3
Behavior: Exactly the same as pre-3.1 runtimes. If an error occurs during an implicit transaction, the transaction is not aborted. The program continues in a locked state. In VDF7 the programmer can set the runtime to work incompatibly with pre-3.1 runtimes in the following way:
    #REPLACE DF_TRANABORT_ONERROR CI22
    Set_attribute DF_TRANABORT_ONERROR To (True)

Lock granularity

DataFlex only recently started supporting database systems other than the native DataFlex database. The native DataFlex database uses a DataFlex-specific way of locking, which is not supported by all other database systems. Issuing a lock can therefore result in different behavior depending on the database system in use. The native DataFlex database supports table-level locking only: one and only one transaction can have a table locked at any given time. Most other database systems support row-level locking only; in those systems multiple concurrent transactions can have rows in one table locked at a given time.

A further difference between table and row locking is the moment the actual lock is applied. With table locking, the table is locked the moment the lock is issued by the program. With row or page locking, however, the row or page can only be locked after it, or a row on it, has been found. So row- or page-locking systems lock while rows are being found in a transaction. This is a conceptual difference. Some DataFlex programs misuse the lock behavior of DataFlex to guarantee exclusive access to certain resources. Let's assume we want to write to an ASCII disk file from a DataFlex program and (mis)use the DataFlex lock mechanism to ensure only one DataFlex program writes to the file at any given time. When the underlying database changes to a row-locking database, this logic no longer works.

Transactions and Data-Dictionaries

This discussion applies to Data-Dictionaries and their predecessor, Data-sets. The term Data-Dictionary or DDO will be used to refer to both technologies. DDOs alter tables using two methods, Request_save and Request_delete.
Both methods use transactions (in fact they require that your tables support transactions) and fully support all of the ACID property requirements. These methods both operate as follows:

1. The method starts (Request_save or Request_delete).
2. A Begin_Transaction is executed (unless a transaction is already started).
3. Table data is reread and locked as needed. If table locking is used, all tables that participate in the save or delete are locked. If row (record) locking is used, rows are locked as they are reread.
4. Data is validated, processed and table columns are updated.
5. If any error occurs, at any time, the error is handled as follows:
   - Execution of the current method is stopped.
   - The transaction is rolled back and aborted.
   - All tables and rows are unlocked and the transaction is ended.
   - The error is reported (after the unlock).
   - The Err indicator is set to True.
   - Control is returned to the method that called the request method or, if a transaction was already started, to the line following the End_Transaction.
   Note that this applies to all errors. If the runtime encounters any unexpected error, this process is triggered. If the developer generates an explicit error with the Error command (typically done within the events Validate_save or Validate_delete), the process is triggered.
6. If no errors occur:
   - The save or delete is completed.
   - The transaction is committed (unless the request was already within a transaction).
   - All tables and rows are unlocked and the transaction is successfully ended (unless the request was already within a transaction).
   - Control is returned to the method that called the request method.

Errors in a DDO transaction

The error rollback is a very simple process: if an error occurs, execution stops, the transaction is rolled back, and the request method is completed. While this is simple, it is powerful and has the following implications:

1. A Request_save or a Request_delete may change many rows in many tables. For example, a delete cascades and deletes rows in descendant tables, causing hundreds of rows to be deleted and hundreds of parent rows to be altered. If an error occurs, all of these changes are rolled back.
2. If you wish to stop a transaction inside of a DDO, use the Error command. It is not expected that you will ever use the Abort_Transaction command within a DDO.
3. The event methods Validate_save and Validate_delete were created to provide a place for a developer to check for any errors and to generate errors as needed. There is nothing stopping you from declaring an error in any DDO event (e.g. Update, Backout, Save_main_file). Any error generated at any time within the transaction will properly abort the transaction.
4. Once an error is encountered within a method, none of the other commands within that method (or the methods that called this method) are executed. The rollback occurs, the error is generated and control is returned to the method that made the request. For example, in the sample below, the code in the function that occurs after the Error command can never be executed.

    Function Validate_save Returns Integer
        Error 300 "Sorry, no save"
        // If an error occurred, this code never is executed
        Showln "You will never see me"
        // We don't need to return a value, the error triggers the stop
    End_Function // Validate_save

5. Error reporting is always deferred until the transaction is rolled back and all locks are removed. You can, therefore, execute an error within a DDO transaction without worrying that the error will be reported while tables are locked.

Grouping multiple DDO saves or deletes

If you wish to group several DDO Request_save or Request_delete operations within a single transaction, you can group them using Begin_Transaction and End_Transaction as follows:

    Begin_Transaction
        Send Request_save Of hMyDDO1
        Send Request_save Of hMyDDO2
    End_Transaction

If an error occurs in either DDO transaction, both transactions are rolled back and execution resumes at the line following the End_Transaction. Note that you should never use a Lock, Reread or Unlock command to group DDO transactions.

Transactions and the DataFlex Connectivity Kit for ODBC

The DataFlex Connectivity Kit for ODBC allows DataFlex programs to access data through ODBC. Open Database Connectivity (ODBC) is a widely accepted application programming interface (API) for database access. ODBC is designed to enable an application to access different database management systems (DBMSs) with the same source code. An application calls ODBC functions, which are implemented in database-specific modules called drivers. ODBC uses SQL to access data. ODBC is an API specification; the API is independent of any DBMS or operating system.

It is important to understand that ODBC is designed to expose database capabilities, not supplement them. Thus, application writers should not expect that using ODBC will suddenly transform a simple database into a fully featured relational database engine. Nor are driver writers expected to implement functionality not found in the underlying database.

A variety of database management systems can be accessed through ODBC. These include enterprise database systems such as Oracle, Sybase, DB2 and SQL Server; flat-file systems such as dBase and Paradox; and even non-database systems such as Excel, XML and ASCII. ODBC supports transactions if the connected database supports transactions; the same can be said about locking. How a DataFlex application behaves with transactions and locking when using the ODBC Connectivity Kit depends on the database used and the ODBC driver used to connect to that database.

Transactions and locking

There are ODBC drivers for non-database environments. These generally do not support transactions and locking at all. The discussion below is limited to databases only.
If the database supports the Read Uncommitted Isolation Level, the DataFlex Connectivity Kit for ODBC uses that Isolation Level. If Read Uncommitted is not supported, the Connectivity Kit does not set up an Isolation Level and uses the default level for the data source.

If transactions are supported, they are used automatically by the ODBC Connectivity Kit. To determine whether a database supports transactions, see its documentation. In general, enterprise databases support transactions with rollback and roll-forward features. Flat-file databases provide the full range from no support to full rollback/roll-forward support. If there is no support for transactions, the ODBC Connectivity Kit cannot use them.

Most databases support row locking. The ODBC Connectivity Kit assumes the environment to be a row-locking environment. If the granularity happens to be greater (page or table), this also works: assuming the smallest lock granularity ensures that locking (if supported) works in all cases.

In general, databases use two approaches to locking: optimistic and pessimistic locking. When optimistic locking is used, records are not really locked; all transactions have access to the record at all times. When a transaction requests a lock, a copy of the contents of the record is made. When, at a later time in the transaction, the record is updated, the current record in the table is compared to the copy made at the time of the lock. If the two versions do not match, an error is declared and the update is not executed. The idea is that the chance of two transactions trying to modify the same row simultaneously is very small. Applications connecting to a database that uses optimistic locking should handle the optimistic lock error condition. When pessimistic locking is used, records are actually locked when the transaction requests a lock. Other transactions that try to lock the record have to wait for the lock to be released.

The DataFlex Connectivity Kit for ODBC does not support optimistic locking.
Most databases that support optimistic locking offer a way to switch to pessimistic locking. This should be used when accessing the database through the ODBC Connectivity Kit.

The way the ODBC Connectivity Kit handles locks depends on the database that is used. At login time a number of attributes of the database are queried that determine the locking method. The method used eventually selects a so-called cursor to be used when accessing the database. The cursor choice logic is essentially the following:

    If the DBMS supports a dynamic cursor, use it
    Else if the DBMS supports positioned updates in a forward-only cursor, use it
    Else if the DBMS supports positioned updates in a keyset-driven cursor, use it
    Else if the DBMS supports SQLSetPos updates in a forward-only cursor, use it
    Else if the DBMS supports SQLSetPos updates in a keyset-driven cursor, use it
    Else use a forward-only cursor

SQL is a set-oriented language. A find (select) operation returns a set of records instead of just one, as in record-oriented environments. To traverse these sets, a mechanism known as database cursors is used. A cursor allows a non-set-oriented environment to obtain information from a set one row at a time.

Dynamic cursors

Dynamic cursors are the most advanced type of cursor. Unfortunately, very few databases support dynamic cursors. If a database supports dynamic cursors, the Connectivity Kit will use that type of cursor. Dynamic cursors support record locking in all find modes. The moment at which a lock is applied differs from the moment the native DataFlex database acquires a lock: in native DataFlex a lock is acquired when the Lock command is issued; in ODBC, using dynamic cursors, locks are acquired as data is read in a locked state.

Positioned updates

A positioned update is an update at the current position of an active cursor. To be able to use this form of updating, a special form of the select statement must be used: the select statement must contain a so-called update clause. The update clause instructs the database to lock the record in question. Most enterprise databases support positioned updates through one or more cursor types.

The update clause is only generated when a Find Eq by Recnum operation is done. The Connectivity Kit therefore only places update locks on rows that are found through a Find Eq by Recnum operation after the database has been locked. A Reread generates a Find Eq by Recnum after a lock, so rows found by a Reread are also locked. All rows that are found in some other way are locked when they are updated or deleted. The moment at which a lock is applied differs from the moment the native DataFlex database acquires a lock.
In native DataFlex a lock is acquired when the Lock command is issued. In ODBC, locks are acquired as data is read in a locked state or as it gets updated or deleted. When accessing ODBC, to make sure rows are locked, always use either Reread or a Find Eq by Recnum in a locked state. Data-Sets and Data-Dictionaries use Reread when updating data; programs using the DDO/DSO objects will therefore correctly lock all data that is updated. Procedural programs may require code changes for all locks to be applied.

A further difference between DataFlex and ODBC is the amount of data that is updated in an update operation. DataFlex reads and writes entire records. ODBC (actually SQL), on the other hand, is capable of updating one or more columns in a row. If, for example, one wants to update the name of a customer, DataFlex reads the entire record, modifies the name and writes the entire record back to the data file; ODBC will only write the modified customer name to the database. The ability to update specific columns reduces the amount of procedural code that needs adjusting. If the program overwrites columns with values not based on previous values of columns in the row, there is no need to make sure the row is locked. If the program uses previous values of columns in the row to calculate new values of columns in the row, you must make sure the row is locked. In that case a Reread or Find Eq by Recnum must be added to the code if it is not already present. Below are an example of a procedure that does not need to be adjusted and one that must be adjusted.

    Procedure NoNeedToAdjust
        Clear SomeTable
        Begin_Transaction
            Repeat
                Find Gt SomeTable By SomeIndex
                If (Found) Begin
                    Move SomeValue To SomeTable.SomeColumn
                    Saverecord SomeTable
                End
            Until (Not(Found))
        End_Transaction
    End_Procedure // NoNeedToAdjust

    Procedure MustBeAdjusted
        Clear SomeTable
        Begin_Transaction
            Repeat
                Find Gt SomeTable By SomeIndex
                If (Found) Begin
                    Move (SomeTable.SomeColumn * 1.16) To ;
                        SomeTable.SomeColumn
                    Saverecord SomeTable
                End
            Until (Not(Found))
        End_Transaction
    End_Procedure // MustBeAdjusted

The procedure that must be adjusted can be adjusted in the following way:

    Procedure MustBeAdjusted
        Clear SomeTable
        Begin_Transaction
            Repeat
                Find Gt SomeTable By SomeIndex
                If (Found) Begin
                    Reread
                    Move (SomeTable.SomeColumn * 1.16) To ;
                        SomeTable.SomeColumn
                    Saverecord SomeTable
                    Unlock
                End
            Until (Not(Found))
        End_Transaction
    End_Procedure // MustBeAdjusted

Data Access has been recommending the use of Reread when updating records for quite some time, so the number of programs that need adjusting in this area is expected to be limited. If you want both procedures to apply update locks on the records involved, you should adjust both procedures.

SQLSetPos updates

ODBC provides an alternative way to update records from SQL through an API known as SQLSetPos. This API can be used to update and delete rows in a table. Even if SQLSetPos is supported, there are a variety of details in the support for the API: SQLSetPos can be used to lock a row exclusively, update a row, delete a row or refresh a row, and not all of these possibilities have to be supported by a database. If the database supports updating rows through SQLSetPos, the Connectivity Kit will use SQLSetPos.

When SQLSetPos is supported with support for exclusive locks, it is called directly after each find in a locked state. This way, records that are found in a locked state are locked as they are found. If the exclusive lock is not supported, the Connectivity Kit performs a so-called dummy update to ensure that the row is locked: directly after a row has been found, the row is updated, with the dummy update column set to the value that has just been found. By default the dummy update column is the record identity column, the column least likely to change. If, however, a different column should be used for the dummy update logic, this can be set up through the intermediate file keyword DUMMY_UPDATE_COLUMN. Another reason to use an alternative dummy update column is when the record identity column cannot be updated: some databases support system-assigned columns, like auto-increment columns, and in general it is considered an error to try to update such a column. After the dummy update has been done, a re-find is done to ensure the latest version is in the record buffer.

If SQLSetPos is supported, all records that are read in a locked state are locked as they are read. Depending on the support, this may be as simple as placing a lock or as complicated as doing a dummy update and re-finding the record.

No update support

If the database reports no support for dynamic cursors, positioned updates or SQLSetPos, the Connectivity Kit uses a forward-only cursor. It is not defined how locking works in this case. In general, these types of environments lock rows as they are updated, deleted or inserted. Locks are applied at the moment of the DataFlex Save, Saverecord or Delete command; no locks are applied when finding in a locked state.

Summary on locking

There are a lot of options and unknown factors for ODBC in general. A database might support a particular feature or it may not. It would be quite confusing to keep track of all the different possibilities and implementations. Luckily, most programmers only use one or two databases to connect to through ODBC. Data Access has tested a number of environments to see how they support locking and transactions. In general we can distinguish the following types of support:

    DC   - Dynamic cursors
    PU   - Positioned updates
    SPL  - SQLSetPos + Exclusive lock
    SPDU - SQLSetPos + Dummy update

The following table lists the tested environments with their locking type support.

Environment      Lock Type  Comments
DB2 UDB 7.1      PU         Make sure to set the maximum number of statements per connection to a value higher than 1. The suggested value for DB2 is 90. The maximum number of statements per connection can be set in the DB2_DRV.INT configuration file with the keyword Max_Active_Statements. If the Max_Active_Statements setting is not higher than 1, locking will not work correctly when accessing DB2 through the ODBC Connectivity Kit.
MS SQL 7.0       DC
Sybase ASA 7.0   PU
Oracle 8.0       PU
Oracle 8.i       PU
MS Access 2000   SPDU       Make sure Access is set up to use pessimistic locking. The way to set this up can be found at http://support.microsoft.com/support/kb/articles/q225/9/26.asp.

Deadlock

In most databases that can be accessed through ODBC, deadlock can occur. Whether the database detects deadlock depends on the database in use. See the database documentation to find out if deadlocks can occur and, if so, whether detection is supported. If deadlock is detected, the transaction is aborted and the deadlock detection error is sent to the victim transaction. If the database supports a lock timeout, the same happens when a lock timeout occurs.

There is no way the DataFlex Connectivity Kit for ODBC can know upfront what the database's error numbers for these two events are. Errors generated by databases are reported under number 12289. There is no way to know whether an error was a deadlock, a timeout or some other error generated by the database in use. Since the ODBC Connectivity Kit does not know whether the database generated a timeout error, it will also not pass the DFERR_LOCK_TIMEOUT (4106) error to DataFlex. Even if it were able to determine that a lock timeout error occurred, the DFERR_LOCK_TIMEOUT error would not be generated. Since most databases use record locking, a lock timeout error can occur at any time during a transaction. If the DFERR_LOCK_TIMEOUT error were passed, the runtime would retry the transaction, jumping to the Begin_Transaction command. If the transaction was well underway at the moment the lock timeout occurred, variables and properties used in the transaction will have changed. An automatic retry would not give the desired result, because a transaction rollback does not restore variables and properties to their original values. This situation cannot occur when using DataFlex data: there, Begin_Transaction does all locking that will ever be done in a transaction, so it is safe to automatically retry transactions when using DataFlex data.

The only way to detect that deadlock occurred is by parsing the error text of the latest error. The ODBC Connectivity Kit formats its errors in a special way: <SQLState> (<Native error number>)-<error text>. If the SQLState that is set when deadlock or timeout occurs is known, a DataFlex program can test for this state. SQLState is a five-character string. Not all databases pass native error numbers through ODBC; if the database does, the DataFlex programmer can also check the native error number.
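The error format described above lends itself to a simple parser. The sketch below is written in Python rather than DataFlex, purely to illustrate the string handling a program would perform; the function names, the regular expression and the choice of SQLSTATE 40001 (Serialization failure, the state listed for deadlocks in the SQL State table of the Error Handling chapter) are our own assumptions, not part of the Connectivity Kit.

```python
import re

# The Connectivity Kit formats database level errors as:
#   <SQLState> (<Native error number>)-<error text>
# This pattern splits such a text into its three parts.
ERROR_FORMAT = re.compile(
    r"^(?P<state>[A-Z0-9]{5}) \((?P<native>-?\d+)\)-(?P<text>.*)$")

def parse_kit_error(error_text):
    """Return (sqlstate, native_error, text), or None if the text
    does not match the documented format."""
    match = ERROR_FORMAT.match(error_text)
    if match is None:
        return None
    return match.group("state"), int(match.group("native")), match.group("text")

def is_deadlock(error_text):
    """Test for SQLSTATE 40001 (Serialization failure), the state most
    databases raise for a detected deadlock. Checking the SQLState is
    the approach the manual suggests for a DataFlex program."""
    parsed = parse_kit_error(error_text)
    return parsed is not None and parsed[0] == "40001"
```

A program could call such a check in its error handler and decide for itself whether a retry is safe, which the runtime cannot do automatically for the reasons explained above.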

Chapter 13 - Database Builder

The DataFlex Connectivity Kit for ODBC fully supports the DataFlex restructure capabilities. You can use any DataFlex program based on these capabilities to maintain your table's data definition. This includes Visual DataFlex Database Builder. Of course you can also use your database's utilities to maintain the tables; in most cases this is the preferred way. Most changes made to table definitions outside of DataFlex will be seen by the Connectivity Kit automatically. Others, such as index definition changes and trigger additions, require a manual edit of the intermediate file for the table in question. If structure caching is used (the default), be sure to delete the cache files (*.cch) after changing the table definition outside of Database Builder. For more information on structure caching see Chapter 6 - Structure caching.

Database Menu

Database related options can be found in the Database menu of Database Builder. Relevant options are discussed in this section.

Load database driver

To be able to work with the DataFlex Connectivity Kit for ODBC from within Database Builder, the Connectivity Kit must be loaded. This can be done by choosing Load database driver from the Database menu in Database Builder. In the open file panel choose odbc_drv.dll. Alternatively you can load the Connectivity Kit when Visual DataFlex starts by adding the line 4096=ODBC_DRV to the dfini.cfg Visual DataFlex configuration file. If this is the case, the Connectivity Kit will be loaded automatically whenever Database Builder starts. For more information on dfini.cfg see the Visual DataFlex documentation. A third way to load the Connectivity Kit is by opening a table that is accessed through it.
Once the DataFlex Connectivity Kit for ODBC has been loaded, the Database menu of Database Builder will have four additional choices: Convert to ODBC, Convert to ODBC from script, Connect To ODBC Table and ODBC Administrator. Once a non-DataFlex database driver is loaded, the Database menu will have two additional choices: Remove .INT extension and Add .INT extension. These choices enable you to switch between DataFlex and non-DataFlex data quickly, provided both forms of the data are present. Once a CLI based database driver is loaded (DB2, ODBC or SQL Server), the Database menu will have one additional choice: OEM Ansi Wizard. This choice starts the OEM Ansi conversion wizard.

Login

The login panel allows the user to login to a Data Source. In the Driver combo box choose the DataFlex Connectivity Kit for ODBC by selecting ODBC_DRV. If the choice is not available you can load the Connectivity Kit by using the Load driver button. In the Server form enter the connection string for the Data Source you want to login to. For a detailed discussion of connection strings see Chapter 7 - Intermediate File. In the User Name form enter the user name and in the Password form the password for the database.

Logout

The logout panel allows the user to logout from a Data Source. In the Driver combo box choose the DataFlex Connectivity Kit for ODBC by selecting ODBC_DRV. In the Server form enter the connection string for the Data Source you want to logout from. For a detailed discussion of connection strings see Chapter 7 - Intermediate File.

Convert to ODBC, Convert to ODBC from script

The two conversion options are discussed in detail in Chapter 4 - Converting Data to ODBC.

Connect To ODBC Table

The connection option is discussed in detail in Chapter 5 - Connecting to Existing ODBC Data.

ODBC Administrator

The ODBC Administrator menu choice starts the ODBC Administrator. The Administrator allows you to create, modify and delete ODBC Data Sources. See Appendix B - ODBC Data Sources for more details on the ODBC Administrator.

Remove .INT extension

The Remove .INT extension menu option is intended to change filelist entries so they point to DataFlex data instead of an intermediate file. When chosen, you will be presented with a filelist selection panel. Select the filelist entries you want to remove the .INT extension from. The option only edits the filelist; the physical intermediate files will not be removed.

Add .INT extension

The Add .INT extension menu option is intended to change filelist entries so they point to an intermediate file instead of DataFlex data. When chosen, you will be presented with a filelist selection panel. Select the filelist entries you want to add the .INT extension to. The option only edits the filelist; the physical DataFlex files will not be removed, nor will the intermediate files be created.

OEM Ansi Wizard

The OEM Ansi Wizard menu option starts the OEM Ansi conversion wizard. The wizard allows you to convert tables from the OEM character format to the Ansi character format (or vice versa). For more information on character formats see Chapter 8 - Character formats (OEM or Ansi).

File menu

Table manipulation options can be found in the File menu of Database Builder. Relevant options are discussed in this section.

New

To create a new table in a Data Source choose New from the File menu. To illustrate the way to create a new table we will follow the steps needed to create a new table called NewSample for Microsoft Access 2000.

In the Type combo box choose the DataFlex Connectivity Kit for ODBC by selecting ODBC_DRV. Choose the desired Filelist number for the new table in the File Number spin form. Enter the desired rootname for the new table in the Rootname form. You can use the .int extension or the driver prefix (odbc_drv:). The name of the table that will be created will be the same as the rootname entered here, without the .int extension or the driver prefix. If all information has been entered, press OK. This will start a file panel for the new table.
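The rootname rule above (the created table's name is the rootname without the .int extension or the odbc_drv: prefix) can be sketched as a small helper. The function is our own illustration of the rule, not part of Database Builder; treating the prefix and extension case-insensitively is an assumption.

```python
# Derive the table name that will be created from the rootname entered
# in the Rootname form, per the rule in the manual. The function name
# and case-insensitive matching are our own assumptions.
def table_name_from_rootname(rootname):
    name = rootname
    if name.lower().startswith("odbc_drv:"):
        name = name[len("odbc_drv:"):]
    if name.lower().endswith(".int"):
        name = name[:-len(".int")]
    return name
```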

To specify the Data Source in which the table will be created, fill the Login form with the connection string for the desired Data Source. For a detailed discussion of connection strings see Chapter 7 - Intermediate File.

On the Fields tab page the columns for the new table can be defined. We have created one column per DataFlex type plus one record identity column. For every column you can define the name, type and length, assign a main index, and set its nullability (if supported by the backend) and a default value (if supported by the backend). The Nullable and Default value columns will default to the settings in the configuration file; see Appendix C - Configuration file ODBC_DRV.INT for more details.

On the Index tab page the indexes for the new table can be defined. We specify an index for the record identity.

On the Parameters tab page table level settings can be specified. Most of the settings on this page are not relevant for ODBC tables. Only the Record Identity form and the System File and Use dummy zero date checkboxes are of interest; all other settings are either DataFlex database specific or read only. We set the record identity to 6, the number of the RID column we created on the Fields tab page.

If all the information for the new table has been entered, it can be saved. In our case we create a Microsoft Access 2000 table. If we re-open the created table and compare the definition to the one we originally specified, we see a few differences. On the Parameters tab page the File Revision form is now filled with ACCESS 04.00.0000.

The process described above has created the Access table and the associated intermediate file, NewSample.int:

    DRIVER_NAME ODBC_DRV
    SERVER_NAME DSN=AccessBig
    DATABASE_NAME NEWSAMPLE
    PRIMARY_INDEX 1
    TABLE_CHARACTER_FORMAT OEM
    USE_DUMMY_ZERO_DATE YES
    FIELD_NUMBER 2
    FIELD_LENGTH 10
    FIELD_PRECISION 0
    FIELD_NUMBER 3
    FIELD_LENGTH 10
    FIELD_NUMBER 6
    FIELD_LENGTH 10
    FIELD_PRECISION 0
    FIELD_INDEX 1
    INDEX_NUMBER 1
    INDEX_NAME NEWSAMPLE001
    INDEX_NUMBER_SEGMENTS 1
    INDEX_SEGMENT_FIELD 6

Open

Tables can be opened by choosing the Open option in the File menu. To open a table through the DataFlex Connectivity Kit for ODBC, select a Filelist entry that points to an ODBC table. Once the table is opened you can change its definition. See the Visual DataFlex documentation on Database Builder for more information.

Open as

Tables that are not in the Filelist can be opened by choosing the Open as option in the File menu. In the File Number form type the number you want to use to open this table. In the Rootname form type the table's rootname. If we want to open the table that was created in the New section above, we must enter NewSample.int.
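The intermediate file shown in the New section is plain keyword-value text, with field-level keywords applying to the most recent FIELD_NUMBER line. A minimal reader for such a file might look like the sketch below; the function name and the grouping strategy are our own illustration of the format, not how the Connectivity Kit actually reads it.

```python
# Read "KEYWORD value" lines from an intermediate (.int) file, grouping
# FIELD_* keywords under the current FIELD_NUMBER. Table-level and
# index-level keywords go into a flat dictionary for simplicity.
def read_int_file(lines):
    table = {}            # table and index level keywords
    fields = {}           # field number -> {keyword: value}
    current_field = None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        keyword, _, value = line.partition(" ")
        if keyword == "FIELD_NUMBER":
            current_field = int(value)
            fields[current_field] = {}
        elif keyword.startswith("FIELD_") and current_field is not None:
            fields[current_field][keyword] = value
        else:
            table[keyword] = value
    return table, fields
```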

Chapter 13 Load DEF The Load DEF option is intended to create a new table based on an existing definition that is stored in a DataFlex.DEF file. A.DEF file can be generated by Database Builder in the Output DEF/FD option. Database Builder In the File Number form type the number you want to use for the new table. In the New file s type combo form choose the DataFlex Connectivity Kit for ODBC by selecting ODBC_DRV. In the Rootname form enter the rootname of the new table. You must use the.int extension or the driver prefix (odbc_drv:). The name of the table that will be created will be the same as the rootname entered here, without the.int extension or the driver prefix. After the information has been entered, press OK. This will start a file panel for the new table. The table will be created when the file panel is saved. This enables you to make adjustments to the definition of the table before actually creating it. Maintenance menu Database maintenance can be started from the Maintenance menu. Most options in the menu are specific for DataFlex data and will not have any effect on data accessed through the DataFlex Connectivity Kit for ODBC. The options that are DataFlex specific are: Reindex, Cleanup, Attributes, Repair and Recompress. Copy records The Copy records option will copy records from one existing table to 120 User s Guide

Database Builder DataFlex Connectivity Kit for ODBC another. It will map the columns of the source table to the columns of the destination table by comparing the column names. Columns with identical names will be mapped. This option can be useful when during conversion it is desired to change the converted table s definition in the target database. A lot of database systems only allow table definition changes if no data is present in the table. In such cases it is possible to convert the definition of the table only, leave the original intact. Then use the target database utilities to change the table definition. After that use the Copy records option to convert the data. Version 2.2 121
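The matching step Copy records performs can be pictured as follows. This is our own sketch of "map columns with identical names", not the actual implementation; in particular, comparing names case-insensitively is an assumption, since the manual only says the names must be identical.

```python
# Map source columns to destination columns by name, as Copy records
# does. Columns without a same-named counterpart are simply not mapped.
# The case-insensitive comparison is our assumption.
def map_columns(source_columns, destination_columns):
    destination = {name.upper(): name for name in destination_columns}
    return {src: destination[src.upper()]
            for src in source_columns if src.upper() in destination}
```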

Chapter 14 - Error Handling

The DataFlex Connectivity Kit for ODBC handles several types of errors. We distinguish between driver level and database level errors. Driver level errors are detected at Connectivity Kit level; trying to set an attribute for a non-existing column, for example, will trigger a driver level error. Database level errors are raised by the ODBC Data Source and passed on by the Connectivity Kit to the API. Not having sufficient rights to perform a certain operation will trigger a database level error.

Driver Level Errors

The Connectivity Kit raises driver level errors when it detects an illegal operation. In general the Connectivity Kit is as verbose as possible in identifying the cause of the error. If a program tries to set a field attribute in table MyTable of a non-existing column with number 12, the Kit will report the following message: Field number out of range [MyTable, 12]. The text between square brackets is intended as extra information to help identify the cause of the error and is referred to as the verbosity term. Most of the time the verbosity term will identify a table name or number. Sometimes, however, the term may not be clear at first sight; those terms are listed in the following table, which uses the abbreviations TN (table name), CN (column name), AN (attribute number), IN (index number) and ISN (index segment number).

ERROR                              VERBOSITY TERM
Setting a non supported attribute  [<TN>,<AN>]
                                   [<TN>,<CN>,<AN>]
                                   [<TN>,index.<IN>,<AN>]
                                   [<TN>,index.<IN>,segment.<ISN>,<AN>]
Field number out of range          [<TN>,<IllegalFieldNumber>]
Index number out of range          [<TN>,index.<IN>]
Index segment out of range         [<TN>,index.<IN>,<ISN>]
String too long                    [TheString]

ERROR                              VERBOSITY TERM
Bad attribute value                [<TN>,<AN>,<BadValue>]
                                   [<TN>,<CN>,<AN>,<BadValue>]
                                   [<TN>,index.<IN>,<AN>,<BadValue>]
                                   [<TN>,index.<IN>,segment.<ISN>,<AN>,<BadValue>]
Index is not available             [<TN>,index.<IN>]
Invalid intermediate file value    [<TN>,<Keyword>,<IllegalValue>]
Invalid intermediate file keyword  [<IllegalKeyword>]
Number too large for field         [<TN>,<CN>]

ODBC Error Class

The DataFlex error system defines several classes of errors: user, system and utility errors. Every driver adds an error class to the DataFlex error classes. The DataFlex Connectivity Kit for ODBC adds the following errors:

12289  General error
       Database level error entry point; see the section on database level errors.
12290  Can't initialize
       The Connectivity Kit is unable to initialize. Check the client setup.
12291  Can't de-initialize
       The Connectivity Kit is unable to de-initialize and free up the environment. Check the client setup.
12292  Bad or no primary index specified
       The primary index is bad. This can have several causes: either the index does not exist or it contains more than one segment.
12293  Login unsuccessful
       Unable to login to the specified server. Either the client is not properly set up or the user does not have sufficient privileges to login to the server.
12294  Logout unsuccessful
       Unable to logout from the server. Check the client setup.
12295  Table not in connection
       An attempt is made to open a table that cannot be found on the specified server for the specified owner.
12296  Null value not allowed
       An attempt has been made to put a NULL value into a column that does not allow NULL values.
12297  Segment number out of range
       An attempt has been made to set or get an attribute of a non-existing index segment.
12298  Index number out of range
       An attempt has been made to set or get an attribute of a non-existing index.
12299  Login attribute must be set
       Structure changes have been made to the table. The Structure_End operation is missing login information for the table and is unable to save the changes. Make sure to set the DF_FILE_LOGIN attribute of the table.
12300  Physical name must be set
       Trying to create a new table. The Structure_End operation is missing the physical name of the table; this is the name the table will get in ODBC. Make sure to set the DF_FILE_PHYSICAL_NAME attribute.
12301  Invalid registration file
       The registration file is invalid.
12302  License expired
       A temporary license has expired.
12303  Deadlock or timeout
       The current transaction was interrupted and rolled back because of a deadlock or timeout. For more information about handling deadlocks and/or timeouts, see the Transactions chapter.
12304  Embedded SQL error
       General Embedded SQL error.
12305  Invalid SQL statement handle
       A statement using the handle cannot be found for the specified connection.
12306  Invalid SQL connection handle
       A connection using the handle cannot be found.
12307  Invalid SQL Connectivity Kit Identifier
       The Connectivity Kit identifier is illegal.
12308  Invalid SQL bind file
       The specified bind file is not open, it is not an ODBC table, or the pibindfile property has not been set.
12309  Invalid SQL column
       A column with the specified number does not exist in the result set.
12310  Invalid attribute
       The attribute identifier is illegal.
12311  Invalid buffer
       The file passed to FetchActivatesBuffer is not open.
12312  Invalid configuration keyword
       A keyword in the global configuration file odbc_drv.int is invalid.
12313  Unique index required for restructure
       When restructuring an existing table at least one unique index is required.

Database Level Errors

The ODBC error handling mechanism is quite different from the DataFlex mechanism. Ideally, we would be able to project the ODBC error system onto the DataFlex system. Unfortunately, this is not possible; the identification mechanisms in the two systems are incompatible. DataFlex uses error numbers where ODBC uses SQLSTATE, a 5-byte character string. DataFlex needs an error number when reporting an error to a program. We have chosen to use one number for every ODBC error that is reported to the Connectivity Kit: 12289. Database level errors will always be formatted as:

    12289 -- <SQL State> (<Native Error>)--<Component identifier> <Error text>

SQL State

SQL States are used in SQL environments to identify a certain exception condition that has occurred. SQL States are 5-character strings using only the uppercase letters A-Z and the digits 0-9. The string is divided into two components: the first two characters are the class code, the last three the subclass code. For more information on SQL States, see your database's documentation.

Native Error

The native error number reported by the backend.

Component identifier

The component identifier helps you identify the component that caused the error. The communication to the ODBC Data Source uses a number of components, each of which can raise an error. For errors and messages that occur outside of the data source, the component identifier format is:

    [Vendor identifier][Component identifier]

For errors that occur inside the data source, the component identifier format is:

    [Vendor identifier][Component identifier][Data source identifier]

Error text

Contains the text of the error or message.

Truncated error text

The error messages generated by ODBC Data Sources tend to be a bit larger than is common in a traditional DataFlex environment. The object oriented error handlers within a DataFlex environment can handle these long texts without problems. Procedural programs, however, tend to reserve one line of 80 positions for error reporting. If the error text is larger than 80 positions, it will be truncated. If the error text is truncated, use your database's documentation to find the SQL State. Alternatively you can set the Error_Debug_Mode configuration keyword; see Appendix C - Configuration file ODBC_DRV.INT for details on the configuration file.

SQL State table

SQLSTATE  ERROR
01000     General warning
01001     Cursor operation conflict
01002     Disconnect error
01003     NULL value eliminated in set function
01004     String data, right truncated
01006     Privilege not revoked
01007     Privilege not granted
01S00     Invalid connection string attribute
01S01     Error in row
01S02     Option value changed
01S06     Attempt to fetch before the result set returned the first rowset
01S07     Fractional truncation
01S08     Error saving File DSN
01S09     Invalid keyword
07002     COUNT field incorrect
07005     Prepared statement not a cursor-specification
07006     Restricted data type attribute violation
07009     Invalid descriptor index
07S01     Invalid use of default parameter
08001     Client unable to establish connection
08002     Connection name in use
08003     Connection does not exist
08004     Server rejected the connection
08007     Connection failure during transaction
08S01     Communication link failure
21S01     Insert value list does not match column list
21S02     Degree of derived table does not match column list
22001     String data, right truncated
22002     Indicator variable required but not supplied
22003     Numeric value out of range
22007     Invalid datetime format
22008     Datetime field overflow
22012     Division by zero
22015     Interval field overflow
22018     Invalid character value for cast specification
22019     Invalid escape character
22025     Invalid escape sequence
22026     String data, length mismatch
23000     Integrity constraint violation
24000     Invalid cursor state
25000     Invalid transaction state
25S01     Transaction state
25S02     Transaction is still active
25S03     Transaction is rolled back
28000     Invalid authorization specification
34000     Invalid cursor name
3C000     Duplicate cursor name
3D000     Invalid catalog name
3F000     Invalid schema name
40001     Serialization failure
40002     Integrity constraint violation
40003     Statement completion unknown
42000     Syntax error or access violation
42S01     Base table or view already exists
42S02     Base table or view not found
42S11     Index already exists
42S12     Index not found
42S21     Column already exists
42S22     Column not found
44000     WITH CHECK OPTION violation
HY000     General error
HY001     Memory allocation error
HY003     Invalid application buffer type
HY004     Invalid SQL data type
HY007     Associated statement is not prepared
HY008     Operation canceled
HY009     Invalid use of null pointer
HY010     Function sequence error
HY011     Attribute cannot be set now
HY012     Invalid transaction operation code
HY013     Memory management error
HY014     Limit on the number of handles exceeded
HY015     No cursor name available
HY016     Cannot modify an implementation row descriptor
HY017     Invalid use of an automatically allocated descriptor handle
HY018     Server declined cancel request
HY019     Non-character and non-binary data sent in pieces
HY020     Attempt to concatenate a null value
HY021     Inconsistent descriptor information
HY024     Invalid attribute value
HY090     Invalid string or buffer length
HY091     Invalid descriptor field identifier
HY092     Invalid attribute/option identifier
HY095     Function type out of range
HY096     Invalid information type
HY097     Column type out of range
HY098     Scope type out of range
HY099     Nullable type out of range
HY100     Uniqueness option type out of range
HY101     Accuracy option type out of range
HY103     Invalid retrieval code
HY104     Invalid precision or scale value
HY105     Invalid parameter type
HY106     Fetch type out of range
HY107     Row value out of range
HY109     Invalid cursor position
HY110     Invalid driver completion
HY111     Invalid bookmark value
HYC00     Optional feature not implemented
HYT00     Timeout expired
HYT01     Connection timeout expired
IM001     Driver does not support this function
IM002     Data source name not found and no default driver specified
IM003     Specified driver could not be loaded
IM004     Driver's SQLAllocHandle on SQL_HANDLE_ENV failed
IM005     Driver's SQLAllocHandle on SQL_HANDLE_DBC failed
IM006     Driver's SQLSetConnectAttr failed
IM007     No data source or driver specified; dialog prohibited
IM008     Dialog failed
IM009     Unable to load translation DLL
IM010     Data source name too long
IM011     Driver name too long
IM012     DRIVER keyword syntax error
IM013     Trace file error
IM014     Invalid name of File DSN
IM015     Corrupt file data source

Appendix A - ODBC Escape sequences

To set up default values, ODBC Escape Sequences can be used. ODBC defines a number of escape sequences, of which we can use the literal and the scalar function escape sequences. An ODBC escape sequence is enclosed in curly brackets {}. The escape sequence is recognized by the data source and translated into the data source equivalent.

Literals

The ODBC literal escape sequence is used to define date, time and datetime literals. These are defined by using {literal-type 'literal'}. The following literal types are defined:

Type  Meaning    Format of value
d     Date       yyyy-mm-dd
t     Time       hh:mm:ss
ts    Timestamp  yyyy-mm-dd hh:mm:ss

Scalar functions

The ODBC scalar function escape sequence is used to define a call to a scalar function. A scalar function returns a value for every row. Function calls are defined by using {fn scalar-function}. Several different types of functions have been defined: string functions, numeric functions, date and time functions, system functions and data type conversion functions.
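The escape sequences above can be built mechanically. The helpers below are our own illustration of the notation; the {d ...}, {t ...}, {ts ...} and {fn ...} forms themselves come from ODBC, while the function names, the example query and the column names in it are ours.

```python
# Build ODBC escape sequences for literals and scalar function calls,
# so a query or default value can embed them in a portable way.
def date_literal(yyyy, mm, dd):
    return "{d '%04d-%02d-%02d'}" % (yyyy, mm, dd)

def time_literal(hh, mm, ss):
    return "{t '%02d:%02d:%02d'}" % (hh, mm, ss)

def timestamp_literal(yyyy, mo, dd, hh, mi, ss):
    return "{ts '%04d-%02d-%02d %02d:%02d:%02d'}" % (yyyy, mo, dd, hh, mi, ss)

def scalar_fn(call):
    return "{fn %s}" % call

# A hypothetical query fragment using both escape sequence types:
query = ("SELECT " + scalar_fn("UCASE (Name)") +
         " FROM Customer WHERE Since > " + date_literal(1999, 12, 31))
```

The data source translates each sequence into its own syntax, which is exactly why the curly-bracket notation is preferable to writing backend-specific date literals directly.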

String Functions

ASCII (string_exp)
    Returns the ASCII code value of the leftmost character of string_exp as an integer.
BIT_LENGTH (string_exp)
    Returns the length in bits of the string expression.
CHAR (code)
    Returns the character that has the ASCII code value specified by code. The value of code should be between 0 and 255.
CHAR_LENGTH (string_exp)
CHARACTER_LENGTH (string_exp)
    Returns the length in characters of the string expression, if the string expression is of a character data type; otherwise returns the length in bytes of the string expression.
CONCAT (string_exp1, string_exp2)
    Returns a character string that is the result of concatenating string_exp2 to string_exp1.
DIFFERENCE (string_exp1, string_exp2)
    Returns an integer value that indicates the difference between the values returned by the SOUNDEX function for string_exp1 and string_exp2.
INSERT (string_exp1, start, length, string_exp2)
    Returns a character string where length characters have been deleted from string_exp1 at start and where string_exp2 has been inserted into string_exp1 at start.

LCASE (string_exp)
    Returns a string where all uppercase characters in string_exp have been converted to lowercase.
LEFT (string_exp, count)
    Returns the count leftmost characters of string_exp.
LENGTH (string_exp)
    Returns the number of characters in string_exp, excluding trailing blanks.
LOCATE (string_exp1, string_exp2 [,start])
    Returns the starting position of the first occurrence of string_exp1 within string_exp2. The search begins at the first character position in string_exp2 unless the optional argument start is specified; if start is specified, the search begins at the character position indicated by the value of start. Character positions start at 1. The function returns 0 if string_exp1 is not found in string_exp2.
LTRIM (string_exp)
    Returns string_exp with leading blanks removed.
OCTET_LENGTH (string_exp)
    Returns the length in bytes of string_exp.
POSITION (string_exp1 IN string_exp2)
    Returns the position of string_exp1 in string_exp2.
REPEAT (string_exp, count)
    Returns a string composed of string_exp repeated count times.
REPLACE (string_exp1, string_exp2, string_exp3)
    Replaces occurrences of string_exp2 in string_exp1 with string_exp3.

RIGHT (string_exp, count)
    Returns the count rightmost characters of string_exp.
RTRIM (string_exp)
    Returns string_exp with trailing blanks removed.
SOUNDEX (string_exp)
    Returns a character string representing the sound of the words in string_exp.
SPACE (count)
    Returns a string consisting of count spaces.
SUBSTRING (string_exp, start, length)
    Returns the substring from string_exp beginning at start that is length characters long.
UCASE (string_exp)
    Returns a string where all lowercase characters in string_exp have been converted to uppercase.

Numeric Functions

ABS (numeric_exp)
    Absolute value.
ACOS (float_exp)
    Arc cosine as an angle expressed in radians.
ASIN (float_exp)
    Arc sine as an angle expressed in radians.
ATAN (float_exp)
    Arc tangent as an angle expressed in radians.
ATAN2 (float_exp1, float_exp2)
    Arc tangent of the x and y coordinates specified by float_exp1 and float_exp2, as an angle expressed in radians.

CEILING (numeric_exp)
    Smallest integer greater than or equal to numeric_exp.
COS (float_exp)
    The cosine of float_exp where float_exp is an angle expressed in radians.
COT (float_exp)
    The cotangent of float_exp where float_exp is an angle expressed in radians.
DEGREES (numeric_exp)
    Converts to the number of degrees where numeric_exp is an angle expressed in radians.
EXP (float_exp)
    Exponential value of float_exp.
FLOOR (numeric_exp)
    Largest integer less than or equal to numeric_exp.
LOG (float_exp)
    Natural logarithm of float_exp.
LOG10 (float_exp)
    Base 10 logarithm of float_exp.
MOD (integer_exp1, integer_exp2)
    Remainder (modulus) of integer_exp1 divided by integer_exp2.
PI ()
    Constant value of pi.
POWER (numeric_exp, integer_exp)
    Numeric_exp to the power of integer_exp.
RADIANS (numeric_exp)
    Converts to the number of radians where numeric_exp is an angle expressed in degrees.
RAND ([integer_exp])
    Random number using the optional integer_exp as seed value.
ROUND (numeric_exp, integer_exp)
    Numeric_exp rounded to integer_exp positions right of the decimal separator.
SIGN (numeric_exp)
    Returns an indicator of the sign of numeric_exp: -1 if numeric_exp is less than zero, 0 if it is zero, 1 if it is greater than zero.
SIN (float_exp)
    The sine of float_exp where float_exp is an angle expressed in radians.
SQRT (float_exp)
    Square root of float_exp.
TAN (float_exp)
    The tangent of float_exp where float_exp is an angle expressed in radians.
TRUNCATE (numeric_exp, integer_exp)
    Numeric_exp truncated to integer_exp positions right of the decimal separator.

Time, Date and Interval Functions

CURRENT_DATE ()
    Current date.
CURRENT_TIME ([time_precision])
    Current local time; the optional time_precision argument determines the seconds precision of the returned value.
CURRENT_TIMESTAMP ([timestamp_precision])
    Current local date and local time as a timestamp value; the optional timestamp_precision argument determines the seconds precision of the returned value.
CURDATE ()
    Current date.
CURTIME ()
    Current local time.
DAYNAME (date_exp)
    The name of the day of the passed date.
DAYOFMONTH (date_exp)
    The number of the day of the month in date_exp in the range 1-31.
DAYOFWEEK (date_exp)
    The number of the day of the week in date_exp in the range 1-7, where 1 is Sunday.
DAYOFYEAR (date_exp)
    The number of the day of the year in date_exp in the range 1-366.
EXTRACT (extract_field FROM extract_source)
    Extracts the extract_field portion of extract_source. Extract_source is a datetime or interval expression. Extract_field can be one of the following keywords: YEAR, MONTH, DAY, HOUR, MINUTE, SECOND.
HOUR (time_exp)
    The hour of time_exp in the range 0-23.
MINUTE (time_exp)
    The minute of time_exp in the range 0-59.
MONTH (date_exp)
    The month of date_exp in the range 1-12.
MONTHNAME (date_exp)
    The name of the month in date_exp.
DataFlex Connectivity Kit for ODBC CUR_DATE () CURTIME () DAYNAME (date_exp) DAYOFMONTH (date_exp) DAYOFWEEK (date_exp) DAYOFYEAR (date_exp) EXTRACT (extract_field FROM extract_source) HOUR (time_exp) MINUTE (time_exp) MONTH (date_exp) MONTHNAME (date_exp) optional argument determines the seconds precision of the returned value. Current date. Current local time. The name of the day of the passed date. The number of the day in the month in date_exp in the range of 1 31. The number of the day of the week in date_exp in the range 1-7 where 1 is Sunday. The number of the day of the year in date_exp in the range 1-366. Extracts the extract_field portion of the extract_source. Extract_source is a datetime or interval expression. Extract_field can be one of the following keywords: YEAR, MONTH, DAY, HOUR, MINUTE, SECOND. The hour of the time_exp in the range 0 23. The minute in time_exp in the range 0 59. The month in date_exp in the range 1 12. The name of the month in date_exp. ODBC Escape sequences Version 2.2 139

NOW ()
    Current date and time as a timestamp value.
QUARTER (date_exp)
    The quarter in date_exp, in the range 1-4, where quarter 1 represents January 1 through March 31.
SECOND (time_exp)
    The seconds in time_exp, in the range 0-59.
TIMESTAMPDIFF (interval, timestamp_exp1, timestamp_exp2)
    The integer number of intervals by which timestamp_exp2 is greater than timestamp_exp1. Interval can be one of the following keywords: SQL_TSI_FRAC_SECOND, SQL_TSI_SECOND, SQL_TSI_MINUTE, SQL_TSI_HOUR, SQL_TSI_DAY, SQL_TSI_WEEK, SQL_TSI_MONTH, SQL_TSI_QUARTER, SQL_TSI_YEAR.
WEEK (date_exp)
    The week number of the year in date_exp, in the range 1-53.
YEAR (date_exp)
    The year of date_exp.

System Functions

DATABASE ()
    The name of the database.

IFNULL (exp, value)
    If exp is null, value is returned. If exp is not null, exp is returned.
USER ()
    The user name. This may be different from the login name.

Data Type Conversion Functions

ODBC defines one explicit data type conversion function, CONVERT. Its syntax is CONVERT (value_exp, data_type). The function returns value_exp converted to data_type, where data_type is one of the following keywords:

SQL_BIGINT, SQL_BINARY, SQL_BIT, SQL_CHAR, SQL_DECIMAL, SQL_DOUBLE, SQL_FLOAT, SQL_INTEGER, SQL_INTERVAL_MONTH, SQL_INTERVAL_YEAR, SQL_INTERVAL_YEAR_TO_MONTH, SQL_INTERVAL_DAY, SQL_INTERVAL_HOUR, SQL_INTERVAL_MINUTE, SQL_INTERVAL_SECOND, SQL_INTERVAL_DAY_TO_HOUR, SQL_INTERVAL_DAY_TO_MINUTE, SQL_INTERVAL_DAY_TO_SECOND, SQL_INTERVAL_HOUR_TO_MINUTE, SQL_INTERVAL_HOUR_TO_SECOND, SQL_INTERVAL_MINUTE_TO_SECOND, SQL_LONGVARBINARY, SQL_LONGVARCHAR, SQL_NUMERIC, SQL_REAL, SQL_SMALLINT, SQL_TINYINT, SQL_TYPE_DATE, SQL_TYPE_TIME, SQL_TYPE_TIMESTAMP, SQL_VARBINARY, SQL_VARCHAR, SQL_WCHAR, SQL_WLONGVARCHAR, SQL_WVARCHAR
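All of the scalar functions in this appendix are embedded in SQL text as {fn ...} escape tokens, which the ODBC driver translates into the backend's native syntax. As a small illustration (not part of the Connectivity Kit itself), a helper that assembles such tokens as plain strings:

```python
def fn_escape(func, *args):
    """Build an ODBC scalar-function escape sequence, e.g. {fn UCASE('abc')}.

    The arguments are joined verbatim; quoting literals is the caller's job.
    """
    return "{fn %s(%s)}" % (func, ", ".join(str(a) for a in args))


def convert_escape(value_exp, data_type):
    """Build the explicit CONVERT escape, e.g. {fn CONVERT(price, SQL_INTEGER)}."""
    return fn_escape("CONVERT", value_exp, data_type)
```

The resulting token can be placed anywhere a value expression is allowed in the SQL sent through ODBC; the column name "price" used below is purely hypothetical.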

Appendix B ODBC Data Sources

There are two types of Data Sources: machine Data Sources and file Data Sources. Both types contain similar information; they differ in the way the information is stored. Because of these differences, they are used in somewhat different manners.

Machine Data Sources are stored on the client system. Associated with the Data Source is all the information the ODBC Manager and database driver need to connect to the specified database. There are two Data Source subtypes: User and System Data Sources. A User Data Source can be used by one specific user of the machine; a System Data Source can be used by all users of the machine where the Data Source is defined.

File Data Sources are stored in a file with extension .dsn (in ASCII format). The file Data Source stores all the information the ODBC Manager and database driver need to connect to the specified database. The file can be manipulated like any other text file.

Creating a Data Source

Data Sources are created with a program called the ODBC Administrator. The ODBC Administrator can be started from the Control Panel or from Database Builder. Some database environments add a shortcut to the ODBC Administrator to the Start menu.
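A file Data Source, being an ordinary text file, is typically laid out in INI style with an [ODBC] section; the exact keywords inside it are driver specific. A sketch of reading one into a dictionary (the sample contents below are hypothetical, not from any particular driver):

```python
import configparser

# Hypothetical .dsn contents; a real file DSN uses driver-specific keywords.
SAMPLE_DSN = """\
[ODBC]
DRIVER = SQL Server
SERVER = dbserver
DATABASE = orders
"""

def read_file_dsn(text):
    """Parse the [ODBC] section of a file Data Source into a plain dict.

    configparser lowercases option names by default, so lookups use
    lowercase keys such as 'driver'.
    """
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return dict(cp["ODBC"])

attrs = read_file_dsn(SAMPLE_DSN)
```

Because the file can be read like this by any tool, file Data Sources are easy to copy between machines or share on a network drive.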

When adding a Data Source, the ODBC Administrator presents a list of all available drivers. The user chooses one driver and the Administrator passes control to the driver setup logic so it can obtain all required information. The information required to set up a Data Source for a driver is driver (and backend) specific. Depending on the Data Source type, the information is stored in the machine's registry or in a disk file.

Information for a particular Data Source is stored in the following way:

User
    User Data Sources are stored in the registry at HKEY_CURRENT_USER\SOFTWARE\ODBC\ODBC.INI; under this key a subkey for every User Data Source can be found. User Data Sources are typically used in an environment where there is only one user, or as a test for ODBC connectivity.
System
    System Data Sources are stored in the registry at HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI; under this key a subkey for every System Data Source can be found. System Data Sources are typically used when there is a need to access data from one machine. This is usually a server-type process like WebApp Server.
File
    File Data Sources are stored in disk files. For specifics on the contents of the file Data Source, see your ODBC driver's documentation. File Data Sources can be shared among all users that have access to the disk file.

When an application needs to access the data defined in a Data Source, it calls the ODBC Manager and passes the name of the Data Source. The ODBC Manager identifies the driver to use, loads it, and passes it the Data Source name. The driver uses the Data Source name to connect to the Data Source. Connecting to a Data Source may involve prompting the user for login information.
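The Data Source name handed to the ODBC Manager is commonly packaged, together with any extra attributes such as login information, as a semicolon-separated connect string. A hedged sketch of assembling one (the attribute names are illustrative, not a fixed list):

```python
def connect_string(dsn, **attrs):
    """Assemble a semicolon-separated ODBC connect string from a Data
    Source name plus optional extra attributes (e.g. uid, pwd).

    Extra attributes are upper-cased and sorted for a stable result.
    """
    parts = ["DSN=%s" % dsn]
    parts += ["%s=%s" % (key.upper(), val) for key, val in sorted(attrs.items())]
    return ";".join(parts)
```

For example, connect_string("Orders", uid="scott", pwd="tiger") yields a string the driver can use to connect without prompting; omitting the credentials typically causes the driver to prompt the user instead.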

Appendix C Configuration file ODBC_DRV.INT

The general behavior of the DataFlex Connectivity Kit for ODBC can be configured through the configuration file odbc_drv.int. The configuration file is read when the Connectivity Kit initializes. Configuration files can be located anywhere in DFPATH. In general, one configuration file per install is enough. There are situations where different configurations are needed for different deploy environments on one machine or network. In that case the configuration file should be placed in the deploy environment rather than in the overall DataFlex environment.

A sample configuration file, ODBC_DRVSample.INT, is installed in the DataFlex bin directory. It can be used as the basis for an environment's configuration file.

The keywords are presented in the following format:

<Keyword>
    Value                   <Possible values>
    Associated attribute    <Attribute_name> (<Type>)

Where:

<Keyword>
    The keyword to set in the intermediate file.
<Possible values>
    A list of values or a description of possible values for the keyword.
<Attribute_name>
    The name of the attribute associated with the keyword.
<Type>
    The type of the associated attribute.

The supported keywords for the global intermediate file are:

Cache_path
    Value                   Path to a valid directory
    Associated attribute    None

Sets up a directory to store cache files. By default, cache files will be stored in the same directory as the corresponding intermediate file.

Default_Default_ASCII
    Value                   Default specification
    Associated attribute    None

Sets up the default value that will be used when an ASCII field is created. Fields can be created during conversion or within a restructure operation.

Default_Default_Binary
    Value                   Default specification
    Associated attribute    None

Sets up the default value that will be used when a Binary field is created. Fields can be created during conversion or within a restructure operation.

Default_Default_Date
    Value                   Default specification
    Associated attribute    None

Sets up the default value that will be used when a Date field is created. Fields can be created during conversion or within a restructure operation.

Default_Default_Numeric
    Value                   Default specification
    Associated attribute    None

Sets up the default value that will be used when a Numeric field is created. Fields can be created during conversion or within a restructure operation.

Default_Default_Text
    Value                   Default specification
    Associated attribute    None

Sets up the default value that will be used when a Text field is created. Fields can be created during conversion or within a restructure operation.

Default_Nullable_ASCII
    Value                   Integer value
    Associated attribute    None

Specifies whether ASCII fields allow null values by default. Null values are not allowed if the attribute is set to 0 (zero); all other integer values allow null values.

Default_Nullable_Binary
    Value                   Integer value
    Associated attribute    None

Specifies whether Binary fields allow null values by default. Null values are not allowed if the attribute is set to 0 (zero); all other integer values allow null values.

Default_Nullable_Date
    Value                   Integer value
    Associated attribute    None

Specifies whether Date fields allow null values by default. Null values are not allowed if the attribute is set to 0 (zero); all other integer values allow null values.

Default_Nullable_Numeric
    Value                   Integer value
    Associated attribute    None

Specifies whether Numeric fields allow null values by default. Null values are not allowed if the attribute is set to 0 (zero); all other integer values allow null values.

Default_Nullable_Text
    Value                   Integer value
    Associated attribute    None

Specifies whether Text fields allow null values by default. Null values are not allowed if the attribute is set to 0 (zero); all other integer values allow null values.

Default_Table_Character_Format
    Value                   ANSI, OEM
    Associated attribute    None

The default table character format to use when creating new tables.

Default_Use_Dummy_Zero_Date
    Value                   Integer value
    Associated attribute    None

Sets up the default value of the DF_FILE_USE_DUMMY_ZERO_DATE attribute for new tables created in a restructure operation. Dummy zero dates will not be used if set to 0; all other integer values will use dummy zero dates.

Driver_Date_Format
    Value                   EUROPEAN, MILITARY, USA
    Associated attribute    None

ODBC demands that drivers use the military format (yyyy-mm-dd) in the supported SQL. Nevertheless, there are drivers that do not use this format but rather a format that depends on the country settings of the machine or the database client software. For those environments the driver date format and separator can be set. A configuration file that sets the date format to European would look like:

DRIVER_DATE_FORMAT EUROPEAN
DRIVER_DATE_SEPARATOR /

Driver_Date_Separator
    Value                   Character
    Associated attribute    None

Sets the separator character used in dates for drivers that do not use the military format; see Driver_Date_Format above for the full explanation and an example.

Driver_Decimal_Separator
    Value                   Character
    Associated attribute    None

ODBC demands that drivers use the US decimal separator in the supported SQL. Nevertheless, there are drivers that do not use this separator but rather a separator that depends on the country settings of the machine or the database client software. For those environments the driver separators can be set. A configuration file that sets the numeric format to European would look like:

DRIVER_DECIMAL_SEPARATOR ,
DRIVER_THOUSANDS_SEPARATOR .
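To illustrate the effect of the Driver_Date_Format and Driver_Date_Separator keywords described above, here is a small sketch of rendering a date the way a non-military driver would expect it. The manual does not spell out the component order of EUROPEAN and USA, so the day-month-year and month-day-year orders below are assumptions:

```python
import datetime

def format_driver_date(d, date_format="MILITARY", sep="-"):
    """Render a date per the configured driver date format and separator.

    Assumed component orders: EUROPEAN = day-month-year,
    USA = month-day-year, MILITARY (the ODBC default) = year-month-day.
    """
    if date_format == "EUROPEAN":
        parts = (d.day, d.month, d.year)
    elif date_format == "USA":
        parts = (d.month, d.day, d.year)
    else:  # MILITARY
        parts = (d.year, d.month, d.day)
    return sep.join("%02d" % p for p in parts)
```

With the European configuration shown above, a date such as March 1, 2009 would be emitted as 01/03/2009 instead of the military 2009-03-01.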

Driver_Thousands_Separator
    Value                   Character
    Associated attribute    None

ODBC demands that drivers use the US thousands separator in the supported SQL. Nevertheless, there are drivers that do not use this separator but rather a separator that depends on the country settings of the machine or the database client software. For those environments the driver separators can be set. A configuration file (ODBC_DRV.INT) that sets the numeric format to European would look like:

DRIVER_DECIMAL_SEPARATOR ,
DRIVER_THOUSANDS_SEPARATOR .

Dummy_Zero_Date_Value
    Value                   String value with dummy date value
    Associated attribute    None

Sets up the value of the dummy zero date. This should be set to the lowest possible date value that the database supports. In most databases the default 0001-01-01 can be used. Some databases, however, support a different lowest possible date value.

Error_Debug_Mode
    Value                   Integer value
    Associated attribute    None

Error debug mode is off if the attribute is set to 0 (zero); all other integer values switch the error debug mode on. When the error debug mode is on, all errors generated by the database backend are displayed in a message box. This mode can be used in procedural environments where the screen space reserved to show error messages is often too small to show the complete text of the error message.

Max_Active_Statements
    Value                   Integer value
    Associated attribute    None

The maximum number of concurrently active statements allowed per connection; 0 (zero) means there is no limit. This attribute can be queried through ODBC, but we have found that some drivers do not return a reliable value. The value of the attribute is used by the Connectivity Kit to determine the size of the statement pool. The Connectivity Kit keeps track of the statements used per connection in a Most Recently Used sorted list. If the maximum number of statements is in use, the least recently used statement will be freed whenever an additional statement is required. Some ODBC drivers return a value of 1 for this attribute, which makes the Connectivity Kit free and re-allocate statements all the time. For some drivers the value is accurate (e.g. the MS Jet Engine ODBC Driver), but for others it is not (e.g. the Oracle ODBC driver). If the attribute value is not accurate, you can override it by setting the Max_Active_Statements keyword.

Report_Cache_Errors
    Value                   Integer value
    Associated attribute    None

Switches reporting of cache read errors on or off. Reporting is off if the attribute is set to 0 (zero); all other integer values switch reporting on. By default, cache read errors are not reported.

Use_Cache
    Value                   Integer value
    Associated attribute    None

Switches the use of structure caching on or off. Structure caching is off if the attribute is set to 0 (zero); all other integer values switch structure caching on. By default, structure caching is on.
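The Most Recently Used bookkeeping that Max_Active_Statements governs can be sketched as follows. This is a simplification for illustration, not the Kit's actual implementation; the statement texts and handle objects are stand-ins:

```python
from collections import OrderedDict

class StatementPool:
    """Sketch of a per-connection statement pool with MRU eviction.

    When max_active statements are already in use, the least recently
    used one is freed to make room. (A limit of 0, meaning "no limit",
    is not modeled here.)
    """
    def __init__(self, max_active):
        self.max_active = max_active
        self._stmts = OrderedDict()   # statement text -> handle, MRU at the end
        self.freed = []               # statements evicted so far

    def get(self, sql):
        if sql in self._stmts:
            self._stmts.move_to_end(sql)                 # mark most recently used
        else:
            if len(self._stmts) >= self.max_active:
                victim, _ = self._stmts.popitem(last=False)  # free the LRU one
                self.freed.append(victim)
            self._stmts[sql] = object()                  # stand-in for a handle
        return self._stmts[sql]

pool = StatementPool(max_active=2)
pool.get("SELECT 1"); pool.get("SELECT 2")
pool.get("SELECT 1")   # reuse: SELECT 1 becomes most recently used
pool.get("SELECT 3")   # pool full: SELECT 2 is least recently used, so it is freed
```

This also shows why a driver that wrongly reports a maximum of 1 is expensive: every new statement would evict the previous one, forcing constant free and re-allocate cycles.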

Use_Cache_Expiration
    Value                   Integer value
    Associated attribute    None

Switches expiration checking of the structure caching intermediate file on or off. Expiration checking is off if the attribute is set to 0 (zero); all other integer values switch expiration checking on. By default, expiration checking is on.

Sample configuration file

A configuration file that sets up date columns not to allow null values and to use the system date as default value looks like:

DEFAULT_NULLABLE_DATE 0
DEFAULT_DEFAULT_DATE {fn current_date()}
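The keyword/value layout shown in the sample above is simple to process mechanically. A sketch of reading such a file into a dictionary, assuming one keyword per line with everything after the first space as the value (the Kit's own parsing rules may differ in detail):

```python
def parse_int_file(text):
    """Parse keyword/value lines of an ODBC_DRV.INT-style file.

    Assumes one 'KEYWORD value' pair per line; blank lines are skipped
    and keywords are normalized to uppercase (they are case-insensitive
    in the samples shown in this appendix).
    """
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        keyword, _, value = line.partition(" ")
        settings[keyword.upper()] = value.strip()
    return settings

sample = """DEFAULT_NULLABLE_DATE 0
DEFAULT_DEFAULT_DATE {fn current_date()}
"""
settings = parse_int_file(sample)
```

Note that the value may itself contain spaces and braces, as the {fn current_date()} escape sequence does, which is why only the first space on each line separates keyword from value.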

Appendix D Getting Support

How to Get Technical Support

Support for this product is provided exclusively through the Data Access Client/Server Newsgroup:
news://dataaccess.com/dac-public-newsgroups.connectivity-kit-support

How to Contact Data Access

Where to find us on-line:
Internet: http://www.dataaccess.com

Where to write to us:
Data Access Corporation
14000 SW 119th Avenue
Miami, FL 33186