Migration of MS Access Databases to Mendix Platform: Schema Migration and Foreign Key Recovery
Migration of MS Access Databases to Mendix Platform: Schema Migration and Foreign Key Recovery

Theodora Boudale
September 28, 2014, 52 pages

Supervisor: Tijs van der Storm
Host organisation: Mendix Technology BV
Internship Supervisor: Jouke Waleson

Universiteit van Amsterdam, Faculteit der Natuurwetenschappen, Wiskunde en Informatica
Master Software Engineering
Contents

Abstract
1 Introduction
  Background and Context
  Problem Description
  Research Questions
  Outline
2 Problem Analysis
  Schema Migration
  Data Migration
  Queries Migration
  Overview of our approach
3 Access2Mendix: Access Schema Migration
  Solution: Schema Mappings
  Implementation
  Evaluation
4 Foreign Key Discovery
  Approach: Inclusion Dependencies
  Implementation
  Evaluation
5 Discussion
  Schema Migration
  Foreign Key Recovery
  Queries Migration
6 Related Work
7 Conclusion and Future Work
  Summary and Conclusion
  Future Work
Bibliography
Appendices
A Access Field & Table Properties
B Access Expression Language - Mendix Expressions
C Access SQL - Mendix XPath Operators
D Access SQL - Mendix OQL Predicates & Operators
Abstract

This thesis is concerned with the migration of Microsoft Access databases to Mendix Platform. We investigate similarities and differences between the data models of the two systems, discuss issues regarding data migration, and examine possible options for migrating database queries. A tool was implemented for the automated migration of an MS Access schema to a Mendix Domain Model, and we present its possibilities and limitations. Additionally, since many real-world databases lack an explicit definition of foreign key relationships, we implemented a tool for discovering those constraints. Our approach was to examine the data for inclusion dependencies, and our results show that these constraints are successfully retrieved. However, a large number of false positives is also discovered in databases that make extensive use of Autonumber fields.
Chapter 1 Introduction

1.1 Background and Context

Legacy information systems play an important role within many organizations. They usually operate daily, supporting important business processes, and they are a source of valuable business information, since many of them represent a long-term investment of a company. However, these systems often cause problems for their host organizations. Some of them may be poorly designed or grow complex over time. As a result, maintenance becomes a very time-consuming and expensive task. Adding new functionality or fixing a bug may require a significant amount of time and make the program even more complex. Lack of documentation can make this task more difficult still. Additionally, the skill shortage for obsolete technologies, for example COBOL, makes it difficult for companies to find experts to support some legacy systems [HCHH08]. For these reasons many organizations choose to migrate from one platform or one language to another.

There are several examples of software migration projects in the literature, for example converting from one language to another, converting to a newer version of a language, or migrating to a different type of database [AGA+06]. A software migration project requires the investigation of a major issue: what can be migrated from the source system to the target one. A mapping between the components of the two systems needs to be determined. These components may have different semantics, and a major challenge is the resolution of this semantic gap. As Andrade et al. note [AGA+06], the width of this gap determines the feasibility and complexity of such a conversion: the wider the gap, the less feasible, more complex and less automated the migration is.

In migration projects one may also have to deal with underspecified data sources. Information about the database schema might be missing, and there might be no documentation to aid in its recovery.
For instance, many relational databases lack an explicit definition of Foreign Key Relationships. The recovery of such constraints is an interesting challenge and a necessary step for a successful migration.

1.2 Problem Description

Mendix is a company based in Rotterdam that offers a Model-Driven Development (MDD) tool for developing web and mobile applications. MDD tools aim to separate the business logic of a program from its implementation technology [MRA05]. They allow developers to work on a higher level of
abstraction by creating models to capture a program's structure and behavior. These models are subsequently used to automatically generate third-generation language code [MRA05].

Many customers of Mendix have applications built with Microsoft Access and want to migrate them to Mendix Platform. Mendix wants to offer these customers a tool that can perform an automated migration of MS Access applications to its platform. Additionally, many of those customers' applications do not have Foreign Key constraints explicitly defined. The automated discovery of such constraints would be a great aid in correctly migrating the Relationships of an MS Access schema to Mendix.

1.2.1 Overview of Platforms

Microsoft Access is a 4GL application design and development tool. It combines the Microsoft Jet database engine with software development tools, and can be used for the development of desktop and web applications. An MS Access application consists of the Database, the User Interface, and the Application Code. In Table 1.1 we present the main elements of an MS Access application from a developer's viewpoint, categorized along these three dimensions.

Database
- Table: A set of data elements organized in rows and columns.
- Query: Queries are used to retrieve or operate on data.

User Interface
- Form: An object that is used to create the user interface for an application.
- Report: An object that is used to display or summarize data, usually formatted for printing.

Application Code
- Macro: A script for performing a task. It allows automation of tasks and adds functionality to forms, reports and controls.
- Module: A collection of Visual Basic declarations, statements and procedures, stored as one named unit.

Table 1.1: Elements of an MS Access Application

Mendix Business Modeler is the tool that allows for the development of Mendix applications through the creation of models. There are three main model types that can be used to build a Mendix application.
In Table 1.2 we give a brief overview of them.

- Domain Model: A data model that describes information in the application domain. This model is used for the generation of the database schema.
- Pages: Pages are used to create the interface for the end-user. A number of widgets are available.
- Microflows: Process models that express the logic of the application.

Table 1.2: Elements of a Mendix application
1.2.2 Scope

In this project we focus on the first dimension of the migration, the Database. Our approach is based on the Database First migration method [WLB+97]. An MS Access database consists of the Tables (schema and data) and the Queries. In this research we will explore mappings between MS Access schema elements and Mendix Domain Model elements. We will discuss the data migration process, and finally investigate options for migrating MS Access Queries. Afterwards we will create a tool for the automatic migration of an MS Access database schema to a Mendix Domain Model. Additionally, since many real-world applications do not have Relationships explicitly defined, we will create a tool for recovering Foreign Key constraints.

We use version 2010 of Microsoft Access together with Mendix Business Modeler. MS Access can also be part of a more complex system, comprising other databases and programs. In this project we focus on applications built with MS Access only. We do not consider databases that make use of passwords or applications with multiple user accounts, since user-level security is not available in Access 2010. Microsoft Access might be referred to simply as Access in the rest of this thesis.

1.3 Research Questions

The main question we will try to answer is: Can we automatically migrate a Microsoft Access database to Mendix Business Modeler? We have the following subquestions:

- Which features of an Access database schema can be mapped to Mendix Domain Model?
- Which features cannot be mapped, and what are the consequences?
- Can we automatically migrate MS Access Queries? When or where is human intervention needed?
- Can we recover missing Foreign Key Relationships from an Access database?

1.4 Outline

The remainder of this thesis is structured as follows: In Chapter 2 we give a detailed problem analysis along the three dimensions of the database migration process, namely the schema, data, and queries.
In Chapter 3 we present our solution for schema migration. We discuss the tool we implemented, the features it migrates, and its limitations. Chapter 4 is concerned with foreign key recovery. In Chapter 5 we discuss our results in relation to our research questions. In Chapter 6 we give an overview of related work. Finally, in Chapter 7 the conclusion and directions for future work are presented.
Chapter 2 Problem Analysis

2.1 Schema Migration

Schema conversion is the translation of the legacy database structure into an equivalent database structure expressed in the new technology [HCHH08]. This process requires identifying similar or semantically related elements between the source and target systems [DSDR07]. In this section we present MS Access schema elements and explore possible mappings to Mendix Domain Model elements.

2.1.1 Main Schema Elements

Microsoft Access is a relational database management system; data is represented in tuples, grouped into Tables. Each Field of a Table retrieves its values from a certain domain (it has a certain Data Type). Every Table can have exactly one Primary Key that uniquely identifies each row in the table. Additionally, Tables may relate to other Tables with a Foreign Key Relationship. Indexes may also be defined on Fields to speed up searching and sorting.

The Domain Model in Mendix Business Modeler is based on concepts similar to the relational model. It consists of Entities and their relations, Associations. An Entity can have a number of Attributes, each one with its own type and properties. Indexes may also be defined on Attributes. We present the mapping between the main constructs of the two systems in Table 2.1.

MS Access Schema Elements -> Mendix Domain Model Elements
Tables -> Entities
Fields -> Attributes
Relationships -> Associations

Table 2.1: MS Access Schema - Mendix Domain Model Elements

2.1.2 Field Data Types

Every Field in an Access Table has a certain Data Type that determines what kind of data can be stored in that Field. Similarly, Attributes in Mendix Domain Model have their own types. We examined the available Data Types in both systems and present them in Table 2.2.
MS Access Field Data Types -> Mendix Domain Model Attribute Types
Text -> String
Memo -> String
Number (Byte) -> Integer
Number (Integer) -> Integer
Number (Long Integer) -> Long Integer
Number (Single) -> Float
Number (Double) -> Float
Number (Replication ID) -> Not available as an attribute type
Number (Decimal) -> Float
Date/Time -> Date and Time
Currency -> Currency
Autonumber (Increment) -> Autonumber (Long Integer)
Autonumber (Random) -> Not available as an attribute type
Autonumber (Replication ID) -> Not available as an attribute type
Yes/No -> Boolean
Hyperlink -> Not available as an attribute type
OLE Object -> Not available as an attribute type
Attachment -> Not available as an attribute type
Calculated -> Not available as an attribute type
Lookup Field -> Not available as an attribute type

Table 2.2: Access Field - Mendix Attribute Data Types

Most Field Data Types have an equivalent Attribute Type in Mendix. As we can see in Table 2.2, there are mismatches in certain cases, which we discuss below.

Replication ID is a Data Type used in database replication, a process where multiple copies of an Access application are created and used in locations that are not always connected to each other. A Field of that type holds a Globally Unique Identifier for every row that distinguishes it from any other row in the replica set. This functionality is not available in Mendix Platform.

Autonumbers with random values are not available in Mendix Domain Model, and neither are Hyperlinks. Fields of the latter Data Type store text that can be either a UNC path or a URL, and can be followed directly. A possible option would be mapping them to a String Attribute, while the possibility for it to act as a Hyperlink could be provided in the Mendix GUI via a special widget.

OLE Object and Attachment Fields store files. OLE Object Fields can hold exactly one file for each row, while Attachment Fields can hold multiple files. Such an Attribute Type is not available in Mendix, although a possible workaround is available. An Entity in Mendix can inherit from System.FileDocument, a predefined Entity that can represent a file.
OLE Object and Attachment Fields could be mapped to such an Entity. An Association should also be created between this Entity and the Entity mapped to the original Access Table. The Association would have to be 1-1 for OLE Object Fields and 1-* for Attachment Fields.

Calculated Fields are calculated based on other Fields of the same Table. Although there is no such
Attribute Type in Mendix, it is possible to specify that a certain Attribute is calculated and retrieves its values from a Microflow. The Expressions used to calculate those Fields in Access use operators and functions of the Access Expression Language. Thus, the MS Access Expression Language needs to be mapped to Mendix Microflow Expressions. The difficulty that arises here is that certain functions and operators in Access do not have an equivalent in Mendix. In Appendix B we present a detailed mapping of the two expression languages' operators. The mapping of functions is not provided due to their very large number (over 100) and limitations of time.

Lookup Fields are not a separate Data Type, although they appear as such in Access. A Field defined as a Lookup has an actual Data Type of Text or Number and retrieves its values either from a Table Column, a Query Column, or a user-defined List. In the first two cases we have in essence a Foreign Key Field to another Table Column. However, in the case of a user-defined List, we have an extra construct with data that also needs to be migrated. We distinguish two cases:

- One-column value list: A similar construct exists in Mendix Domain Model, the Enumeration. The latter holds a List of user-defined values and can be assigned to an Attribute.
- Multi-column value list: A Field of that type retrieves its values from a user-defined multi-column List. Multi-column Enumerations are not available in the Domain Model.

Multi-value Lookup Fields: Fields defined as Lookup may be set to allow Multiple Values. This means that for a single row, the Field can hold multiple values from the Table/Query/List that it retrieves values from. This is equivalent to a Many-To-Many Relationship, where the implementation of the middle Table is hidden from the user.
In Mendix Domain Model, Attributes cannot store multiple values for a single object.

2.1.3 Field & Table Properties

A Table Field in Access has a number of Properties that define its characteristics or aspects of its behavior. Certain Properties are relevant to the data model, for example the Field Size Property that determines the length in characters of Text Fields. However, there are also Properties that specify GUI-related settings and are not related to the database schema. For the schema migration process we are interested only in those Properties that are related to the Access data model. In Appendix A we present a detailed list of all Properties available for Access Fields and Tables and possible mappings to Mendix Platform.

2.1.4 Primary Keys

In relational databases a Primary Key consists of one or more Fields that uniquely identify each row of a Table. The concept of Primary Keys is not present in Mendix Domain Model. The issue of unique identifiers for each row is dealt with internally in the physical database: a unique identifier column is created for every table when the actual database is created, and its values are filled automatically. It is possible, though, to preserve the uniqueness of values for Primary Key columns when they are migrated to Mendix. The Domain Model offers the possibility to set Validation Rules for the Attributes of each Entity. Via such a Validation Rule, Attributes that correspond to Primary Key columns can be set as Unique. We note that these rules cannot be applied to a combination of Attributes. Thus, the case of multi-column Primary Keys cannot be properly mapped.
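The uniqueness required by such a Validation Rule can be checked against the extracted data before migration. A minimal sketch, where the rows and column names are invented for illustration:

```python
def column_is_unique(rows, column):
    """Return True if every row has a distinct value in `column`,
    i.e. a 'Unique' validation rule on the migrated Attribute would hold."""
    values = [row[column] for row in rows]
    return len(values) == len(set(values))

# Hypothetical rows extracted from an Access table
customers = [
    {"CustomerID": 1, "LastName": "Jansen"},
    {"CustomerID": 2, "LastName": "Jansen"},
    {"CustomerID": 3, "LastName": "de Vries"},
]

print(column_is_unique(customers, "CustomerID"))  # True: safe to mark Unique
print(column_is_unique(customers, "LastName"))    # False: rule would be violated
```

A multi-column Primary Key could be checked the same way over tuples of values, even though, as noted above, the resulting constraint cannot be expressed as a Mendix Validation Rule.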
2.1.5 Relationships & Foreign Keys

In relational databases there is often the need to refer to an object of one Table from a different Table. This is achieved with the use of Foreign Key Relationships. A Foreign Key is a Field that retrieves its values from a Primary Key Field.

Foreign Keys

In Mendix Domain Model, Associations relate Entities in a similar way to how Foreign Key Relationships relate Tables in Access. However, the concepts of Primary and Foreign Keys are not present. In Access Relationships the Foreign Key Field of a Relationship points to the Primary Key Field. In the Domain Model, Associations are not defined over Attributes, but over the Entities themselves. We illustrate the difference in Figures 2.1 and 2.2 below. Consider an Access database holding information about Customers and Orders, with the following schema:

Figure 2.1: Foreign Key Relationship in MS Access

The equivalent Domain Model for that schema in Mendix Platform would be the following:

Figure 2.2: Association in Mendix

We can see that Foreign Key Fields are not necessary in Mendix Domain Model for the implementation of Associations.

Types of Relationships

The following types of Relationships are available in MS Access:

- One-To-Many Relationship: a row in Table A can have many matching rows in Table B, but a row in Table B has only one matching row in Table A.
- One-To-One Relationship: each row in Table A can have only one matching row in Table B. Similarly, each row in Table B can have only one matching row in Table A.
- Many-To-Many Relationship: a row in Table A can have multiple matching rows in Table B, and a row in Table B can have multiple matching rows in Table A. This type of Relationship requires the definition of a third table.

The same types of Associations, with the same semantics, are available in Mendix Domain Model as well. The difference is that the Many-To-Many Association is not implemented with the use of an extra (middle) Entity.

Referential Integrity

Referential Integrity is one of the most important constraints in relational databases. It requires every value of a Foreign Key Field to exist as a value of the Field that it references. This constraint is important for the consistency of data; however, it is not always enforced in Access Relationships. Three options are possible regarding the enforcement of Referential Integrity in Access:

- Referential Integrity is enforced without Cascade Deletes/Updates: users cannot enter an invalid Foreign Key value. Deleting referenced records is prohibited, as is updating a referenced Primary Key value.
- Referential Integrity is enforced with Cascade Deletes/Updates: invalid Foreign Key values are prohibited. The deletion of referenced records is possible and is cascaded to the referencing records as well. The same applies to the update of referenced Primary Key values.
- Referential Integrity is not enforced: users are not prohibited from entering invalid Foreign Key values, deleting referenced records, or updating referenced Primary Key values.

We can see that there are three actions that affect the consistency of data and are affected by the enforcement of Referential Integrity: a) entering Foreign Key values, b) deleting referenced records, and c) updating referenced Primary Key values. The latter is not applicable in the Domain Model, since an Association does not make use of Primary Keys.
Entering (valid or invalid) Foreign Key values is an action equivalent to entering an associated/unassociated object in Mendix, while the deletion of referenced rows in Access is equivalent to the deletion of Mendix objects. In Mendix Platform it is not possible to enter an object with a non-existing reference. It is possible, however, to enter an object with no reference at all. Finally, the behavior on deletion of referenced objects can be determined via an Association's settings, and the options available correspond to the behaviors that exist in Access, as described before.

2.1.6 Indexes

Indexes are used in Access in order to speed up sorting or searching by a particular Field. The definition of single- or multi-column Indexes is possible. In Mendix Domain Model, Indexes can be defined on Attributes, and they can also be single- or multi-column. This results in the creation of an Index in the underlying database when the model is executed.

2.1.7 Underspecification

Underspecification is a common problem in many software systems. An important constraint that is often missing is the explicit definition of Foreign Key Relationships. The database consists of a set of Tables with columns, and there is often no knowledge about how these refer to each other.
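Such undeclared references can still be recovered from the data itself by testing candidate column pairs for inclusion dependencies, the approach taken in Chapter 4. A minimal sketch, using SQLite as a stand-in engine and an invented Customers/Orders schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customers (CustomerID INTEGER, FirstName TEXT);
    CREATE TABLE Orders    (OrderID INTEGER, CustomerID INTEGER);
    INSERT INTO Customers VALUES (1, 'Anna'), (2, 'Ben');
    INSERT INTO Orders    VALUES (10, 1), (11, 2), (12, 1);
""")

def inclusion_holds(con, dep_table, dep_col, ref_table, ref_col):
    """True if every non-NULL value of dep_table.dep_col also occurs in
    ref_table.ref_col -- a necessary condition for a foreign key."""
    q = (f"SELECT COUNT(*) FROM {dep_table} "
         f"WHERE {dep_col} IS NOT NULL "
         f"AND {dep_col} NOT IN (SELECT {ref_col} FROM {ref_table})")
    return con.execute(q).fetchone()[0] == 0

print(inclusion_holds(con, "Orders", "CustomerID", "Customers", "CustomerID"))  # True
print(inclusion_holds(con, "Customers", "CustomerID", "Orders", "OrderID"))     # False
```

Note that an inclusion dependency is only evidence for a foreign key, not proof: small value domains (for instance, Autonumber columns that all start at 1) satisfy the test by accident, which is the source of the false positives mentioned in the abstract.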
Microsoft Access is often used for development by people who do not have extensive knowledge of software or database design. This idea is clearly expressed by the motto used on Microsoft's website for Access: "Create custom apps fast without being a developer." Applications are often built by people who lack the necessary knowledge, and as a result many applications lack an explicit definition of Relationships.

Additionally, Microsoft Access's option for Lookup Columns allows for the creation of relationships that are not explicitly defined. Consider the Access schema that we presented earlier, with the Customers and Orders Tables. The CustomerID column of the Orders Table is a Foreign Key Field to the CustomerID column of the Customers Table. Instead of defining it this way, one can do the following: define the Query Customers Extended as presented in Listing 2.1.

SELECT CustomerID, FirstName, LastName, FirstName + LastName
FROM Customers

Listing 2.1: SQL Query: Customers Extended

Afterwards, define the CustomerID Field of Table Orders as a Lookup Field, retrieving values from the CustomerID column of the Customers Extended Query:

SELECT CustomerID
FROM Customers Extended

Listing 2.2: SQL Query for Lookup Field

This Field will retrieve values from the CustomerID Field of the Customers Table. The reference is explicit in the Query, and it would be possible to retrieve it by parsing both SQL Queries. However, we consider it a form of underspecification because no Relationship is explicitly created between the two Fields: it is not visible in the Relationships window of Access, it cannot be retrieved as such with available tools, and no Referential Integrity can be enforced on it.

2.2 Data Migration

Once the Access database schema is converted to a Mendix Domain Model, the next step is the migration of data.
Data migration is handled by a process referred to as Extract-Transform-Load, which requires three steps [HCHH08]: first, the data are extracted from the legacy database; second, they are transformed in such a way that their structure matches the target schema; finally, they are loaded into the new database. This process relies on the mapping between the source and target database schemas.

Different databases may be used as a back-end in Mendix. The most common case is that an application is deployed on Mendix Cloud, where PostgreSQL is used. It is also possible for a user to configure their own server, such as Oracle or MS SQL Server. Finally, a Mendix application may be deployed locally for users and developers to test, in which case the built-in HSQL database is used.

2.2.1 Different source-target schema

The databases used as back-end are all relational, and the mapping between the Domain Model elements and the physical database elements is the following: for each Entity of the Domain Model, a Table is created in the database, while Entity Attributes correspond to Table Columns. Additionally,
an extra Column is created with a unique identifier for each row. For each Association, a separate Table is created whose Columns are the unique identifiers of the associated Tables. Consider again the following Access database schema:

Figure 2.3: MS Access Schema - Customers & Orders

And the corresponding Domain Model for that schema:

Figure 2.4: Domain Model - Customers & Orders

In the images below we present the tables created for the two Entities, Customer and Order, and the Association between them, in the built-in HSQL database:

Figure 2.5: Customer Table in Mendix built-in HSQL Database

Figure 2.6: Order Table in Mendix built-in HSQL Database

Figure 2.7: Table for the CustomerOrder Association in Mendix built-in HSQL database

The target database that is going to be generated in Mendix will have a different schema than the source database. First, additional Columns are created in every Table that stores an Entity, holding
unique identifier values for each row. Separate extra Tables are created for every Association of the Domain Model. Additionally, since Foreign Key Fields are not necessary for Associations in Mendix Domain Model, they can be omitted. Finally, Attachment and OLE Object Fields do not have equivalent Attributes in Mendix, but they can be mapped to a separate Entity. For Access applications that make use of such Fields, the target database will have extra Tables as a result of mapping those Fields.

2.2.2 Data Quality

Data is a significant resource in all organizations. Activities and important decisions in companies are based on data and on information obtained from data analysis [SSPA+12]. Thus, its quality is critical. Sidi et al. [SSPA+12] present an overview of different dimensions that affect data quality. One such dimension is the consistency of data, which refers to the violation of semantic rules defined over a set of data [BCFM09].

In Subsection 2.1.5 we presented the constraint of Referential Integrity and the impact that its absence can have on the consistency of data. Consider again the example with the Customer and Order Tables: if Referential Integrity is not enforced in the Relationship between them, users may enter Orders with a Customer reference that does not exist. The CustomerID Field of the Customers Table may be updated for a certain referenced row, with no warning from Access and no corresponding update in the Orders that reference it. Additionally, Customers with existing Orders may be deleted, which will result in rows of the Orders Table with an invalid Customer reference.

An additional feature of Microsoft Access also allows for inconsistent data. In the subsection on Field Data Types we presented Lookup Fields, which retrieve values either from a Table, a Query, or a user-defined List. Via a Property available for those Fields, it is possible to specify that the values entered are not limited to the List.
In this case, users may enter values that do not exist in those Fields' data source, and thus refer to a non-existent object. Finally, data quality problems may arise at the instance level as well [SSPA+12], for instance redundant duplicates entered by the users of the software system. For these reasons, data cleaning may be a necessary step before the migration of data.

2.3 Queries Migration

Queries are Microsoft Access Objects that are used to retrieve or operate on data, or to define data structures. Access uses Structured Query Language (SQL) as a query language. Queries can be used in various locations in an Access application: they can be a data source for a Form, a Report, a GUI widget, or a Lookup Field. They may also be used in Macros or called from the application code. A single Query may be used in more than one location.

In Mendix Business Modeler there are two constructs that perform retrieval of and operations on data: Microflows and the Object Query Language (OQL). OQL is a query language similar to SQL that retrieves data from Entity objects, and it can be used as a data source in Mendix Reports only. Microflows can perform a series of actions, and data retrieval is one of them. They can be used as a data source in Mendix Pages, although this use is not very common. Thus, the optimal mapping for Access SQL is not immediately apparent.

The set of statements available in SQL can be divided into the Data Definition Language (DDL) and the Data Manipulation Language (DML). The Data Definition Language consists of the set of statements for defining the database schema. With Access SQL it is possible to define, alter or delete the following constructs: Tables, Users, Indexes, Views, and Procedures. In Mendix Business Modeler, Entities, Indexes and Users are created in the graphical editor. There is
no language or other construct for the automatic creation of these objects. We will therefore explore a mapping for the Queries that constitute the Data Manipulation Language.

2.3.1 Mapping Queries to Microflows

In Mendix Business Modeler, Microflows are constructs that can perform a series of Actions. Those Actions include the retrieval, creation, deletion, and change of Entity objects. Microflows accept parameters, they can perform Actions in Loops, and they may return a result after their execution. They can be executed with a button call from Mendix Pages or from within another Microflow, and they can additionally be used as a data source in widgets of Mendix Pages.

SQL Action Queries

Action Queries perform changes on data. There are three types of Action Queries, for deleting, updating and inserting data respectively. In Table 2.3 we present the three SQL statements that belong to Action Queries and a possible mapping to Mendix Microflows.

DELETE Statement:
  DELETE [table.*] FROM table WHERE criteria
Microflow mapping:
  Action: Retrieve a list of Entity objects, mapping the criteria to an XPath constraint
  Action: Delete the list of Entity objects

INSERT INTO Statement (single-record append query):
  INSERT INTO target [(field1 [, field2 [, ...]])] VALUES (value1 [, value2 [, ...]])
Microflow mapping:
  Action: Create an object of a specific Entity
  Set values for its Attributes
  Commit the object to the database

INSERT INTO Statement (multiple-record append query):
  INSERT INTO target [(field1 [, field2 [, ...]])] [IN externaldatabase] SELECT [source.]field1 [, field2 [, ...]] FROM tableexpression
Microflow mapping:
  The mapping depends on the complexity of the Query. See the discussion later in this chapter.

UPDATE Statement:
  UPDATE table SET newvalue WHERE criteria
Microflow mapping:
  Action: Retrieve a list of Entity objects, mapping the criteria to an XPath constraint
  Action: In a Loop, set new values for the Attributes of each Entity object

Table 2.3: SQL Action Queries - Mendix Microflows

The mapping of an SQL Action Query to a Microflow may or may not be possible, depending on the features that it includes. Specifically, we note two issues: a) mapping the criteria in the WHERE clause to Mendix XPath, and b) mapping Queries with nested SELECT statements, like the multiple-record append query in Table 2.3. We elaborate on those in the next two subsections.
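The retrieve-then-delete mapping of Table 2.3 can be illustrated by simulating the two Microflow Actions over in-memory data. A sketch with invented data, in which Entity objects are reduced to plain dictionaries:

```python
# Simulate  DELETE FROM Orders WHERE Total < 10  as the two Microflow
# Actions of Table 2.3: retrieve with a constraint, then delete the list.
orders = [
    {"OrderID": 1, "Total": 5.0},
    {"OrderID": 2, "Total": 25.0},
    {"OrderID": 3, "Total": 8.5},
]

# Action 1: retrieve a list of Entity objects (WHERE criteria as a constraint)
to_delete = [o for o in orders if o["Total"] < 10]

# Action 2: delete the retrieved list of objects
orders = [o for o in orders if o not in to_delete]

print([o["OrderID"] for o in orders])  # [2]
```

The UPDATE mapping differs only in Action 2, where a Loop sets new Attribute values on each retrieved object instead of deleting it.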
WHERE clause to Mendix XPath

The WHERE clause specifies the criteria that records must satisfy to be included in the Query results. The criteria have the form of an expression, which can be a comparison statement, an expression including Access special operators, a function, or an EXISTS statement. In Microflows such criteria can be expressed with an XPath constraint. Mendix XPath is a query language used in Mendix Modeler in order to retrieve data from Entity objects, their Attributes or Associations. Only the constraint part of an XPath query is used in a Microflow, and we therefore need to investigate whether SQL expressions can be mapped to XPath expressions.

SQL expressions in a WHERE clause may perform comparisons, for example checking whether a Table Column has a certain value. These expressions are linked by logical operators and may include arithmetic operations as well. In Appendix C we present a list of the Comparison, Special, Logical and Arithmetic Operators that are used in SQL and their equivalents in Mendix XPath. For most of them there is an equivalent in Mendix XPath. However, we note mismatches in some cases, like the Like and In operators. Criteria that use such operators cannot be translated to XPath.

An additional difficulty is migrating Queries that use functions in the WHERE clause. Over 100 VBA functions are available in Access SQL. Due to time constraints and the large number of functions, we do not present a detailed mapping for them. However, only 18 functions are available in XPath constraints. Thus, for many of those functions a mapping is not possible.

Finally, a SELECT query may be used in a WHERE clause, either as part of a comparison or in a Subquery. Investigating the mapping of SELECT queries to XPath queries would be the next step in this process. However, an XPath query cannot be included in the XPath constraint part of a Microflow. Since this mapping is not possible, we did not investigate it any further.
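For the operator subset that does have an XPath equivalent, the translation is nearly token-by-token. A toy sketch, in which the operator table abbreviates Appendix C and the constraint syntax shown only approximates Mendix XPath:

```python
# Hypothetical one-to-one operator table; Like and In are rejected
# because, as noted above, they have no Mendix XPath equivalent.
OPERATORS = {
    "=": "=", "<>": "!=", "<": "<", ">": ">",
    "<=": "<=", ">=": ">=", "AND": "and", "OR": "or",
}

def where_to_xpath(tokens):
    """Translate a tokenized WHERE clause into an XPath-style constraint;
    an operator without an equivalent aborts the translation."""
    out = []
    for tok in tokens:
        upper = tok.upper()
        if upper in OPERATORS:
            out.append(OPERATORS[upper])
        elif upper in ("LIKE", "IN"):
            raise ValueError(f"operator {tok} has no XPath equivalent")
        else:
            out.append(tok)  # column name or literal, kept as-is
    return "[" + " ".join(out) + "]"

print(where_to_xpath(["Total", ">", "100", "AND", "City", "=", "'Delft'"]))
# [Total > 100 and City = 'Delft']
```

A real translator would of course parse the expression rather than scan tokens, but the sketch shows why the feasibility of this step reduces to the operator and function coverage of Appendix C.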
Nested SELECT statements

Multi-record append queries use a SELECT statement in order to retrieve the data that is going to be inserted in a certain Table. The possibility to map such a Query depends on the features that it uses. Certain features can be mapped to Microflows, but there are also cases where the mapping is not possible. We discuss the mapping of SELECT statements to Microflows in the next subsection; that analysis applies here as well.

SQL Select Queries

The syntax of a SELECT query is the following:

SELECT [predicate] { * | table.* | [table.]field1 [AS alias1] [, [table.]field2 [AS alias2] [, ...]]}
FROM tableexpression [, ...] [IN externaldatabase]
[WHERE ...]
[GROUP BY ...]
[HAVING ...]
[ORDER BY ...]
[WITH OWNERACCESS OPTION]

Listing 2.3: SQL Select Statement

The arguments in square brackets are optional. A SELECT statement may return columns of one or multiple tables, by performing a JOIN operation in the FROM clause. In Microflows an XPath constraint can be used to retrieve objects for which an Association exists. This is equivalent to an INNER JOIN in SQL. The operations of LEFT and RIGHT JOIN cannot be performed in a Microflow. Additionally, a Microflow can return objects of only one Entity. Thus, for Queries that
return objects from multiple tables a mapping to a Microflow is not possible, even if an INNER JOIN is used.

We can see that a large number of features is available in a SELECT query. The mapping of the WHERE clause to a Mendix XPath constraint was examined in the previous section for Action queries, and it applies here as well. The possibility to migrate this clause depends on the operators and functions used in the Query. The ORDER BY clause is a feature that is available in Microflows. However, the GROUP BY clause provides functionality that cannot be performed in Microflows. Although it is possible to aggregate objects in Mendix Microflows, grouping results by a certain column the way it is performed in SQL is not possible. Finally, a SELECT query may perform operations among Table columns and return those in the results as well. However, a Microflow cannot return additional columns along with an Entity's Attributes. The features [IN externaldatabase] and [WITH OWNERACCESS OPTION] are not relevant to our research, as they concern retrieval of data from external databases and multi-user environments, which are outside of our scope.

Mapping Queries to OQL

Object Query Language (OQL) is a relational query language used in the Mendix Platform in order to retrieve data from the database. OQL is very similar to SQL, and it uses Entity and Association names instead of the actual database table names. It supports SELECT queries only. In Listing 2.3 we presented the syntax for an SQL Select statement. Below we give the syntax of an OQL Select statement, where the arguments in square brackets are optional:

SELECT [DISTINCT] { { entity_name | from_alias }.* | expression [[AS] column_alias] } [, ...n]
FROM { entity_name | (sub_oql_query) } [[AS] from_alias]
    { { INNER | { LEFT | RIGHT | FULL } [OUTER] } JOIN entity_name [[AS] from_alias] ON <constraint> } [, ...n]
[WHERE <constraint>]
[GROUP BY expression [, ...n]]
[HAVING <constraint>]
[ORDER BY { order_by_expression [ASC | DESC] } ]
[LIMIT number] [OFFSET number]

Listing 2.4: OQL Select Statement

The SELECT clause specifies which Entity Attributes are returned by the query. Attributes from multiple Entities may be returned, as well as expressions like in SQL. Aliases and predicates are used in both languages. We give a detailed list of the predicates supported in both languages in Appendix D. We note a mismatch with the [TOP n [PERCENT]] and DISTINCTROW predicates, which are not supported in OQL. The FROM clause specifies the Table(s) or Query(-ies) that the data is retrieved from. Similarly to SQL, OQL can retrieve data from Entities or other OQL subqueries. In the latter case, the subquery
should be included within the main query, as it is not possible to reference an OQL Subquery by name as in SQL. The types of JOINs available are the same in both languages. However, in OQL a JOIN may only be performed on Entities that are associated, whereas in SQL it is possible to perform a JOIN on Fields that do not have a Primary-Foreign Key Relationship.

The WHERE clause of SQL specifies the criteria that records must satisfy in order to be included in the query result. As we already saw, the expression in the WHERE clause may perform a comparison using Table columns, and it may include Subqueries and VBA functions. In Appendix D we present a mapping from SQL to OQL operators. Most of them are the same in both languages. The operators ANY, ALL and SOME that are used with Subqueries in SQL are not available in OQL. The ORDER BY and GROUP BY clauses are used in both languages with the same functionality. The only difference is the set of supported Aggregate functions, which are presented in Appendix D as well.

2.4 Overview of our approach

In this project, we will focus on Schema Migration of Access applications to the Mendix Domain Model. We will create a description of mappings between Access and Mendix schema elements, and based on that we will implement a tool that performs an automated schema migration. Additionally, since many real-world databases lack an explicit definition of Foreign Keys, we will try to recover those constraints by examining the data for inclusion dependencies. The following two chapters describe the details of our approach, the solution given, and its limitations.
Chapter 3

Access2Mendix: Access Schema Migration

After examining the schema elements of the two systems, we implemented a tool, Access2Mendix, for migrating the database schema of MS Access applications to the Mendix Domain Model. In this chapter we present a description of the schema mappings and details of our implementation. Finally, we evaluate our tool and discuss Access2Mendix's possibilities and limitations.

3.1 Solution: Schema Mappings

In section 2.1 we investigated the similarities and differences between the schema elements of the two systems, Access and Mendix Modeler. As Hainaut et al. note in [HCHH08], schema conversion produces a formal description of the mapping between the objects of the source and target system. In this section we present those mappings.

In Table 3.1 we can see the mappings of the main Access schema elements. Tables are mapped to Domain Model Entities, while Table Fields are mapped to either Attributes or an Entity plus an Association. The latter case applies to Fields whose Data Type is OLE Object or Attachment. Relationships are mapped to Associations and Indexes to Mendix Indexes.

MS Access Schema Elements → Mendix Domain Model Elements
- Tables → Entities
- Fields → Attributes OR Entity + Association
- Relationships → Associations
- Indexes → Indexes

Table 3.1: MS Access - Mendix: Mapping of main schema elements

Fields are mapped according to their Data Type, as presented in Table 3.2:
MS Access Field Data Type → Mendix Domain Model Attribute Type
- Text → String
- Memo → String
- Number (Byte) → Integer
- Number (Integer) → Integer
- Number (Long Integer) → Long Integer
- Number (Single) → Float
- Number (Double) → Float
- Number (Replication ID) → Not mapped
- Number (Decimal) → Float
- Date/Time → Date and Time
- Currency → Currency
- Autonumber (Increment) → Autonumber
- Autonumber (Random) → Autonumber
- Autonumber (Replication ID) → Not mapped
- Yes/No → Boolean
- Hyperlink → String
- OLE Object → Entity that inherits from System.FileDocument; 1-1 Association with the Entity mapped to the Table this field belongs to
- Attachment → Entity that inherits from System.FileDocument; 1-* Association with the Entity mapped to the Table this field belongs to
- Calculated → Set Attribute as Calculated and create a Microflow
- Lookup Field (One Column Value List) → Attribute of Type String or Number, according to the actual Data Type; Enumeration assigned to Attribute
- Lookup Field (Multi Column Value List) → Entity with the Columns of the Value List as Attributes; 1-1 Association with the Entity mapped to the Table this field belongs to
- Multi-value Lookup Field (One Column Value List) → Entity with an Attribute holding the List values; *-* Association with the Entity mapped to the Table this field belongs to
- Multi-value Lookup Field (Multi Column Value List) → Entity with the Columns of the Value List as Attributes; *-* Association with the Entity mapped to the Table this field belongs to

Table 3.2: MS Access - Mendix Data Type Mappings
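The scalar rows of Table 3.2 amount to a dictionary from Access type names to Mendix attribute types. The sketch below shows how such a lookup might be encoded; the class and method names are ours, and the rows that produce extra Entities and Associations (OLE Object, Attachment, Lookup Fields) are deliberately left out because they require structural handling rather than a type rename.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the scalar part of Table 3.2. Only rows that map to a plain
// attribute type are included; OLE Object, Attachment and Lookup Fields
// require creating extra Entities/Associations and are handled separately.
public class DataTypeMapping {

    private static final Map<String, String> TYPES = new HashMap<>();
    static {
        TYPES.put("Text", "String");
        TYPES.put("Memo", "String");
        TYPES.put("Byte", "Integer");
        TYPES.put("Integer", "Integer");
        TYPES.put("Long Integer", "Long Integer");
        TYPES.put("Single", "Float");
        TYPES.put("Double", "Float");
        TYPES.put("Decimal", "Float");
        TYPES.put("Date/Time", "Date and Time");
        TYPES.put("Currency", "Currency");
        TYPES.put("Autonumber", "Autonumber");
        TYPES.put("Yes/No", "Boolean");
        TYPES.put("Hyperlink", "String");
        // Replication IDs are intentionally absent: they are not migrated.
    }

    /** Returns the Mendix attribute type, or null for unmapped/structural types. */
    public static String toMendix(String accessType) {
        return TYPES.get(accessType);
    }
}
```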
Fields of type Replication ID are not migrated. If a Microsoft Access application is part of a replica set, only one copy of the replica set will be used for the migration. Since replication is not supported in Mendix, Fields of that type would not be useful, and thus we chose to omit them. We decided to map Random Autonumbers from MS Access to incremental Autonumbers in Mendix. The random Autonumber values could be replaced with incremental ones during data migration. Extra care should be taken if those Fields are part of a Foreign Key Relationship; in that case, one should also correctly replace and migrate the referenced values.

Regarding the mapping of OLE Object and Attachment Fields to a separate Entity and Association, we also have to correctly map the Delete Behavior of that Association. Consider a Table Customers that includes an Attachment Field named Picture. When a row is deleted from the Customers Table in Access, the contents of Picture are deleted as well. In the migrated Domain Model, Customer and Picture are represented by separate Entities. In order to simulate the delete behavior of a row in Access, we specified the Delete Behavior of such an Association as "On Delete of a Customer object, delete Picture as well" and "On Delete of a Picture object, keep Customer object(s)".

Calculated Fields are mapped to an Attribute in Mendix whose Type is the actual Data Type of the Field. Additionally, they are set as Calculated, and a Microflow is created and assigned to the Attribute. This Microflow performs the Action of creating a new Variable. The initial value of this Variable is set to the expression from Access, which is then returned to the Attribute. In order for those Fields to be migrated correctly, the Expression Language of Access needs to be mapped to Microflow Expressions. The two languages have some differences in the operators and functions they use. Due to time limitations, we did not perform that mapping.
We performed a mapping from Access Field Names to Attribute Names and transferred the rest of the expression as-is. As a result, some Microflows may contain errors that will have to be manually corrected, where possible. Finally, we were not able to map Multi-value Lookup Fields because of limitations in the API we used. In order to correctly migrate those fields to Mendix, we need to know what their data source is, the number of columns of the data source, and the values themselves. Our API did not provide methods for retrieving this information for Multi-value Fields. Thus, their migration is a task that must be done manually by the user, according to Table 3.2.

Field and Table Properties in Access determine their characteristics or behavior. As we discussed earlier, certain properties are related to the Data Model while others determine GUI-related settings. We migrated only the properties that are relevant to the Data Model, and we present those in the following Table:

MS Access Field Property → Mendix Property
- Default Value → Default Value
- Required → Validation Rule of Required for the relevant Attribute
- Field Size → Max length
- Expression (for Calculated Fields) → Initial Value in the variable of the Microflow

Table 3.3: MS Access - Mendix: Mapping of Field Properties

In order to preserve the uniqueness of values for Attributes that correspond to Primary Key columns, we implemented in Mendix a Validation Rule of Unique for them. This was not possible for multi-column Primary Keys, as Mendix Validation Rules can be applied to a single Attribute only.
MS Access Primary Keys → Mendix
- Unary Primary Keys → Validation Rule of Unique for the relevant Attribute
- Multi-column Primary Keys → Not mapped

Table 3.4: MS Access - Mendix: Mapping of Primary Keys

Relationships are mapped to Associations according to their type, as presented in Table 3.5.

MS Access Relationships → Mendix Associations
- 1-1 Relationship → 1-1 Association, Foreign Key Fields omitted
- 1-* Relationship → 1-* Association, Foreign Key Fields omitted

Table 3.5: MS Access - Mendix: Mapping of Relationships

A many-to-many Relationship is implemented in Access with the use of a third (middle) Table. Each of the two related Tables has a one-to-many relationship with the middle Table. The Primary Key of the middle Table consists of two fields, each of which is a Foreign Key to one related Table. A many-to-many Association in Mendix does not require the use of a middle Entity, so one could consider omitting this Table. However, apart from the Foreign Keys to the related Tables, this middle Table may contain extra Fields. Thus, we decided to migrate it.

Finally, we specified the Delete Behavior of Mendix Associations based on the Referential Integrity settings of Access Relationships. The mappings are presented in Table 3.6.

MS Access Referential Integrity → Mendix Associations Delete Behavior
- Referential Integrity not enforced → On Delete of Object A, keep Object B
- Referential Integrity enforced, with Cascade Deletes → On Delete of Object A, delete Object B
- Referential Integrity enforced, without Cascade Deletes → Delete Object A only if it is not associated with B Objects

Table 3.6: MS Access - Mendix: Mapping of Delete Behavior

3.2 Implementation

Access2Mendix was implemented using Java and C#. The implementation was made part of Mendix Modeler, which allowed us to reuse existing C# code for the generation of the Domain Model. Java was used for the retrieval of the MS Access schema, since the Jackcess library provided an easy-to-understand and easy-to-use interface.
We describe in this section the algorithms we used for migrating the Access schema elements, as well as details of our implementation.

3.2.1 Algorithms

The main constructs of an Access schema are Tables, Fields, Relationships and Indexes. Our tool iterates through all Tables and their Fields in order to generate the migrated Domain Model in Mendix.
Afterwards, it iterates through the Relationships in order to create the necessary Associations. The generation of the Domain Model is described in Algorithms 1 and 2.

Data: Tables, Fields and Indexes of Access Schema
Result: Entities, Attributes and Indexes in Mendix Domain Model
foreach Table t do
    create Entity e;
    foreach Field f in Table t do
        if f is OLE Object then
            create Entity entFile that inherits from System.FileDocument;
            create 1-1 Association between entFile and e;
        else if f is Attachment then
            create Entity entFile that inherits from System.FileDocument;
            create 1-* Association between entFile and e;
        else
            create Attribute attr;
            add attr to e;
            if f is Calculated then
                create Microflow m;
                assign m to attr;
            end
        end
    end
    foreach Index in Table t do
        create Index i;
    end
end
Algorithm 1: Algorithm to create Domain Model Entities, Attributes and Indexes

Data: Relationships of Access Schema
Result: Associations in Mendix Domain Model
foreach Relationship r do
    if Type of r is 1-1 then
        create 1-1 Association;
    else
        create 1-* Association;
    end
    if Referential Integrity is enforced with Cascade Deletes then
        On Delete of Object A, delete Object B;
    else if Referential Integrity is enforced without Cascade Deletes then
        Delete Object A only if it is not associated with B Objects;
    else
        On Delete of Object A, keep Object B;
    end
end
Algorithm 2: Algorithm to create Domain Model Associations
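The referential-integrity branch of Algorithm 2 can be expressed as a small pure function. The sketch below uses our own naming; the enum values paraphrase the three Mendix delete behaviors of Table 3.6.

```java
// Sketch of Algorithm 2's referential-integrity branch. The enum values
// paraphrase the three delete behaviors of Table 3.6; the names are ours.
public class DeleteBehaviorMapping {

    enum DeleteBehavior {
        KEEP,                      // On Delete of Object A, keep Object B
        DELETE,                    // On Delete of Object A, delete Object B
        DELETE_IF_NOT_ASSOCIATED   // Delete A only if not associated with B Objects
    }

    static DeleteBehavior map(boolean integrityEnforced, boolean cascadeDeletes) {
        if (!integrityEnforced) {
            return DeleteBehavior.KEEP;
        }
        return cascadeDeletes
                ? DeleteBehavior.DELETE
                : DeleteBehavior.DELETE_IF_NOT_ASSOCIATED;
    }
}
```

Keeping this decision in one function makes the mapping of Table 3.6 easy to test independently of the Modeler API.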
3.2.2 Retrieval of MS Access Schema

Jackcess is a Java library for reading from and writing to MS Access databases. This part of the tool reads information about MS Access schema elements and returns it in JSON format. The program consists of 245 lines of code and contains 14 methods for retrieving MS Access schema elements, with an average of 9 lines of code each.

3.2.3 Generation of Domain Model

Creating our tool as part of Mendix Modeler allowed us to reuse existing code for the generation of the Domain Model. The language used in Mendix Modeler is C#. At the initialization of the tool, the user is prompted to select an MS Access application. Then, the Java process is called for the retrieval of the application's database schema. The schema information is returned by the Java process in JSON format, and deserialized into a collection of C# classes. After the deserialization, the Domain Model is generated using Algorithms 1 and 2. The C# implementation contains 430 lines of code. The JSON.NET framework was used for the deserialization of the JSON string holding the Access schema. A custom converter was created in order to deserialize Fields of different data types to separate subclasses. 33 methods were used for mapping Access schema elements to Domain Model elements, with an average of 5 lines of code each.

3.3 Evaluation

The goal of Access2Mendix was to achieve an automated, lossless migration at the schema level. In this section, we evaluate the results of this attempt. In order to evaluate our tool we performed a migration of two MS Access applications to the Mendix Platform. The first application is a sample application from Microsoft's website, while the second one is a real-world application. Afterwards, we performed a manual inspection of the source and target schema elements and checked whether the results are as intended.

3.3.1 First Migration: Northwind Database

First, we tested Access2Mendix with a sample MS Access application from Microsoft's website, Northwind.
Northwind is a database that holds information about Sales, Products, Suppliers, Customers etc. Figures 3.1 and 3.2 show the Access database schema and the Domain Model that is generated with our migration tool.
Figure 3.1: Northwind - MS Access Database Schema

Figure 3.2: Northwind - Migrated Mendix Domain Model

The Access database consists of 20 Tables with 177 Fields in total, and 21 Relationships. The migrated Domain Model consists of 25 Entities with 150 Attributes and 26 Associations. The Domain Model thus contains 5 more Entities and Associations than the Tables and Relationships in the Access database. Those extra Entities hold a File Document and are the result of mapping Attachment Fields from Access; this explains the 5 extra Associations as well. We also noted a difference in the number of Fields and Attributes between the two systems. We performed a manual inspection of each Entity's Attributes to determine the reason, and we present our results in Table 3.7.
Access Table (No. of Fields) → Mendix Entity (No. of Attributes): Difference
- Customers (18) → Customers (17): 1 Attachment Field
- Employee Privileges (2) → Employee Privileges (0): 2 Foreign Key Fields
- Employees (18) → Employees (17): 1 Attachment Field
- Inventory Transaction Types (2) → Inventory Transaction Types (2)
- Inventory Transactions (9) → Inventory Transactions: Foreign Key Fields
- Invoices (7) → Invoices: 1 Foreign Key Field
- Order Details (10) → Order Details (7): 3 Foreign Key Fields
- Order Details Status (2) → Order Details Status (2)
- Orders (20) → Orders (15): Foreign Key Fields
- Orders Status (2) → Orders Status (2)
- Orders Tax Status (2) → Orders Tax Status (2)
- Privileges (2) → Privileges (2)
- Products (14) → Products (12): 1 Attachment Field, 1 Multi-Value Field (not mapped)
- Purchase Order Details (8) → Purchase Order Details: 3 Foreign Key Fields
- Purchase Order Status (2) → Purchase Order Status (2)
- Purchase Orders (16) → Purchase Orders: 3 Foreign Key Fields
- Sales Reports (5) → Sales Reports (5)
- Shippers (18) → Shippers (17): 1 Attachment Field
- Strings (2) → Strings (2)
- Suppliers (18) → Suppliers (17): 1 Attachment Field

Table 3.7: Northwind - Comparison of Fields and Attributes

We can see that most of the differences in the number of Fields and Attributes are due to the removal of Foreign Key Fields. There were also a number of Attachment Fields in the source database that were mapped to an extra Entity in Mendix instead of an Attribute. Additionally, one Multi-value Lookup Field was not mapped at all due to Jackcess limitations, as explained in 3.1. The data types of the Fields have also been migrated correctly, as our manual inspection showed. All the Relationships in the source database were 1-*, and those were mapped to 1-* Associations in the Domain Model. We can also see that the Delete Behavior of associated records was migrated. This is indicated by the different Association colors in Figure 3.2: the red color indicates Cascade Deletes behavior, while the blue color indicates Delete only Records that are not associated.
We also observed that Validation Rules of Unique were created for those Attributes that are unary Primary Keys in Access Tables. Additionally, there were 3 Lookup Fields with user-defined Value Lists, which resulted in the generation of 3 Enumerations in the Domain Model. Finally, Validation Rules of Required were created for the Required Fields of the Access database.
3.3.2 Second Migration: APKNoP Database

We performed the second migration with a real-world database containing information about basketball games. Figures 3.3 and 3.4 show the Access database schema and the Domain Model that is generated with Access2Mendix.

Figure 3.3: APKNoP - MS Access Database Schema

Figure 3.4: APKNoP - Migrated Mendix Domain Model
APKNoP is an application that consists of 13 Tables with 190 Fields and 15 Relationships. The migrated Domain Model consists of 15 Entities with 174 Attributes and 17 Associations. There are 2 more Entities and Associations than the Tables and Relationships in the MS Access schema, which is the result of mapping 2 Attachment Fields. As in the previous migration, we created a list of the migrated Tables and the numbers of Fields, to examine the difference in the number of Fields/Attributes between the two schemas.

Access Table (No. of Fields) → Mendix Entity (No. of Attributes): Difference
- Categories (1) → Categories (1)
- Courts (1) → Courts (1)
- Fixtures (18) → Fixtures (9): 9 Foreign Key Fields
- Groups (1) → Groups (1)
- Players (13) → Players (11): 1 Foreign Key Field, 1 Attachment Field
- Referees (6) → Referees (6)
- Scorers (6) → Scorers (6)
- Switchboard Items (5) → Switchboard Items (5)
- tblawayteams (1) → tblawayteams (1)
- tbldataentrya (65) → tbldataentrya (63): 2 Foreign Key Fields
- tbldataentryh (65) → tbldataentryh (63): 2 Foreign Key Fields
- tblhometeams (2) → tblhometeams (1): 1 Attachment Field
- Timekeepers (6) → Timekeepers (6)

Table 3.8: APKNoP - Comparison of Fields and Attributes

As we see in Table 3.8, the difference in the number of Fields and Attributes is due either to the removal of Foreign Key Fields or to the mapping of Attachment Fields to Entities. All the Relationships in the Access database were 1-*, and those were mapped to 1-* Associations in the Domain Model. We can also see that the Delete Behavior of associated records was mapped correctly, which is indicated by the different Association colors in Figure 3.4. The red color indicates Cascade Deletes, while the black color indicates that Referential Integrity is not enforced. We also observed that Validation Rules of Unique were created for those Attributes that are unary Primary Keys in Access Tables. One Table contained a multi-column Primary Key, which remains unmapped in the Domain Model. Additionally, this Access database contained a large number of Calculated Fields.
For those Fields, Microflows were created in the Domain Model. An example of such a Microflow can be seen in Figure 3.5.
Figure 3.5: Example of a migrated Microflow

Finally, certain Microflows contained errors, since we did not implement a mapping from the Access Expression Language to Mendix Expressions. Those errors will have to be corrected manually where a mapping is possible.

3.3.3 Summary

The goal of implementing Access2Mendix was to achieve an automated, lossless migration at the schema level. We performed two Access database migrations to Mendix, and then checked whether the Access schema elements are migrated as intended, according to our mappings in section 3.1. We observed that our tool correctly maps the main Access schema elements, namely Tables, Fields, Relationships and Indexes. Our two test databases contained almost all features presented in 3.1; the only features not present were Fields of type OLE Object and Replication ID. A possible threat to validity is that we used manual inspection to evaluate the migration; due to time constraints, we were unable to write automated tests.

Certain limitations also exist in our tool. Multi-column Primary Keys cannot be mapped because Mendix does not allow Validation Rules to be applied to a combination of Attributes. Thus, the uniqueness of the combination of values for such fields is not guaranteed in the migrated application. Additionally, the Microflows created for Calculated Fields may contain errors if the functions or operators used in them have a different name in Mendix or are not available there. In those cases, the errors will have to be manually corrected by the user where a mapping is possible. Finally, Multi-value Lookup Fields were not migrated due to the API's inability to retrieve the information we needed. Thus, their addition to the migrated Domain Model is a task that must be done manually as well.
Chapter 4

Foreign Key Discovery

Foreign Key Relationships are among the most important constraints in relational databases. However, in many existing databases they are not explicitly defined. Their discovery is of great importance in migration projects: without knowledge of those relationships, the source (and target) system consists of a set of Tables (Entities) with no indication of how they refer to each other. Thus, their discovery is a necessary step for a successful migration.

4.1 Approach: Inclusion Dependencies

The approach we followed in this project was to examine the data for inclusion dependencies. Foreign Keys are essentially inclusion dependencies (INDs): consider two attributes a and b, where a is a Foreign Key referencing b. The set of values of the dependent attribute (a) is completely contained in the set of values of the referenced attribute (b) [BLN06]. Thus, for recovering Foreign Key constraints it is logical to check the data for set inclusion. It should be noted, though, that a set inclusion does not imply a Foreign Key constraint, but rather the other way around [BLN06]. Therefore, the inclusion dependencies that are discovered have to be confirmed as Foreign Key relationships by a user with domain knowledge. Inclusion dependencies may be unary (between single attributes) or n-ary, n > 1 (between multiple attributes). Unary inclusion dependencies are the most common type of INDs, and the only type we found in the Access databases we encountered. Thus, we focused on the discovery of unary INDs.

4.1.1 Approximate Inclusion Dependencies

When constraints are not defined or enforced, databases usually contain inconsistent data [DMP05]. To account for those inconsistencies, the notion of approximate inclusion dependencies has been proposed [MLP09].
The idea is to define an error measure, which represents the proportion of distinct values one has to remove from the dependent attribute so that the inclusion dependency is satisfied [MLP09]. Let n_ab be the number of values contained in both sets, and n_a the number of values included in attribute a. Then the error measure g can be defined as follows:

    g = 1 - n_ab / n_a
4.1.2 Brute Force Approach

We followed what Bauckmann et al. call a brute force approach [BLN06] for testing the IND candidates. First, we separate attributes by Data Type. For each Data Type, we build IND candidates by iterating over candidate dependent and candidate referenced attributes. IND candidates are tested immediately after their creation.

4.1.3 Selection of Dependent and Referenced Attributes

We consider as candidate dependent attributes all attributes except those of type Memo, OLE, Attachment and Boolean. Attributes of the first three Data Types cannot be used as a Primary Key in Access, and thus cannot be part of a Foreign Key relationship. For attributes of type Boolean it is possible; however, since only two values exist, it does not make sense. We consider as candidate referenced attributes all attributes (except for the types mentioned above) with unique values.

4.1.4 Test of a Single IND Candidate

Consider two attributes a and b. We retrieve the values of both from the database and store them in sets. Afterwards, we scan through both sets and examine whether each value of a is contained in b. While scanning, we keep track of the number of values that are included in both sets in order to compute the error measure g. If the error measure is less than the threshold we consider, then we have a satisfied inclusion dependency. In Algorithm 3 we present the algorithm for testing a single IND candidate.

Data: dependentValues, referencedValues, threshold
Result: Is the IND satisfied?
numberOfCommonValues = 0;
foreach Value v in dependentValues do
    if referencedValues contains v then
        numberOfCommonValues++;
    end
end
g = 1 - numberOfCommonValues / sizeof(dependentValues);
if g < threshold then
    return true;
end
return false;
Algorithm 3: Algorithm to test a single IND Candidate

4.2 Implementation

Our implementation was done in Java, using the Jackcess library for the extraction of data from the Access database.
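Algorithm 3 translates almost directly to Java. The sketch below uses our own names and in-memory string sets; the thesis implementation additionally extracts those value sets from the database via Jackcess.

```java
import java.util.Set;

// Sketch of Algorithm 3: test one IND candidate via the error measure
// g = 1 - n_ab / n_a, computed over the dependent attribute's value set.
public class IndTest {

    /** Error measure g for dependent attribute a against referenced attribute b. */
    static double errorMeasure(Set<String> dependentValues, Set<String> referencedValues) {
        int common = 0;
        for (String v : dependentValues) {
            if (referencedValues.contains(v)) {
                common++; // value appears in both sets (n_ab)
            }
        }
        return 1.0 - (double) common / dependentValues.size();
    }

    /** The IND candidate is satisfied when g stays below the chosen threshold. */
    static boolean isSatisfied(Set<String> dependentValues, Set<String> referencedValues,
                               double threshold) {
        return errorMeasure(dependentValues, referencedValues) < threshold;
    }
}
```

With the thesis's threshold of 0.10, an exact inclusion (g = 0) is satisfied, while a dependent set with half of its values missing from the referenced set (g = 0.5) is rejected.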
We decided to keep the data sets in main memory rather than store them in files, both so that data is not stored outside of the database and in order to avoid unnecessary I/O costs. Our two test databases were quite small (6MB and 12MB respectively), so main memory was sufficient to hold the data. For larger applications where main memory is not enough, a bigger server might be needed. Our implementation contains 244 lines of code.
4.3 Evaluation

In order to evaluate our tool, we tested it with the same databases that we used in the evaluation of the migration tool, Northwind and APKNoP. Both of them have predefined Relationships, which we used as a gold standard to check our results. We considered a threshold of 0.10 (10%) for the error measure when checking for approximate inclusion dependencies. Tests were performed on a Windows 7 computer with 4GB RAM.

4.3.1 IND Discovery: Northwind Database

The first database is Northwind, which contains 20 Tables, with 178 Fields in total and 21 defined Relationships. Our results are the following:

Total number of discovered inclusion dependencies: 93
- Relationships defined in database: 18
- Lookup Query Columns: 2
- Likely PK-FK based on column names: 2
- False positives: 71

Table 4.1: Northwind: Discovered Inclusion Dependencies

Our tool discovered 93 inclusion dependencies in total. Out of those, 18 are Relationships that are explicitly defined in the database. 2 of them are a case of Lookup Query Columns, as described earlier: the Lookup Column retrieves values from the column of a Query, but no relationship is explicitly defined between the tables they belong to. As we manually inspected our results, we also assumed that 2 of the discovered inclusion dependencies are very likely Foreign Key constraints, based on the column names. For example, the values of the column [Inventory ID] of the table Order Details are included in the column [Transaction ID] of the table Inventory Transactions, but no relationship is explicitly defined between them in the database. However, we observed that there were other Tables in the database with a column named [Inventory ID] which is a Foreign Key to [Transaction ID]. Thus, we assumed that this is very likely the case here as well. Our Foreign Key recovery tool also discovered a large number of false positives, namely 71: inclusion dependencies that do not correspond to a Foreign Key constraint.
We investigated the reasons for this large number of false positives further. We observed that in this database most of the tables use Autonumbers as Primary Keys, i.e. semantics-free integers that start from 1. A few tables had Primary Key fields of type Byte and contained only a few values, which were also incremental, starting from 1. Consequently:

- The Autonumber Primary Key values of some tables are included in the Primary Key values of other tables. For instance, an Autonumber Primary Key Column containing the values 1, 2, 3 is included in any other Autonumber Primary Key whose table contains more than 3 rows.
- Some Foreign Key Columns are included in the Autonumber Primary Keys of more than one table. This is especially the case if the Foreign Key column contains only one or a few values. Then, this column is included in all Primary Key columns of the database that contain the same values.

Finally, 3 relationships that are explicitly defined in the database were not detected by our tool. The reason is that the Foreign Key columns of those Relationships did not contain any data.
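The first effect is easy to reproduce: any short incremental key sequence is contained in any longer one, so an IND is reported even though no Foreign Key exists. A small illustration of ours follows (the table names are hypothetical, not taken from Northwind's actual false positives):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustration (ours, not from the thesis tool) of why incremental
// Autonumber Primary Keys produce spurious inclusion dependencies:
// the key set {1..n} of a small table is trivially a subset of the
// key set {1..m} of any larger table, m >= n.
public class AutonumberFalsePositive {

    /** Simulates an incremental Autonumber Primary Key column: 1, 2, ..., rows. */
    static Set<Integer> autonumberKeys(int rows) {
        Set<Integer> keys = new LinkedHashSet<>();
        for (int i = 1; i <= rows; i++) {
            keys.add(i);
        }
        return keys;
    }

    public static void main(String[] args) {
        Set<Integer> smallTablePk = autonumberKeys(3);   // hypothetical small table
        Set<Integer> largeTablePk = autonumberKeys(200); // hypothetical large table
        // Every value of the small key set appears in the large one, so a
        // (false) IND smallTable.ID <= largeTable.ID is reported.
        System.out.println(largeTablePk.containsAll(smallTablePk)); // prints true
    }
}
```

This is why any discovery approach based purely on set inclusion needs a confirmation step by someone with domain knowledge when Autonumber keys dominate the schema.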
4.3.2 IND Discovery: AKPNoP Database

The second database that we used for the evaluation has 13 Tables with 190 Fields in total and 15 defined Relationships. Table 4.2 shows the results for the discovered inclusion dependencies:

    Total number of discovered inclusion dependencies    77
    Relationships defined in database                    14
    False positives                                      63

    Table 4.2: AKPNoP: Discovered Inclusion Dependencies

Out of 77 discovered inclusion dependencies, 14 are predefined Relationships in the database. The tool again discovered a large number of false positives. The reason again lies in the use of Autonumbers as Primary Keys for many Tables: some Autonumber Primary Key columns are fully included in other Primary Keys, while some Foreign Key columns are included in the Autonumber Primary Keys of multiple tables. Taking a closer look at the data, we observed an additional cause for some of the false positives in this particular database: duplication of data. The application holds information about basketball teams, and the same teams are stored in two separate tables. As a result, the columns holding the team names include each other, and Foreign Keys referring to a team appear to be included in the columns of both tables. Finally, we noticed that 1 defined Relationship is not discovered by our tool, because its Foreign Key column contains a large amount of inconsistent data.

4.3.3 Summary

In this chapter we described how we try to recover Foreign Key Relationships from an Access database. We checked the data for approximate inclusion dependencies and tested our tool with two databases. Our results show that Foreign Key constraints can be successfully retrieved from the data. However, a large number of false positives (inclusion dependencies that do not correspond to a Foreign Key constraint) is also detected in databases that make extensive use of Autonumber Fields. For such databases, a different method of detecting Foreign Key relationships should be devised.
Finally, our solution was tested with databases of small size (<15MB). These data sets fit into main memory, and our implementation runs within seconds. For larger databases the execution might be slower, or the data sets may be too large to fit into main memory; in such cases a bigger server might be needed.
Chapter 5
Discussion

In this research we attempted to automatically migrate Microsoft Access databases to the Mendix Platform and to recover missing Foreign Key constraints. In this chapter we discuss the results of our attempt in relation to our research questions.

5.1 Schema Migration

In our project we attempted to automatically migrate an Access database schema to a Mendix Domain Model. Our research questions regarding schema migration were which Access schema features can be migrated to Mendix, which cannot be migrated, and where human intervention is needed. Andrade et al. note that in migration projects, the width of the semantic gap between the two systems determines the feasibility and complexity of the conversion [AGA+06]. Access and the Mendix Business Modeler have similar constructs available, and in most cases the mapping is straightforward and one-to-one. The semantic gap between the two software systems is not very wide, and the main elements of an Access schema, namely the Tables, Fields, Relationships and Indexes, can be successfully migrated to a Mendix Domain Model. For most mismatches that we found, a workaround is available. For example, Hyperlinks can be mapped to String fields, and their ability to act as Hyperlinks can be restored with the use of a special widget in Mendix Pages. Random Autonumbers can be mapped to incremental ones, with extra care during the migration of data if such fields are part of a Foreign Key Relationship, while OLE Object and Attachment Fields can be mapped to separate Entities. Other mismatches can be resolved by extending the Mendix Business Modeler's constructs and languages. The Access Expression Language is used in Access Calculated Fields; Calculated Fields in Mendix make use of Microflow Expressions, and certain Access operators and functions are not available there. As a result, the ability to migrate those Fields correctly depends on the features used in the expression.
Additionally, Mendix Validation Rules could be extended for the correct mapping of multi-column Primary Keys; right now, the uniqueness of the values of such columns cannot be preserved in the migrated application. An extension of Mendix Validation Rules would also enable a mapping of Access Validation Rules to Mendix Validation Rules, which are not migrated now. Since a mapping of the Access Expression Language to Microflow Expressions was not performed, some migrated Microflows may contain errors that have to be corrected manually by the user. Human intervention is also needed for the addition of Multi-value Lookup Fields to the migrated Domain Model, because the API we used did not allow for the retrieval of the necessary information for those Fields.
5.2 Foreign Key Recovery

Recovering missing Foreign Key constraints is a necessary step for the migration of Access Relationships. Our approach was to examine the data for approximate inclusion dependencies, and our results showed that Relationships can be successfully retrieved. The error measure we took into account was 10%. However, in databases that make extensive use of Autonumber Fields, a large number of false positive results is also discovered, i.e. inclusion dependencies that do not correspond to Foreign Key Relationships. For such cases, a different approach could be devised, or a method for pruning the false positives.

5.3 Queries Migration

We explored in section 2.3 of our Problem Analysis the possibility of migrating Access SQL Queries to the Mendix Platform. We presented two similar constructs in the Mendix Business Modeler: OQL and Microflows. OQL is a language that is very similar to SQL. Although there are certain mismatches between the two languages, most SQL features are supported in OQL as well. However, OQL is used only as a data source in Mendix Reports. Reports are not very widely used in Mendix; usually Pages are used for displaying the data from Domain Model Entities. Additionally, Mendix is a model-driven development tool, and as such, models are preferable and more understandable to users with non-extensive programming knowledge. Thus, a migrated application that consists of a large number of OQL queries is not desirable. Microflows are process models that express the logic of the application. However, they do not support a number of features that are available in SQL queries, for instance the GROUP BY functionality. Additionally, we observed in our test applications that many Queries return columns of multiple Entities and act as a data source for GUI widgets. With Microflows, it is possible to return objects of only one Entity. SQL Queries with, e.g., a simple WHERE clause could be mapped to a Microflow.
In these cases though, a Microflow may not even be necessary, because a simple XPath constraint can be defined in Mendix Pages. We can see that an automated migration of SQL Queries may not achieve the desired results. A migrated application with a large number of OQL Queries may be harder for the user to understand, and those queries are not useful if there is no intention of using Mendix Reports. Microflows, on the other hand, do not support enough of the features available in SQL to enable an automated migration. For those reasons, manual inspection of the Queries in an Access application might be the better option for finding out their function and, consequently, for finding a way to implement this functionality in Mendix.
Chapter 6
Related Work

Software migration is the substitution of a modern software system for a legacy one. It is essentially a process that moves an existing operational system to a new platform, retaining the legacy system's functionality [BLWG99]. Software migration encompasses a large number of areas of software engineering. Bisbal et al. provide a classification of migration issues in [BLWG99]; some of those are legacy system understanding, schema mapping, data conversion, data cleaning, and target system architecture and development. Due to space limitations and the scope of this project, in this section we will focus on literature related to software migration methodologies, database conversion strategies, and foreign key recovery.

Software Migration Methodologies

In many software migration projects the legacy systems are mission-critical: the business cannot operate if the system stops working. Additionally, the scale and complexity of such projects can be considerable. Several migration methodologies have been proposed that determine the migration steps (the order in which different system components are migrated) and address the issue of interoperability of the source and target system during the migration process. The Big Bang approach, also referred to as Cold Turkey, involves redeveloping the legacy system from scratch using modern tools and technologies [BLW+97]. It is a naive approach that can be quite risky, because specifications rarely exist, the development of a new system may take years to finish, and requirements may change in the meantime [BS93]. As Brodie and Stonebraker mention, this strategy has been tried and has failed in many organizations, so other strategies have been proposed. Most of them include the use of a gateway, a software module that is introduced between two software components to mediate between them [BS93].
The Database First (Forward Migration) method involves the initial migration of the legacy data to a modern database system, followed by the incremental migration of the legacy applications and interfaces [BLW+97][WLB+97]. The legacy system operates in parallel with the target system during the development of applications and interfaces, through a gateway that enables the legacy applications to access the target migrated database [WLB+97]. Using the Database Last (Reverse Migration) approach, legacy applications are gradually migrated to the target platform, while the database remains on the original platform [WLB+97]. A gateway enables the target applications to access the legacy data management environment, and the legacy database is the last part of the migration process. In the Chicken Little approach, the legacy system is migrated in small incremental steps, while the legacy and target system operate in parallel throughout the migration [BS93]. Initially the target system is
very small, but as the migration progresses it eventually performs all the functionality of the legacy system. During the migration process, data is duplicated across the legacy and target database. Wu et al. note in [WLB+97] that the need for systems to interoperate throughout the migration process via gateways adds to the complexity of the migration. Additionally, in the Chicken Little approach, maintaining data consistency across two possibly heterogeneous systems can be a complex problem [BLWG99]. They propose a different migration methodology, the Butterfly Methodology, which eliminates the need for the source and target system to interoperate. Using this approach, a sample target database is built and all components are migrated except for the data, while the legacy system is still in production. The final step of the migration is the gradual migration of data and then the cut-over to the new system.

Database Conversion Strategies

Hainaut et al. propose in [HCHH08] a migration reference model that identifies six migration strategies. They consider the migration of a software system that consists of a database and a program, and ignore other components like the user interface. Along these two dimensions they suggest a number of migration strategies and discuss each one. With regard to the database dimension, two extreme strategies are considered: Physical Conversion and Conceptual Conversion. Physical Conversion consists of translating each construct of the source database to the closest construct in the target database. It focuses only on explicit constructs, i.e. those that are declared in the DDL code of the system, while the semantics of the data are ignored [HCHH08]. Conceptual Conversion, on the other hand, attempts to recover the precise semantic description of the source database. First, the physical schema of the source database is extracted, which includes only the explicit constructs.
As a next step, the source code and the data are examined in order to recover implicit constructs and detect constraints. A conceptual schema is created, which is afterwards used for developing the target database schema.

Foreign Key Recovery

Several approaches have been proposed in the literature for detecting foreign key constraints. Checking the data for inclusion dependencies is one of them; it has been researched by Bauckmann et al. [BLN06][BLNT07] and De Marchi et al. [MLP09]. The work of Bauckmann et al. [BLN06] focuses on unary inclusion dependencies, i.e. inclusion dependencies between single attributes. Their solution is the following: considering two attributes a and b, retrieve their values and sort them with duplicate removal. Then scan linearly through both sets, starting from the smallest items, to check whether the values of attribute a are included in the set of values of attribute b. The sorted sets allow for early interruption of the comparison process. They propose two approaches for checking IND candidates. With the Brute Force approach, IND candidates are created by iterating over all dependent and referenced attributes and are tested directly after their creation. In the Single Pass approach, value sets are read only once and IND candidates are tested in parallel. De Marchi et al. [MLP09] propose a different way to discover inclusion dependencies. For each data type, they build a binary relation that associates each value of the database with the attributes that have this value. An inclusion dependency between A and B is satisfied if for every value v such that (v, A) belongs to the binary relation, (v, B) belongs to the binary relation as well. The authors additionally address the problem of n-ary IND inference, i.e. inclusion dependencies between sequences of attributes. An algorithm is proposed that generates candidate INDs of size i + 1 from satisfied INDs of size i.
Finally, since inconsistent data often exist in real-world databases, they also consider approximate INDs, which take an error measure into account.
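The linear scan over sorted, duplicate-free value sets described by Bauckmann et al. can be sketched as follows (a simplified reconstruction from the description above, not their actual implementation):

```python
def sorted_inclusion(a_values, b_values):
    """Check whether the values of attribute a are included in attribute b by
    a single merge-style scan over both sorted, duplicate-free value sets.
    The scan is interrupted early as soon as a value of a is proven absent."""
    a = sorted(set(a_values))
    b = sorted(set(b_values))
    i = j = 0
    while i < len(a):
        if j == len(b) or a[i] < b[j]:
            return False  # a[i] cannot occur further right in sorted b
        if a[i] == b[j]:
            i += 1        # a[i] found; advance to the next value of a
        j += 1
    return True

assert sorted_inclusion([3, 1, 2], [0, 1, 2, 3, 4])  # a ⊆ b
assert not sorted_inclusion([1, 5], [1, 2, 3])       # 5 is missing from b
```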
Inclusion dependencies may yield many false positive results, i.e. pairs of columns that do not correspond to a foreign key constraint. This is especially the case when databases make use of incremental Autonumber fields. For this reason, Zhang et al. propose in [ZHO+10] a method for pruning these false positives. Their approach is based on the proposition that the distinct values of a Foreign Key have the same distribution as the Primary Key. They evaluate the randomness of the values of pairs that satisfy an inclusion dependency in order to determine how likely it is that the pair constitutes a genuine foreign key constraint.
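As a rough illustration of the intuition behind this pruning (not Zhang et al.'s actual statistic, which is more refined), one can compare how far the distinct foreign key values spread across the primary key domain; a coincidental inclusion between two Autonumber columns typically covers only a small prefix:

```python
def coverage(fk_values, pk_values):
    """Fraction of the primary key's value range spanned by the distinct
    foreign key values. Crude stand-in for a distribution comparison:
    a genuine FK tends to sample the whole PK domain, while a spurious
    Autonumber inclusion usually covers only a short prefix."""
    fk, pk = sorted(set(fk_values)), sorted(set(pk_values))
    pk_range = (pk[-1] - pk[0]) or 1
    return (fk[-1] - fk[0]) / pk_range

pk = list(range(1, 101))      # Autonumber primary key 1..100
genuine_fk = [3, 40, 77, 98]  # references spread over the whole domain
spurious = [1, 2, 3]          # another table's small Autonumber PK
assert coverage(genuine_fk, pk) > coverage(spurious, pk)
```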
Chapter 7
Conclusion and Future Work

7.1 Summary and Conclusion

This thesis was concerned with the migration of Access databases to the Mendix Platform. We examined the schema elements of the two systems and, based on the mappings, implemented a tool that performs an automated migration of an Access schema to a Mendix Domain Model. The relational schema of Access and the Mendix Domain Model are based on similar concepts and constructs, and the mapping was quite straightforward. Our tool correctly migrates the main elements of an Access schema. Human intervention is needed in cases where the library we used could not retrieve the information we needed. Certain mismatches that occur can be resolved by an extension of Mendix constructs and languages. Additionally, we tried to recover the foreign key constraints of an Access database by examining the data for approximate inclusion dependencies. Our results showed that those constraints can be successfully recovered. Nonetheless, a very large number of false positives is also detected in databases that make extensive use of incremental Autonumbers. For those cases, a different approach might be needed, or further research in order to prune the false positive results.

7.2 Future Work

Foreign Key Recovery

In order to prune the false positive results that inclusion dependencies yield for Autonumber fields, Zhang et al. [ZHO+10] suggest computing the randomness of values. Specifically, their approach is to test whether the distinct values of the foreign key have the same distribution as the values of the primary key. This method could be applied in order to check whether it provides better primary key - foreign key suggestions. Future work might also try different methods of recovering those constraints, for example by analyzing the join conditions in SQL statements.
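The last idea, recovering candidate foreign keys from join conditions, could be prototyped along these lines (a hypothetical sketch; the pattern only handles the simple `JOIN ... ON a.x = b.y` form):

```python
import re

# Matches the simple form "JOIN <table> ON <t1>.<c1> = <t2>.<c2>".
JOIN_ON = re.compile(
    r"JOIN\s+\w+\s+ON\s+(\w+)\.(\w+)\s*=\s*(\w+)\.(\w+)",
    re.IGNORECASE,
)

def fk_candidates(sql):
    """Extract the (table, column) pairs equated in JOIN conditions as
    likely primary key - foreign key pairs."""
    return [((t1, c1), (t2, c2)) for t1, c1, t2, c2 in JOIN_ON.findall(sql)]

sql = "SELECT * FROM Orders JOIN Customers ON Orders.CustomerID = Customers.ID"
assert fk_candidates(sql) == [(("Orders", "CustomerID"), ("Customers", "ID"))]
```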
Bibliography

[Acc99] Database Replication in Microsoft Jet 4.0. URL: en-us/library/office/aa140024(v=office.10).aspx.

[AGA+06] L. Andrade, J. Gouveia, M. Antunes, M. El-Ramly, and G. Koutsoukos. Forms2Net - migrating Oracle Forms to Microsoft .NET. In Ralf Lämmel, João Saraiva, and Joost Visser, editors, Generative and Transformational Techniques in Software Engineering, volume 4143 of Lecture Notes in Computer Science. Springer Berlin Heidelberg. doi: / _8.

[BCFM09] C. Batini, C. Cappiello, C. Francalanci, and A. Maurino. Methodologies for data quality assessment and improvement. ACM Comput. Surv., 41(3):16:1-16:52, July. doi: /

[BLN06] J. Bauckmann, U. Leser, and F. Naumann. Efficiently computing inclusion dependencies for schema discovery. In Data Engineering Workshops, Proceedings, 22nd International Conference on, pages 2-2. doi: /icdew

[BLNT07] J. Bauckmann, U. Leser, F. Naumann, and V. Tietz. Efficiently detecting inclusion dependencies. In Data Engineering, ICDE, IEEE 23rd International Conference on, April. doi: /icde

[BLW+97] J. Bisbal, D. Lawless, B. Wu, J. Grimson, V. Wade, R. Richardson, and D. O'Sullivan. An overview of legacy information system migration. In Software Engineering Conference, Asia Pacific... and International Computer Science Conference, APSEC '97 and ICSC '97, Proceedings. doi: /apsec

[BLWG99] J. Bisbal, D. Lawless, B. Wu, and J. Grimson. Legacy information systems: issues and directions. Software, IEEE, 16(5). doi: /

[BS93] M. L. Brodie and M. Stonebraker. Darwin: On the incremental migration of legacy information systems. Distributed Object Computing Group, Technical Report TR, GTE Labs Inc.

[DMP05] Fabien De Marchi and Jean-Marc Petit. Approximating a set of approximate inclusion dependencies. In Intelligent Information Processing and Web Mining. Springer.

[DSDR07] C. Drumm, M. Schmitt, H. Do, and E. Rahm. QuickMig: automatic schema matching for data migration projects.
In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management. ACM.

[HCHH08] J.-L. Hainaut, A. Cleve, J. Henrard, and J.-M. Hick. Migration of legacy information systems. In Software Evolution. Springer Berlin Heidelberg. URL: http://dx.doi.org/ / _6, doi: / _6.
[MLP09] F. D. Marchi, S. Lopes, and J. Petit. Unary and n-ary inclusion dependency discovery in relational databases. Journal of Intelligent Information Systems, 32(1):53-73. doi: / s x.

[MRA05] A. MacDonald, D. Russell, and B. Atchison. Model-driven development within a legacy system: an industry experience report. In Software Engineering Conference, Proceedings, Australian, pages 14-22. doi: /aswec

[SSPA+12] F. Sidi, P. H. Shariat Panahy, L. S. Affendey, M. A. Jabar, H. Ibrahim, and A. Mustapha. Data quality: A survey of data quality dimensions. In Information Retrieval & Knowledge Management (CAMP), 2012 International Conference on. doi: /InfRKM

[WLB+97] B. Wu, D. Lawless, J. Bisbal, J. Grimson, V. Wade, D. O'Sullivan, and R. Richardson. Legacy system migration: A legacy data migration engine. In Proceedings of the 17th International Database Conference (DATASEM '97).

[ZHO+10] M. Zhang, M. Hadjieleftheriou, B. C. Ooi, C. M. Procopiuc, and D. Srivastava. On multi-column foreign key discovery. Proceedings of the VLDB Endowment, 3(1-2).
Appendices
Appendix A
Access Field & Table Properties

A.1 Field Properties

Each entry lists the Access field property, the field types it applies to (in parentheses), and the corresponding Mendix mapping.

Field Size (Text, Number, Autonumber)
  Determines the maximum number of characters for Text Fields. For Number Data Types, it specifies the type of Number (Byte, Integer, etc).
  Mendix: The property Max Length is available for Strings in the Domain Model.

Default Value (Text, Memo, Number, Date/Time, Currency, Yes/No, Hyperlink)
  Specifies the default value for new records.
  Mendix: This property is available in the Domain Model for the respective Attribute Types as well.

Precision (Numbers of type Decimal)
  Specifies the number of digits that can be stored both to the left and right of the decimal point for Decimal Numbers.
  Mendix: The property is not available for Mendix Float Numbers.

Expression (Calculated)
  Holds the expression for a Field that is Calculated.
  Mendix: This property can be mapped to a Microflow in Mendix.

Result Type (Calculated)
  Holds the actual data Type for the Calculated Field, such as Text, Integer, etc.
  Mendix: This can be mapped to the Attribute Type.

Input Mask (Text, Date/Time)
  A pattern for all data to be entered in this field. Special characters are used that specify the type of data, such as a character or a number that must be entered for each character in the input mask.
  Mendix: This property is not available in the Mendix Domain Model. It can be specified in Mendix Pages as a property of input widgets; however, different special characters are used.

Validation Rule (Text, Memo, Hyperlink, Number, Currency, Date/Time, Yes/No)
  An expression that limits the values that can be entered in that field.
  Mendix: Validation Rules are available in the Domain Model. However, the expression language used in MS Access is richer than the options available in Mendix Validation Rules.

Validation Text (Text, Memo, Hyperlink, Number, Currency, Date/Time, Yes/No)
  Specifies the message displayed when an invalid value is entered in that field.
  Mendix: This property can be mapped to the Error Message property in Mendix Validation Rules.

Required (Text, Memo, Hyperlink, Number, Currency, Date/Time, OLE Object, Attachment)
  Specifies whether a value is required in that field.
  Mendix: This can be mapped to a Required Validation Rule in Mendix. However, since OLE Object and Attachment Fields are mapped to a separate Entity, it is not possible to apply the property in that case.

Allow Zero Length (Text, Memo, Hyperlink)
  Specifies whether a zero-length string is a valid entry in this field.
  Mendix: Not applicable in the Domain Model.

Indexed (Text, Hyperlink, Memo, Number, Currency, Autonumber, Date/Time, Yes/No)
  This property sets a single-field index.
  Mendix: Indexes can be retrieved as separate objects from the Access schema and mapped to the Mendix Domain Model, so this property does not need a separate mapping.

Unicode Compression (Text, Memo, Hyperlink)
  Specifies whether to apply Unicode compression for this field or not.
  Mendix: Not applicable in the Domain Model.

IME Mode, IME Sentence Mode (Text, Memo, Hyperlink)
  Sets the mode of the Input Method Editor for a field. It applies to East Asian languages and is ignored for other applications.
  Mendix: Not applicable in the Domain Model.

Smart Tags (Autonumber, Number, Currency, Text, Memo, Hyperlink, Date/Time)
  MS Access has a number of predefined Actions that can be tagged to a field, e.g. sending an e-mail with Outlook. Those are defined via this property.
  Mendix: Not applicable in the Domain Model.

Format (Text, Memo, Hyperlink, Number, Autonumber, Currency, Date/Time, Yes/No, Calculated)
  Used to customize the way data are displayed and printed. It is a GUI-related setting, with different values depending on the Data Type of the field.
  Mendix: Such a property is not available in the Mendix Domain Model. It is possible to format fields in Mendix Pages, although CSS is used for styling.

Text Format (Memo)
  Specifies whether to store Plain Text or Rich Text, the latter storing text as HTML and allowing rich formatting.
  Mendix: Such a feature is not available in the Domain Model. It is possible to display the contents of such a field with a custom widget in Mendix Pages.

Decimal Places (Number, Currency)
  Specifies the number of decimal places displayed (not the ones that are actually stored in the database).
  Mendix: Not applicable in the Domain Model. There is an option to specify this property in widgets of Mendix Pages, but this also affects the number of digits that are stored in the database.

Show Date Picker (Date/Time)
  Determines whether to display a Date Picker for the relevant field in a Form.
  Mendix: Not applicable in the Domain Model. In Mendix Pages, a Date Picker is always displayed for Date/Time Attributes.

Caption (Fields of all data types)
  Specifies the text used for the label of the field when used on a Form.
  Mendix: GUI-related setting, not applicable in the Mendix Domain Model. In Mendix Pages, there is a Caption property for Label widgets; by default, the Caption has the name of the Attribute as its value.

Text Align (Fields of all data types except Attachment)
  Specifies the text alignment in controls bound to that particular field.
  Mendix: GUI-related setting, not applicable in the Mendix Domain Model. It is possible to set it in widgets of Mendix Pages with CSS.

Append Only (Memo, Hyperlink)
  Specifies whether to keep a history on the field.
  Mendix: Not available in the Domain Model.

Display Control (Text, Yes/No, Number)
  Specifies the type of control that is used when this field is displayed in a form.
  Mendix: Not applicable in the Domain Model.

Row Source Type (Lookup Fields)
  Specifies the data source for the field (Table, Query or a user-defined Value List).
  Mendix: Not applicable in the Domain Model.

Row Source (Lookup Fields)
  Holds the Table, Query or Value List from which data for this field is retrieved.
  Mendix: Not applicable in the Domain Model.

Bound Column (Lookup Fields)
  Specifies the bound column for this field.
  Mendix: Not applicable in the Domain Model.

Column Count (Lookup Fields)
  Specifies the number of columns to be displayed.
  Mendix: Not applicable in the Domain Model.

Column Heads (Lookup Fields)
  Specifies whether to display field names as column headings when displaying the list to the user.
  Mendix: Not applicable in the Domain Model.
Column Widths (Lookup Fields)
  Specifies the column widths in a multi-column list box or combo box.
  Mendix: Not applicable in the Domain Model.

List Rows (Lookup Fields)
  Specifies the maximum number of rows to be displayed in a combo box list.
  Mendix: Not applicable in the Domain Model.

List Width (Lookup Fields)
  Specifies the width of the combo box drop-down list.
  Mendix: Not applicable in the Domain Model.

Allow Value List Edits (Lookup Fields)
  Specifies whether it is allowed to edit the list items.
  Mendix: Not applicable in the Domain Model.

List Items Edit Form (Lookup Fields)
  Specifies the form to open in order to edit the list items.
  Mendix: Not applicable in the Domain Model.

Show Only Row Source Values (Lookup Fields)
  Mendix: Not applicable in the Domain Model.

A.2 Table Properties

Default View
  Specifies how the table is displayed by default when the user opens it. In Datasheet View data is presented in rows and columns; Pivot Tables can also display sums, while Pivot Charts present graphs with data totals or summaries.
  Mendix: Not applicable in the Domain Model.

Read Only When Disconnected
  Applicable for web databases, this property specifies whether the user can update a table that is linked to a Microsoft SharePoint Services site when working offline.
  Mendix: Not applicable in the Domain Model.

Subdatasheet Expanded
  Specifies whether the subdatasheets are automatically expanded when the user views the table in Datasheet view.
  Mendix: Not applicable in the Domain Model.

Subdatasheet Height
  Specifies the height of the subdatasheet.
  Mendix: Not applicable in the Domain Model.

Orientation
  Left-to-right or right-to-left; it affects the order of the table's columns in Datasheet view.
  Mendix: Not applicable in the Domain Model.

Description
  Description for the table that appears in tooltips.
  Mendix: Not applicable in the Domain Model.

Validation Rule
  Rules for the whole table, possibly using a combination of table columns.
  Mendix: Validation Rules in Mendix can be applied to single attributes only. Therefore, a mapping is not possible unless they get extended to apply to multiple attributes.

Validation Text
  The message that is displayed when a record violates the expression in the Validation Rule.
  Mendix: Error message.

Filter
  Criteria that are used to display only matching rows in Datasheet view.
  Mendix: Not applicable in the Domain Model.

Order By
  Specifies the field or fields that are used for the default order of rows in Datasheet view.
  Mendix: Not applicable in the Domain Model.

Subdatasheet Name
  Specifies the table or query that supplies the data for the subdatasheet.
  Mendix: Not applicable.

Link Child Fields
  When defining a subdatasheet, this property holds the field that is used as a foreign key or matching field that will provide data for the subdatasheet.
  Mendix: Not applicable in the Domain Model.

Link Master Fields
  When defining a subdatasheet, this property holds the field that is used as the primary key or matching field for the main table or query.
  Mendix: Not applicable in the Domain Model.

Filter On Load
  Specifies whether to automatically apply the filter criteria in the Filter property when the table is opened in Datasheet view.
  Mendix: Not applicable in the Domain Model.

Order By On Load
  Specifies whether to automatically apply the sort criteria when the table is opened in Datasheet view.
  Mendix: Not applicable in the Domain Model.
Appendix B
Access Expression Language - Mendix Expressions

Access Arithmetic Operators    Mendix Expressions Arithmetic Operators
*                              *
/                              div or :
\                              Not available
Mod                            mod
^                              Not available

Table B.1: Access - Mendix Expressions Arithmetic Operators

Access Comparison Operators    Mendix Expressions Comparison Operators
<                              <
<=                             <=
>                              >
>=                             >=
=                              =
<>                             !=

Table B.2: Access - Mendix Expressions Comparison Operators

Access Special Operators       Mendix Operators
Is Null (or Is Not Null)       = empty (or != empty)
Like pattern                   Not available
Between val1 And val2          Not available, but the operation can be performed using the <= and >= operators
In(val1, val2, ...)            Not available

Table B.3: Access - Mendix Expressions Special Operators

Access Concatenation Operators    Mendix Concatenation Operators
$                                 Not available

Table B.4: Access - Mendix Expressions Concatenation Operators
Access Logical Operators    Mendix Expressions Logical Operators
And                         and
Or                          or
Eqv                         Not available
Not                         not
Xor                         Not available

Table B.5: Access - Mendix Expressions Logical Operators
Appendix C
Access SQL - Mendix XPath Operators

SQL Comparison Operators    XPath Comparison Operators
<                           <
<=                          <=
>                           >
>=                          >=
=                           =
<>                          !=

Table C.1: Access SQL - Mendix XPath Comparison Operators

SQL Special Operators       XPath Operators
Is Null (or Is Not Null)    = NULL (or != NULL)
Like pattern                Not available
Between val1 And val2       Not available, but the operation can be performed using the <= and >= operators
In(val1, val2, ...)         Not available

Table C.2: Access SQL - Mendix XPath Special Operators

SQL Logical Operators       XPath Logical Operators
And                         and
Or                          or
Eqv                         Not available
Not                         not
Xor                         Not available

Table C.3: Access SQL - Mendix XPath Logical Operators
SQL Arithmetic Operators    XPath Arithmetic Operators
*                           *
/                           div
\                           Not available
Mod                         Not available
^                           Not available

Table C.4: Access SQL - Mendix XPath Arithmetic Operators
Appendix D
Access SQL - Mendix OQL Predicates & Operators

SQL Predicates              OQL Predicates
ALL                         Default choice
DISTINCT                    Not available
DISTINCTROW                 DISTINCT
[TOP n [PERCENT]]           [LIMIT number]

Table D.1: Access SQL - Mendix OQL Predicates

SQL Comparison Operators    OQL Comparison Operators
<                           <
<=                          <=
>                           >
>=                          >=
=                           =
<>                          !=

Table D.2: Access SQL - Mendix OQL Comparison Operators

SQL Special Operators       OQL Operators
Is Null (or Is Not Null)    = NULL (or != NULL)
Like pattern                LIKE
Between val1 And val2       Not available, but the operation can be performed using the <= and >= operators
In(val1, val2, ...)         IN

Table D.3: Access SQL - Mendix OQL Special Operators

SQL Logical Operators       OQL Logical Operators
And                         AND
Or                          OR
Eqv                         Not available
Not                         NOT
Xor                         Not available
53 Table D.4: Access SQL - Mendix OQL Logical Operators SQL Arithmetic Operators OQL Arithmetic Operators * * / div \ Not available Mod % ˆ Not available Table D.5: Access SQL - Mendix OQL Arithmetic Operators SQL Aggregate Functions OQL Aggregate Functions Avg AVG Count COUNT First Not available Last Not available Min MIN Max MAX StDev Not available StDevP Not available Sum SUM Var Not available VarP Not available Table D.6: Access SQL - Mendix OQL Aggregate Functions 52
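As a concrete illustration of Tables D.2-D.5, the sketch below performs a naive token-level rewrite of an Access SQL condition into its OQL form. It is a hypothetical, simplified illustration of the mappings, not the migration tool built for this thesis, and it covers only a subset of the tables (a real translator would need a proper SQL parser).

```python
import re

# Hypothetical sketch following Tables D.2-D.5: rewrite a subset of
# Access SQL tokens into their Mendix OQL counterparts. Identity
# mappings (AND, OR, NOT) are kept to mirror Table D.4.
SQL_TO_OQL = {
    "<>": "!=",            # comparison (Table D.2)
    "IS NULL": "= NULL",   # special (Table D.3)
    "IS NOT NULL": "!= NULL",
    "MOD": "%",            # arithmetic (Table D.5)
    "AND": "AND",          # logical (Table D.4)
    "OR": "OR",
    "NOT": "NOT",
}

def token_pattern(tok: str) -> str:
    """Build a regex for a token, adding word boundaries only where
    the token edge is alphanumeric (so '<>' still matches)."""
    left = r"\b" if tok[0].isalnum() else ""
    right = r"\b" if tok[-1].isalnum() else ""
    return left + re.escape(tok) + right

def rewrite(sql: str) -> str:
    """Replace mapped Access SQL tokens with their OQL equivalents,
    longest patterns first so 'IS NOT NULL' wins over 'IS NULL'."""
    for tok in sorted(SQL_TO_OQL, key=len, reverse=True):
        sql = re.sub(token_pattern(tok), SQL_TO_OQL[tok], sql,
                     flags=re.IGNORECASE)
    return sql
```

For instance, `Age <> 30 AND Phone IS NOT NULL` would be rewritten to `Age != 30 AND Phone != NULL`.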