KNOWLEDGE-BASED DEVELOPMENT
PHILOSOPHY AND THEORETICAL FOUNDATION OF GENEXUS

By Breogán Gonda and Nicolás Jodal
Copyright Artech. All rights reserved. May 2007

SUMMARY: This is a simple, systematic outline of the philosophy and theoretical foundation of GeneXus. Even though most of these elements are well known in the GeneXus community, they are only partially documented.

INTRODUCTION

The purpose of this paper is to help better understand the basic ideas and theoretical foundation that made GeneXus possible, as well as its essence and future developments.

The first objective of GeneXus was to create a robust data model. Only after this model had been carefully built was it possible to consider expanding it and adding declarative elements such as rules, formulas and screens, trying to describe users' views of data and thus include all the knowledge in business systems.

Why try to encompass all the knowledge in business systems? Because it is a necessary condition for designing, generating and maintaining any company's business computer systems in a completely automated fashion. It seemed obvious that the IT community would eventually choose this path and that we should take the lead as soon as possible.

But how could we encompass all the knowledge in business systems? How could we include the many different cases of business systems?

The first GeneXus version (100% declarative) was aimed at generating and maintaining 70% of the programs. These programs were handled automatically by the system, while the remaining 30% had to be manually programmed and maintained. Prospects understood that automatically solving 70% of their problem was very good, but they doubted that automatic maintenance would be possible. The first ones to adopt GeneXus did so only expecting to increase development productivity. After a few months, those customers were able to appreciate a large increase in development productivity, but much more so the advantages of automatic maintenance.
One year after launch, the GeneXus project started facing great pressure from customers. The success of automatic maintenance and its advantages drove them to demand "automatic maintenance of everything," which required "automatic generation of everything." This was unprecedented in the global market, and many years later, GeneXus is still the only product in the world to solve these problems.

So, what is GeneXus' objective today? The objective of GeneXus is to achieve very good automated management of business systems knowledge.
That's the essence of GeneXus. Once this objective is achieved, a wide set of subproducts becomes possible, such as:

- Automatic design, generation and maintenance of the database and application programs needed by a company.
- Generation, based on the same knowledge, for multiple platforms.
- Integration of knowledge from different sources to address complex needs at lower costs in terms of time and money.
- Generation for future technologies when they become available, from the knowledge currently stored by GeneXus customers in their knowledge bases.

GeneXus works with pure knowledge, which remains relevant regardless of the technologies current at any given time. The entire development has followed certain guidelines, and this document aims at providing a simple, conceptual explanation of these guidelines.
NEW PARADIGM: DESCRIBING INSTEAD OF PROGRAMMING

How do we intend to describe reality? We intend to "describe" instead of "program." We intend to maximize declarative descriptions and minimize procedural specifications.

Why describe instead of program? This objective constitutes a paradigm change and implies a culture shock. The problem with paradigm changes is their slow adoption, but if they are correct (and this is undoubtedly so!), at some point events occur that bring about their fast, widespread adoption.

What are those events? Today, most systems are manually developed and maintained, but how has the market evolved in the last 40 years (data up to 2006)?

- System complexity has increased by 2000%.
- Programming language productivity has increased by 150%.

This situation is unsustainable for businesses; costs are skyrocketing. We can, nevertheless, group these costs around two dimensions: Time and Money.

Thanks to the availability of fast, cheap communications, companies have discovered that they can act on the Money dimension by hiring system development and maintenance in low-wage countries, and they have done so extensively in recent years. Actually, the cost problem hasn't been solved; instead, the unit of measurement has been changed. The number of work hours is basically the same (it's the same workload), but the price per hour is much lower. If we measure it in money, there was a dramatic cost reduction.

What about the Time dimension? The above doesn't apply to time. Business systems have to address new needs: new devices, new users, new usage methods, new integration possibilities. That's why they grow increasingly complex, and companies are subject to an ever-critical time-to-market that they can't change. As a result, more companies are being adversely affected by the long development and maintenance times of new and current solutions. Today, and especially in the near future, it's impossible to do good business unless you rely on adequate systems.
This situation can't go on for long. Slowly at first, and currently at a faster pace, more examples emerge: companies in most developed countries are increasingly resorting to outsourcing system development, generally accompanied by offshoring to low-wage countries; but at the same time, a growing number of companies are taking the opposite direction because of unsatisfactory outsourcing experiences.
More and more companies find it very difficult to fire their own technicians and transfer their jobs to foreign workers in distant countries. In order to avoid that, they need their in-house costs to be similar to those of outsourcing. Only the adoption of high-level technologies will make that possible.

A dramatic increase in productivity is necessary, but the productivity of programming languages has long since reached a plateau. Can this problem be solved through better programming languages? The facts show it cannot. The problem doesn't lie in programming languages but in programming itself.

So, how can we achieve the necessary increase in productivity? With knowledge-based, not programming-based, development: the solution is to describe instead of program! At Artech we're sure of that, and we're getting ready for the market explosion of knowledge-based development that will take place in the coming years.

DATA MODEL

Historically, the IT community started working with one rigid physical model. Later on it added other models in an attempt to obtain more flexibility and some degree of description permanence. The most outstanding work on the subject is the 1978 report of the ANSI SPARC [1] group, which introduces three models:

- External Model: where external views are represented.
- Logical or Conceptual Model: another model, usually an E-R model, obtained by abstraction from reality.
- Physical Model: basically, the database schema.

The idea was to develop all three models in parallel so that programs only interacted with the External Model and, through the Logical Model, map those programs to the database (Physical Model). In this way, programs would be independent from the database. The report was praised at the time and later almost forgotten. In the early 1980s, only one manufacturer tried to implement this three-model schema: Cincom Systems, for their product SUPRA and for different languages accessing SUPRA databases.
Beyond all this, the ANSI SPARC report is still valid. The main difference today is the available technology, the result of the significant technological changes that have occurred in recent years. In view of current technology, we conclude that there may be several models, but the most important model for users and developers is the External Model: it gathers external knowledge, and everything else (such as auxiliary models that may be helpful) must be automatically inferred from this External Model.

In this paper we focus on the External Model, which contains genuine knowledge and the essence and uniqueness of GeneXus (the What). In the face of new technology possibilities and new customer needs, the first thing to do is expand this External Model.
NECESSARY MODEL FEATURES

In the future, GeneXus object types will evolve, but following certain principles explained here. However, there is one preliminary question: who has the knowledge in companies?

The only objective and detailed knowledge is that of users of all levels about the views of the data they use, need or want. There isn't any objective knowledge about the data itself. That's why we must favor the concrete over the abstract. There will be many users with different profiles and roles within the company, and they deal better with concrete elements of their surroundings than with abstract concepts. That's why the representation of knowledge should be simple, detailed and objective. It seems appropriate to rely on an External Model that gathers these views. An External Model can't have any physical or internal elements such as files, tables, entities, entity relationships, indexes or anything else that may be automatically inferred from it.

What elements are desirable and what elements are unacceptable in GeneXus? We're trying to create a model with the following features:

- Automatically manageable knowledge: a model that is automatically manageable by computer software. That is to say, our model's knowledge should be automatically understandable and operable.
- Consistency: a model that is always consistent.
- Orthogonality: the objects that constitute a model have to be independent from each other. Adding, modifying or deleting an object won't require changing any other object.
- Automatic design, generation and maintenance of the database and programs: essential tasks that computer software can perform automatically from the model.
- Scalability: the previous items describe the qualitative features that our model should have. But since we intend to solve big problems, we need the model's storage, access and inference mechanisms to work efficiently regardless of problem size.
REFERENCE FRAMEWORK

How can we accurately represent knowledge? By creating a set of predefined object types that enable the representation of reality and are automatically understandable and operable.

The path usually followed by the industry is to start from an E-R or similar data model. But how can we obtain an accurate data model? A corporate data model needs hundreds or even thousands of tables. Traditionally, data models were designed through an abstraction process, which implies communicating with several end users who have the necessary knowledge about the company, and identifying their relevant objects and relationships in order to include the corresponding entities and relationships in the model.
This abstraction process was highly subjective: who in the company had a deep knowledge of the data? Who really knew the relevant objects and relationships that were the data input for the model? The answer is clear and conclusive: nobody. So it was necessary to resort to the knowledge of different people, and each one added their own subjectivity.

Where was the objective, detailed knowledge necessary to build our model? This knowledge about data certainly didn't exist in the company. However, all users of any level knew very well the data views that they used, needed, or wanted: nobody knows the data, but everybody usually works with views of this data. These data views were chosen as basic input for the model, but one question remained: how to describe them objectively. The answer was to create a solid reference framework on which to base the other descriptions. This reference framework is constituted by Attributes.

Basic Semantic Element: Attribute

Each attribute has a name, a feature set and a meaning. In order to obtain a model of great expressive power, we need to represent semantic elements besides the syntactic elements, which are well known to the IT community. But it's difficult to work automatically with semantics. To automatically manage a semantic element, it has to be given a syntactic representation. Only one semantic element was chosen to be represented in our model: attribute meaning. So, the second basic element is Meaning, and in our model it's defined as follows: meaning(attr) = name(attr).

Attribute names thus become essential to the model. Clear rules are needed to assign them so that consistency is ensured throughout the model. To that end, we adopted the URA (Universal Relational Assumption). It means that an attribute will have the same name everywhere, and there won't be two different attributes (with different meanings) that share the same name. It's important that names are self-explanatory and easy for other users to understand.
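The URA naming discipline can be sketched as a small attribute registry that rejects conflicting definitions. This is an illustrative toy, not a GeneXus internal; all names in it are hypothetical.

```python
# Minimal sketch of a Universal Relational Assumption (URA) check:
# an attribute name is bound to exactly one meaning across the whole model.

class AttributeRegistry:
    def __init__(self):
        self._meanings = {}  # attribute name -> meaning

    def define(self, name, meaning):
        """Register an attribute; reject a second, different meaning for the same name."""
        existing = self._meanings.get(name)
        if existing is not None and existing != meaning:
            raise ValueError(
                f"URA violation: '{name}' already means '{existing}', not '{meaning}'")
        self._meanings[name] = meaning

registry = AttributeRegistry()
registry.define("Customer_Name", "legal name of the customer")
registry.define("Customer_Name", "legal name of the customer")  # same meaning: fine
try:
    registry.define("Customer_Name", "nickname of the customer")  # conflict
except ValueError as e:
    print(e)
```

Subtypes, which the text introduces next precisely because the URA alone isn't enough, would relax this check by letting a differently named attribute declare itself a subtype of an existing one.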
So, the third basic element is the URA (Universal Relational Assumption). Because reality has shown that the URA isn't enough, we've created attribute subtypes and, later on, attribute group subtypes. The fourth element is constituted by subtypes.

IN SUM: attributes as described here (considering meaning, the URA, and subtypes) are the basic semantic element of the GeneXus theory. Attributes constitute the reference framework on which we build GeneXus models.

DESCRIPTION OF REALITY

After introducing the reference framework, we'll describe reality through a set of object type instances. They will be defined on the basis of their attributes, which will be used to gather characteristics of the reality to be represented.

TRANSACTIONS

Every user has one or more views of the data he/she uses daily. Among these views, we can make a first group with those used to manage data (with restricted access to add, modify, delete and view it). These user views are called Transactions, and they are the first GeneXus object type. Each transaction has a set of elements:

Transaction data structure: data is presented within a structure, and we have to accurately gather these structures. Below is the structure of a well-known transaction, a regular invoice.
[INVOICE]
    Invoice_Code *
    Invoice_Date
    Customer_Name
    Customer_Address
    (Product_Code *
     Product_Description
     Invoice_Product_Quantity_Sold
     Invoice_Product_Sales_Price
     Invoice_Line_Amount)
    Invoice_Subtotal
    Invoice_Discount
    Invoice_VAT
    Invoice_Total

This representation has three attribute groups: a prolog or header that appears once at the top, a body that appears several times, and an epilogue or footer that appears only once. In the header, next to the Invoice_Code attribute, there is an asterisk (*), which means that for each Invoice (each occurrence of the header and footer) there is only one Invoice_Code. Next, there is a set of attributes in parentheses, meaning that this attribute set can occur several times within each Invoice; this repetitive group is usually called the "body." The body must have an identifier that clearly marks each Invoice line; in this case it's Product_Code, and this condition is indicated with an asterisk (*) on its right side. After the repetitive group there's a footer.

This Data Structure Diagram, and the one currently used by GeneXus (which has more graphic features), are based on the diagrams introduced by the works of Warnier-Orr [2] and Jackson [3].

Rules: most likely, there will be a set of rules for the transaction, as shown below:

- There won't be two Invoices with the same Invoice_Code.
- The Invoice_Code will be assigned correlatively.
- The Invoice_Date will default to the date of issuance.
- The Invoice_Date can't be earlier than the date of issuance.
- An Invoice will be rejected if any Product_Stock is negative.
- An Invoice will be rejected if the Customer_Debit_Balance exceeds the customer's credit limit.
- When the acceptance of an Invoice determines that Product_Stock < Product_Reorder_Point for any product, the Provisioning(Product) routine should be activated.
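The header/body/footer structure and a couple of the rules above can be sketched with plain data classes. This is a hand-written illustration under the attribute names of the diagram, not GeneXus-generated code; the validation logic covers only two of the listed rules.

```python
# Sketch of the invoice transaction structure: header (once),
# repetitive body (invoice lines), footer attributes derivable later.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InvoiceLine:                      # the repetitive group ("body")
    product_code: str                   # line identifier (the * in the diagram)
    product_description: str
    quantity_sold: int
    sales_price: float

@dataclass
class Invoice:
    invoice_code: int                   # header identifier (the * in the diagram)
    invoice_date: date
    customer_name: str
    customer_address: str
    lines: list = field(default_factory=list)

    def validate(self, issuance_date: date):
        # Rule from the text: Invoice_Date can't be earlier than issuance.
        if self.invoice_date < issuance_date:
            raise ValueError("Invoice_Date earlier than the date of issuance")
        # Rule from the text: each body line is identified by Product_Code.
        if len({l.product_code for l in self.lines}) != len(self.lines):
            raise ValueError("duplicate Product_Code in invoice body")

inv = Invoice(1, date(2007, 5, 1), "ACME", "Main St. 1",
              [InvoiceLine("P1", "Widget", 3, 10.0)])
inv.validate(issuance_date=date(2007, 5, 1))   # passes
```

In GeneXus itself these rules are declared, not hand-coded; the point of the sketch is only to make the structure and the rule semantics concrete.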
Formulas: most likely, there will be a set of formulas for the transaction, as shown below:

- Invoice_Line_Amount = Invoice_Product_Quantity_Sold * Invoice_Product_Sales_Price
- Invoice_Subtotal = sum of the Invoice_Line_Amount values of the Invoice (Invoice_Code)
- Invoice_Discount = discount calculation function (Invoice_Code)
- Invoice_VAT = VAT calculation function (Invoice_Subtotal - Invoice_Discount)
- Invoice_Total = Invoice_Subtotal - Invoice_Discount + Invoice_VAT
- Customer_Debit_Balance = sum of unpaid invoices (Customer_Code)

Display elements: an object has to be associated with display elements such as screen formats for Windows or the Web, graphic and dialog box styles, etc.

Transactions are declarative, and a great part of the desired description of reality is obtained by capturing their knowledge and systematizing it in a Knowledge Base.

CONSIDERATIONS ABOUT TRANSACTIONS AND CODD'S RELATIONAL THEORY [4]

A quick look at this object may suggest that our model contravenes Codd's Relational Model: new elements such as formulas aren't included in Codd's Relational Model, and repetitive groups are explicitly left out. But the Relational Model and our model have different objectives. It should be made clear that Codd never said that reality is formed by a set of relations, which would be a big mistake. He said that the relational model is very good for storing and operating on data.

The Relational Model is aimed at accurately representing data in a database while satisfying some desirable conditions: eliminating redundancy, introducing a small set of rules so as to avoid the greatest sources of data inconsistency, and including a set of operators to manage data appropriately.

The External Model, in contrast, is used to obtain and store knowledge. This model aims at representing reality in the most direct and objective way possible. For this reason, we gather user views and store them in the model. Then, all the knowledge contained in them is captured and systematized in order to maximize the inference capabilities. Within this context, the Relational Model (or rather, a superset of this model) is used to represent and/or manage data.
It corresponds to the so-called Internal or Physical Model referred to in the ANSI SPARC report mentioned above. In our theory, the Relational Model is extensively used.
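As a concrete illustration of how declarative formulas like the invoice's might be evaluated once the data sits in relational storage, here is a minimal sketch. Plain Python lists of dicts stand in for tables, and the discount and VAT policies are hypothetical placeholders, not GeneXus functions.

```python
# Toy "INVOICELINE table": rows belonging to Invoice_Code 1.
invoice_lines = [
    {"Invoice_Code": 1, "Quantity_Sold": 3, "Sales_Price": 10.0},
    {"Invoice_Code": 1, "Quantity_Sold": 2, "Sales_Price": 5.0},
]

def line_amount(row):
    # Invoice_Line_Amount = Invoice_Product_Quantity_Sold * Invoice_Product_Sales_Price
    return row["Quantity_Sold"] * row["Sales_Price"]

def subtotal(invoice_code):
    # Invoice_Subtotal = sum of the Invoice_Line_Amount values of the Invoice
    return sum(line_amount(r) for r in invoice_lines
               if r["Invoice_Code"] == invoice_code)

def total(invoice_code, discount_rate=0.10, vat_rate=0.22):
    # Invoice_Total = Invoice_Subtotal - Invoice_Discount + Invoice_VAT
    sub = subtotal(invoice_code)
    discount = sub * discount_rate            # placeholder discount policy
    vat = (sub - discount) * vat_rate         # placeholder VAT over the net amount
    return sub - discount + vat

print(round(total(1), 2))  # subtotal 40.0, discount 4.0, VAT 7.92 -> 43.92
```

The declarative formulas say only *what* each derived attribute is; code like the above is what a generator could emit from them, which is exactly the division of labor the text describes.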
PROCEDURES

Transactions, as we've seen, allow us to describe user data views declaratively and store the knowledge necessary to automatically design, generate and maintain the database and the programs that users need to manage their own data views. But this is not enough: batch and similar processes are also necessary, and these processes can't be inferred from the knowledge provided by data views. In the first GeneXus version released, transactions solved approximately 70% of the problem by generating the corresponding programs, and 30% of the programs were missing.

To meet 100% of customer needs, we decided to use an object that allowed procedural descriptions: a high-level procedural language. What features should this language have? The best procedural languages in the market were considered and their features analyzed. Interesting constructions were found, such as the well-known For Each... End For, which allows working with record sets in a simple, high-level way. However, all the languages studied have a serious drawback: they need references to low-level physical elements such as tables; therefore, when writing a program the developer must know from which table to take each attribute. For GeneXus this is unacceptable, because if an attribute is moved to another table, the description is no longer valid! In other words, the External Model doesn't include tables, so any specification that contains them may not be orthogonal, and we wouldn't be able to ensure automatic application maintenance.

The solution was to design a language whose syntax is inspired by the most advanced constructions of the best languages, and whose arguments are always part of the External Model. For example, instead of tables, we talk about the attribute sets necessary to perform an action. The system, at automatic generation time, determines which tables are involved and, if applicable, which indexes are used and how to use them. With all this, it generates the program.
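The idea behind the table-free For Each can be sketched as follows: the developer's "program" names attributes only, and a resolver (here a toy dictionary lookup) decides which table holds them. All table contents and helper names are hypothetical, and this toy only handles the single-table case (real resolution also involves joins and index selection).

```python
# Toy physical layer: the resolver's private knowledge, never the developer's.
TABLES = {
    "CUSTOMER": [{"Customer_Code": "C1", "Customer_Name": "ACME"}],
    "INVOICE":  [{"Invoice_Code": 1, "Customer_Code": "C1", "Invoice_Total": 43.92}],
}

def table_of(attribute):
    """Infer which table an attribute lives in (the system's job at generation time)."""
    for name, rows in TABLES.items():
        if rows and attribute in rows[0]:
            return name
    raise KeyError(attribute)

def for_each(*attributes):
    """Toy 'For Each': iterate the rows of whichever table holds the attributes."""
    table = table_of(attributes[0])
    for row in TABLES[table]:
        yield tuple(row[a] for a in attributes)

# The developer's code mentions attributes, never tables:
for code, tot in for_each("Invoice_Code", "Invoice_Total"):
    print(code, tot)
```

If Invoice_Total were later moved to a different table, only the resolver's data would change; the developer's loop, written purely over attributes, stays valid. That independence is precisely what makes automatic maintenance possible.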
Language syntax won't be dealt with in detail here. It's a procedural language with instructions to manage data that remain valid after structural changes in the database, and with the logical and arithmetical instructions common to high-level procedural languages. In this way, all the cases found have been solved, following GeneXus' guidelines on consistency and orthogonality.

What are the strengths and weaknesses of this object?

Great strength: its procedural capabilities complement the declarative capabilities of Transactions and (as we'll see with other GeneXus objects) they ensure the solution of any business system problem.
Weakness: the task of writing procedural descriptions is less productive than writing declarative descriptions, as in Transactions. To resolve this situation, elements have been added, and more will be added in the near future, to lessen the need for procedural descriptions.

The most important object types are Transactions and Procedures. They allow a reasonable description of reality, even though, with time, a significant set of new objects has been added: some of them to complete our description (GXflow, for describing business operations workflow and generating the necessary code to use the specialized management server); others to enable more user-friendly dialog boxes as new architectures supported them (Work Panels, Web Panels, GXportal); others to ease knowledge reuse in order to significantly increase productivity (Business Components, Data Providers, Mini-Procs, Patterns); and others to take database (GXquery) or data warehouse (GXplorer) querying to end users. These objects can be combined to meet the most complex needs.

Is GeneXus' object type set complete? It can't be demonstrated in theory; in practice, it has proven so. More than 40,000 GeneXus developers worldwide use GeneXus exclusively to develop mission-critical systems, a population that fairly represents the IT community in general and, through it, reality itself. This shows that the set completely meets the needs of today's business systems. But it isn't a closed set: in the future, new needs will probably arise that will lead us to introduce new object types to meet them.

THE RELATIONAL MODEL'S IMPORTANCE IN BUILDING THE KNOWLEDGE BASE

It's reasonable to believe that there's only one minimal relational model for a given set of data views. We won't prove it here, but if this statement is true, we can find a reverse engineering process that takes the set of data views and delivers this minimum relational database schema. Achieving this reverse engineering process was the first research breakthrough of the GeneXus project.
Then, building the necessary Knowledge Base over it has been a successful, ongoing effort.
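A very reduced sketch of the reverse-engineering idea: starting from the functional dependencies readable off a data view, group each set of dependent attributes under its determinant to obtain candidate normalized tables. Real normalization (and GeneXus' actual algorithm) is far more involved; the dependency list below is just what the invoice view suggests.

```python
def tables_from_dependencies(deps):
    """deps: {determinant key (str or tuple): [dependent attributes]}
    -> {key: full column list} as candidate normalized tables."""
    schema = {}
    for key, attrs in deps.items():
        key_cols = list(key) if isinstance(key, tuple) else [key]
        schema[key] = key_cols + attrs
    return schema

# Functional dependencies implied by the invoice view:
deps = {
    "Invoice_Code": ["Invoice_Date", "Customer_Code"],
    "Customer_Code": ["Customer_Name", "Customer_Address"],
    ("Invoice_Code", "Product_Code"): ["Quantity_Sold", "Sales_Price"],
    "Product_Code": ["Product_Description"],
}

for key, columns in tables_from_dependencies(deps).items():
    print(key, "->", columns)
```

From a single user view, four candidate tables emerge (invoices, customers, invoice lines, products): redundancy such as repeating Customer_Name on every invoice disappears, which is the "minimum relational schema" the text refers to.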
EXTERNAL MODEL AND KNOWLEDGE BASE

In general, Knowledge Bases are associated with a set of inference mechanisms and contain general rules [5] that are independent from any particular application. When describing the reality of the object users, descriptions are stored in the External Model. The system automatically captures all the knowledge contained in the External Model and systematizes it, adding it to the Knowledge Base. Additionally, over this knowledge, the system logically infers a set of results that help make future inferences more efficient.

GeneXus works permanently over the Knowledge Base. All the knowledge in the Knowledge Base is equivalent to that of the External Model (a subset of the Knowledge Base), as it consists of the External Model, rules and inference mechanisms that are independent from this External Model, and a set of other elements automatically inferred from it. The developer can change the External Model by changing objects of the users' reality. Those changes will automatically propagate to all the elements that need it, such as other elements of the Knowledge Base, the database and the application programs. However, the developer can't directly change any element that doesn't belong to the External Model.

This work focuses exclusively on the External Model (the What). It refrains from dealing with the Knowledge Base that contains and maintains it, which constitutes the How: a complex set of mathematical and logical tools that are the basis of the inference processes and of the intermediate results used to increase the efficiency of those inferences. Knowledge about the How is not necessary to successfully use GeneXus.

What's more important: the What or the How? Neither works without the other, but if the What is not correct, everything is wrong.
Something very important can be inferred from the above, and it goes beyond the current implementation of GeneXus: ALL the knowledge is contained in the External Model. Because of that, we could support the Knowledge Base in a completely different way tomorrow, and our customers' knowledge would still be reusable without any problems at all.

In sum: the Knowledge Base and its associated inference mechanisms constitute a big "inference machine." A good example of this is that, given a data view, we can automatically infer the program necessary to manage it.
D-DAY AND PROTOTYPING

Prototyping is a great resource and, if used properly, it avoids an exaggerated dependency on the final test, or even on deployment (D-Day, when everything happens and Murphy's Law especially applies). Because of GeneXus' features, everything can be tested at any time, and every object can be prototyped at the most convenient time. This resource is very useful and a direct consequence of the GeneXus theory; in particular, it's a consequence of the automatic propagation of changes, an important and exclusive GeneXus feature.

GENEXUS BASICS OVERVIEW

Automatic knowledge management: GeneXus is based on a model of the object reality, an External Model that can be automatically managed by computer software. In particular, one that provides:

- Consistency: this model is always consistent.
- Orthogonality: External Model objects are independent from each other. Adding, modifying or deleting an object won't require changing any other object.
- Automatic generation and maintenance of the database and programs.

External Model: the essential knowledge corresponds to an External Model that can't contain any physical or internal elements such as files, tables, entities, entity relationships, indexes or anything else that may be automatically inferred from that External Model.

Integration: GeneXus has a set of proprietary objects that are conceptually integrated, and in the Rocha version it will provide new levels of integration with other modules or products developed by Artech or third parties.

GENEXUS TRENDS

GeneXus is constantly evolving, and its evolutionary trends can be represented along four dimensions: COMPLETENESS, PRODUCTIVITY, UNIVERSALITY and USABILITY.

COMPLETENESS: it measures the percentage of user system code that is automatically generated and maintained by GeneXus. Since 1992, GeneXus' completeness has always been 100%.
This is a fundamental commitment because it involves automatic propagation of changes and, therefore, a dramatic reduction of development and maintenance costs.

UNIVERSALITY: it measures the coverage of the most important platforms available. Currently, GeneXus supports all the active platforms, i.e., those with a large and growing installed base. In this way, GeneXus offers customers great freedom of choice, because they can move their GeneXus-developed applications to another supported platform at a low cost.
PRODUCTIVITY: it measures the increase in GeneXus development productivity compared to manual programming. For many years, GeneXus aimed at increasing development productivity by 500%, which was enough. Today, customers need increasingly complex systems due to new devices and new needs; they especially need systems that meet those needs regardless of complexity, while remaining friendly to users who generally can't be trained. What's more, those systems must be developed in less time because of an ever-shortening time-to-market. In 2004, these objectives were changed to 1000% for version 9 and to 2000% for the new version, code-named Rocha.

USABILITY: it measures ease of use in general terms. This involves technical users, but it also strives to reach more non-technical users and enable them to take advantage of GeneXus. Using GeneXus already implies a great increase in usability, as a GeneXus developer can develop outstanding applications for a given platform without expert skills in that platform, which represents a significant basic level of usability. However, the idea is to push the envelope and enable many more users, with fewer technical prerequisites, to take advantage of GeneXus. The Rocha version will greatly improve usability, and this trend will continue in future versions.

WHAT'S NEXT? IS OUR MODEL WRITTEN IN STONE?

Certainly not; principles are permanent and proven by experience, but we should always look at GeneXus with a critical eye and expand it, so that it lets us describe new cases emerging from reality in the simplest way possible.
BIBLIOGRAPHY AND REFERENCES

[1] ANSI-SPARC: Final report of the ANSI/X3/SPARC DBS-SG relational database task group (ACM portal, citation id= ).

[2] WARNIER-ORR: Warnier, J.D. Logical Construction of Programs. Van Nostrand Reinhold, New York, 1974; Orr, K.T. Structured Systems Analysis. Yourdon Press, Englewood Cliffs, NJ, 1977; Orr, K.T. Structured Systems Development. Prentice Hall PTR, Upper Saddle River, NJ.

[3] JACKSON: Jackson, M.A. Principles of Program Design. Academic Press, London.

[4] CODD: Codd, E.F. A relational model of data for large shared data banks. Communications of the ACM, v.13 n.6, June.

[5] RULES: Regardless of the particular rules associated with one company's model, there are general rules that are always true and are independent from any application. An example is the need for consistency: without going into further detail, it can be said that, as in every science, consistency is the main source of rules. We assume that reality is consistent; therefore, any representation of reality should also be consistent. On the basis of this principle, we can infer many general rules that will enrich our model. One simple but important case is that of referential integrity:

1. Given X, a simple or compound attribute which is the key of table T1, if X also appears in table T2, then for each row of T2 there must be a row of T1 where T1.X = T2.X.
2. Given X, a simple or compound attribute which is the key of table T1, and Y, a simple or compound attribute such that Y = subtype(X), if Y appears in table T2, then for each row of T2 there must be a row of T1 where T1.X = T2.Y.
3. Given Y, a simple or compound attribute that appears in table T2 such that there is no table T1 where Y is the key, nor an X (with Y = subtype(X)) that is a key, Y can only appear in table T2.
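Rule 1 above (plain referential integrity) can be sketched as a check over in-memory tables. Lists of dicts stand in for tables, X is a single attribute for simplicity, and all table contents are hypothetical.

```python
def referential_integrity_ok(t2_rows, t1_rows, x):
    """Rule 1: every value of key attribute x appearing in T2 must exist in T1."""
    t1_keys = {row[x] for row in t1_rows}
    return all(row[x] in t1_keys for row in t2_rows)

customers = [{"Customer_Code": "C1"}, {"Customer_Code": "C2"}]  # T1, key Customer_Code
invoices  = [{"Invoice_Code": 1, "Customer_Code": "C1"}]        # T2 referencing T1
orphans   = [{"Invoice_Code": 2, "Customer_Code": "C9"}]        # dangling reference

print(referential_integrity_ok(invoices, customers, "Customer_Code"))  # True
print(referential_integrity_ok(orphans,  customers, "Customer_Code"))  # False
```

Rule 2 would be the same check with the subtype Y on the T2 side compared against X on the T1 side; rule 3 simply says that an attribute determined by no key belongs to exactly one table.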
CONTENTS

INTRODUCTION
NEW PARADIGM: DESCRIBING INSTEAD OF PROGRAMMING
DATA MODEL
REFERENCE FRAMEWORK
DESCRIPTION OF REALITY
EXTERNAL MODEL AND KNOWLEDGE BASE
GENEXUS BASICS OVERVIEW
GENEXUS TRENDS
BIBLIOGRAPHY AND REFERENCES
Agile Techniques for Object Databases
db4o The Open Source Object Database Java and.net Agile Techniques for Object Databases By Scott Ambler 1 Modern software processes such as Rational Unified Process (RUP), Extreme Programming (XP), and
Chapter 1: Introduction. Database Management System (DBMS) University Database Example
This image cannot currently be displayed. Chapter 1: Introduction Database System Concepts, 6 th Ed. See www.db-book.com for conditions on re-use Database Management System (DBMS) DBMS contains information
QaTraq Pro Scripts Manual - Professional Test Scripts Module for QaTraq. QaTraq Pro Scripts. Professional Test Scripts Module for QaTraq
QaTraq Pro Scripts Professional Test Scripts Module for QaTraq QaTraq Professional Modules QaTraq Professional Modules are a range of plug in modules designed to give you even more visibility and control
Semantic Object Language Whitepaper Jason Wells Semantic Research Inc.
Semantic Object Language Whitepaper Jason Wells Semantic Research Inc. Abstract While UML is the accepted visual language for object-oriented system modeling, it lacks a common semantic foundation with
Extending Data Processing Capabilities of Relational Database Management Systems.
Extending Data Processing Capabilities of Relational Database Management Systems. Igor Wojnicki University of Missouri St. Louis Department of Mathematics and Computer Science 8001 Natural Bridge Road
New Generation of Software Development
New Generation of Software Development Terry Hon University of British Columbia 201-2366 Main Mall Vancouver B.C. V6T 1Z4 [email protected] ABSTRACT In this paper, I present a picture of what software development
Chapter 1: Introduction
Chapter 1: Introduction Database System Concepts, 5th Ed. See www.db book.com for conditions on re use Chapter 1: Introduction Purpose of Database Systems View of Data Database Languages Relational Databases
Autonomic computing: strengthening manageability for SOA implementations
Autonomic computing Executive brief Autonomic computing: strengthening manageability for SOA implementations December 2006 First Edition Worldwide, CEOs are not bracing for change; instead, they are embracing
2. Basic Relational Data Model
2. Basic Relational Data Model 2.1 Introduction Basic concepts of information models, their realisation in databases comprising data objects and object relationships, and their management by DBMS s that
programming languages, programming language standards and compiler validation
Software Quality Issues when choosing a Programming Language C.J.Burgess Department of Computer Science, University of Bristol, Bristol, BS8 1TR, England Abstract For high quality software, an important
bitmedia Access 2007 Basics Entry test Database Basics Entry test Basic database terms What is Access 2007? Tables and indexes
bitmedia Access 2007 Basics Databases such as Access are often considered by some to live in the shadows of the Microsoft Office Package. This is, as we hope to demonstrate in the course of this module,
Automating Software License Management
Automating Software License Management Automating license management saves time, resources, and costs. It also consistently produces high quality data and a documentable method for mapping software licenses
Architecture Artifacts Vs Application Development Artifacts
Architecture Artifacts Vs Application Development Artifacts By John A. Zachman Copyright 2000 Zachman International All of a sudden, I have been encountering a lot of confusion between Enterprise Architecture
How To: Choosing the Right Catalog for Software License Management
Software License Management Guide How To: Choosing the Right Catalog for Software License Management Software License Management tools all rely on a catalog to reference and validate data. In this guide
Applying Agile Methods in Rapidly Changing Environments
Applying Agile Methods in Changing Environments 7/23/2002 1 Applying Agile Methods in Rapidly Changing Environments Peter Kutschera IBM Unternehmensberatung GmbH Am Fichtenberg 1, D-71803 Herrenberg Steffen
Introduction to Service Oriented Architectures (SOA)
Introduction to Service Oriented Architectures (SOA) Responsible Institutions: ETHZ (Concept) ETHZ (Overall) ETHZ (Revision) http://www.eu-orchestra.org - Version from: 26.10.2007 1 Content 1. Introduction
(Refer Slide Time: 01:52)
Software Engineering Prof. N. L. Sarda Computer Science & Engineering Indian Institute of Technology, Bombay Lecture - 2 Introduction to Software Engineering Challenges, Process Models etc (Part 2) This
Storage and Database Collaboration
An AppDynamics Business White Paper Storage and Database Collaboration If you don t have a seat at the table, you re probably on the menu Missouri State Senator Jolie L. Justus IT organizations have been
Maximize strategic flexibility by building an open hybrid cloud Gordon Haff
red hat open hybrid cloud Whitepaper Maximize strategic flexibility by building an open hybrid cloud Gordon Haff EXECUTIVE SUMMARY Choosing how to build a cloud is perhaps the biggest strategic decision
A Visual Language Based System for the Efficient Management of the Software Development Process.
A Visual Language Based System for the Efficient Management of the Software Development Process. G. COSTAGLIOLA, G. POLESE, G. TORTORA and P. D AMBROSIO * Dipartimento di Informatica ed Applicazioni, Università
Semantic Business Process Management Lectuer 1 - Introduction
Arbeitsgruppe Semantic Business Process Management Lectuer 1 - Introduction Prof. Dr. Adrian Paschke Corporate Semantic Web (AG-CSW) Institute for Computer Science, Freie Universitaet Berlin [email protected]
Evaluating OO-CASE tools: OO research meets practice
Evaluating OO-CASE tools: OO research meets practice Danny Greefhorst, Matthijs Maat, Rob Maijers {greefhorst, maat, maijers}@serc.nl Software Engineering Research Centre - SERC PO Box 424 3500 AK Utrecht
Enhancing BPM with Business Rules Increase your business agility by separating management of decision logic from mechanics of business process
white paper Enhancing BPM with Business Rules Increase your business agility by separating management of decision logic from mechanics of business process February 2010»» Summary Organizations in many
Figure 1 shows how such decision logic, when embedded within a workflow, can make the workflow very complex.
Keys to Improving Business Processes and Adopting SOA Strategies for Enabling More Effective IT-Business Collaboration with Business Rule Technology By Theresa O Neil, Vice President of Business Development,
INTRUSION PREVENTION AND EXPERT SYSTEMS
INTRUSION PREVENTION AND EXPERT SYSTEMS By Avi Chesla [email protected] Introduction Over the past few years, the market has developed new expectations from the security industry, especially from the intrusion
The use of Trade-offs in the development of Web Applications
The use of Trade-offs in the development of Web Applications Sven Ziemer and Tor Stålhane Department of Computer and Information Science Norwegian University of Technology and Science {svenz, stalhane}@idi.ntnu.no
Modeling Guidelines Manual
Modeling Guidelines Manual [Insert company name here] July 2014 Author: John Doe [email protected] Page 1 of 22 Table of Contents 1. Introduction... 3 2. Business Process Management (BPM)... 4 2.1.
CRM for the Independent Software Developer
CRM for the Independent Software Developer Selling your software If you are a professional software developer who creates and sells your own s, you are always on the lookout for ways to increase your market
Welcome to the Force.com Developer Day
Welcome to the Force.com Developer Day Sign up for a Developer Edition account at: http://developer.force.com/join Nicola Lalla [email protected] n_lalla nlalla26 Safe Harbor Safe harbor statement under
Advanced Software Test Design Techniques Use Cases
Advanced Software Test Design Techniques Use Cases Introduction The following is an excerpt from my recently-published book, Advanced Software Testing: Volume 1. This is a book for test analysts and test
From Private to Hybrid Clouds through Consistency and Portability
Extending IT Governance From Private to Hybrid Clouds through Consistency and Portability Gordon Haff 2 Executive summary 3 beyond information security 3 from private to public and back again 4 consistency
Ethical Policy for the Journals of the London Mathematical Society
Ethical Policy for the Journals of the London Mathematical Society This document is a reference for Authors, Referees, Editors and publishing staff. Part 1 summarises the ethical policy of the journals
Evaluating Data Warehousing Methodologies: Objectives and Criteria
Evaluating Data Warehousing Methodologies: Objectives and Criteria by Dr. James Thomann and David L. Wells With each new technical discipline, Information Technology (IT) practitioners seek guidance for
Structure of Presentation. The Role of Programming in Informatics Curricula. Concepts of Informatics 2. Concepts of Informatics 1
The Role of Programming in Informatics Curricula A. J. Cowling Department of Computer Science University of Sheffield Structure of Presentation Introduction The problem, and the key concepts. Dimensions
An Oracle White Paper October 2013. Oracle Data Integrator 12c New Features Overview
An Oracle White Paper October 2013 Oracle Data Integrator 12c Disclaimer This document is for informational purposes. It is not a commitment to deliver any material, code, or functionality, and should
Data Dictionary and Normalization
Data Dictionary and Normalization Priya Janakiraman About Technowave, Inc. Technowave is a strategic and technical consulting group focused on bringing processes and technology into line with organizational
Technology WHITE PAPER
Technology WHITE PAPER What We Do Neota Logic builds software with which the knowledge of experts can be delivered in an operationally useful form as applications embedded in business systems or consulted
The Role of Requirements Traceability in System Development
The Role of Requirements Traceability in System Development by Dean Leffingwell Software Entrepreneur and Former Rational Software Executive Don Widrig Independent Technical Writer and Consultant In the
Introduction to Software Paradigms & Procedural Programming Paradigm
Introduction & Procedural Programming Sample Courseware Introduction to Software Paradigms & Procedural Programming Paradigm This Lesson introduces main terminology to be used in the whole course. Thus,
Managing the Cloud as an Incremental Step Forward
WP Managing the Cloud as an Incremental Step Forward How brings cloud services into your IT infrastructure in a natural, manageable way white paper [email protected] Table of Contents Accepting the
THE ENSIGHTEN PROMISE. The Power to Collect, Own and Activate Omni-Channel Data
THE ENSIGHTEN PROMISE The Power to Collect, Own and Activate Omni-Channel Data EXECUTIVE SUMMARY Pure client-side or pure server-side tag management systems (TMS) suffer from critical limitations: The
The Sierra Clustered Database Engine, the technology at the heart of
A New Approach: Clustrix Sierra Database Engine The Sierra Clustered Database Engine, the technology at the heart of the Clustrix solution, is a shared-nothing environment that includes the Sierra Parallel
Data Quality Assessment
Data Quality Assessment Leo L. Pipino, Yang W. Lee, and Richard Y. Wang How good is a company s data quality? Answering this question requires usable data quality metrics. Currently, most data quality
Microsoft Dynamics NAV. Prepayments. White Paper. Date: May 2011
Microsoft Dynamics NAV Prepayments White Paper Date: May 2011 2 Contents SUMMARY... 4 PREPAYMENTS BASIC FUNCTIONALITY... 5 PREPAYMENTS WORKFLOW...5 SETUP FOR PREPAYMENTS...6 SETUP OF G/L ACCOUNTS FOR PREPAYMENTS...6
ORACLE ENTERPRISE DATA QUALITY PRODUCT FAMILY
ORACLE ENTERPRISE DATA QUALITY PRODUCT FAMILY The Oracle Enterprise Data Quality family of products helps organizations achieve maximum value from their business critical applications by delivering fit
An Oracle White Paper November 2010. Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics
An Oracle White Paper November 2010 Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics 1 Introduction New applications such as web searches, recommendation engines,
A Database Re-engineering Workbench
A Database Re-engineering Workbench A project proposal by Anmol Sharma Abstract Data is not always available in the best form for processing, it is often provided in poor format or in a poor quality data
How To Manage Software License Management With An Aspera Catalog
Software License Management Guide How To: Choosing the Right Catalog for Software License Management Software license management tools all rely on an SKU catalog to reference and validate license data.
WHITE PAPER Linux Management with Red Hat Network Satellite Server: Measuring Business Impact and ROI
WHITE PAPER Linux Management with Red Hat Network Satellite Server: Measuring Business Impact and ROI Sponsored by: Red Hat Tim Grieser Randy Perry October 2009 Eric Hatcher Global Headquarters: 5 Speen
(Refer Slide Time 00:56)
Software Engineering Prof.N. L. Sarda Computer Science & Engineering Indian Institute of Technology, Bombay Lecture-12 Data Modelling- ER diagrams, Mapping to relational model (Part -II) We will continue
The Service, The Cloud & The Method: The Connection Points
The Service, The Cloud & The Method: The Connection Points Thomas Erl SOA Systems Inc. Prentice Hall Service-Oriented Computing Series Started in 2003 Text Books are an Official Part of the SOACP Curriculum
Team Avanade Project Total 17 Consultants. Accenture Business Services for Utilities Project Total 3 Consultants
CUSTOMER CASE STUDY Avanade helps Accenture Business Services for Utilities build a unique application Automating a manual system gives this company a strategic advantage now and for the future. Creating
Comparative Rating in the Real-time World
Comparative Rating in the Real-time World S p e c i a l R e p o r t In case you haven t noticed, comparative rating is changing. Real-time rating is now a part of WinRater and soon will become the standard.
Systems analysis is the dissection of a system into its component pieces to study how those component pieces interact and work.
SYSTEMS ANALYSIS Systems analysis is the dissection of a system into its component pieces to study how those component pieces interact and work. We do a systems analysis to subsequently perform a systems
Airline Flight and Reservation System. Software Design Document. Name:
Airline Flight and Reservation System Software Design Document Name: Date: 15.01.2011 Table of Contents 1. Introduction... 3 1.1 Purpose...3 1.2 Scope...3 1.3 Overview...3 2. System Overview... 4 3. System
GETTING REAL ABOUT SECURITY MANAGEMENT AND "BIG DATA"
GETTING REAL ABOUT SECURITY MANAGEMENT AND "BIG DATA" A Roadmap for "Big Data" in Security Analytics ESSENTIALS This paper examines: Escalating complexity of the security management environment, from threats
The Benefits of Data Modeling in Data Warehousing
WHITE PAPER: THE BENEFITS OF DATA MODELING IN DATA WAREHOUSING The Benefits of Data Modeling in Data Warehousing NOVEMBER 2008 Table of Contents Executive Summary 1 SECTION 1 2 Introduction 2 SECTION 2
A very short history of networking
A New vision for network architecture David Clark M.I.T. Laboratory for Computer Science September, 2002 V3.0 Abstract This is a proposal for a long-term program in network research, consistent with the
Foundations of Business Intelligence: Databases and Information Management
Foundations of Business Intelligence: Databases and Information Management Content Problems of managing data resources in a traditional file environment Capabilities and value of a database management
Engineering Process Software Qualities Software Architectural Design
Engineering Process We need to understand the steps that take us from an idea to a product. What do we do? In what order do we do it? How do we know when we re finished each step? Production process Typical
Using In-Memory Computing to Simplify Big Data Analytics
SCALEOUT SOFTWARE Using In-Memory Computing to Simplify Big Data Analytics by Dr. William Bain, ScaleOut Software, Inc. 2012 ScaleOut Software, Inc. 12/27/2012 T he big data revolution is upon us, fed
WHITEPAPER. Managing Design Changes in Enterprise SBM Installations
WHITEPAPER Managing Design Changes in Enterprise SBM Installations By Tom Clement Serena Software, Inc. October 2013 Summary This document explains how to organize your SBM maintenance and development
Workflow Automation Solutions that Work
White Paper Workflow Automation Solutions that Work Case Study - Leveraging the Web to Manage Workflow Copyright 2001 ESX Engineering, Inc. All Rights Reserved. Printed in the United States of America.
IBM InfoSphere Optim Test Data Management
IBM InfoSphere Optim Test Data Management Highlights Create referentially intact, right-sized test databases or data warehouses Automate test result comparisons to identify hidden errors and correct defects
Databases in Organizations
The following is an excerpt from a draft chapter of a new enterprise architecture text book that is currently under development entitled Enterprise Architecture: Principles and Practice by Brian Cameron
Chapter 2. Data Model. Database Systems: Design, Implementation, and Management, Sixth Edition, Rob and Coronel
Chapter 2 Data Model Database Systems: Design, Implementation, and Management, Sixth Edition, Rob and Coronel 1 In this chapter, you will learn: Why data models are important About the basic data-modeling
Foundations of Business Intelligence: Databases and Information Management
Chapter 6 Foundations of Business Intelligence: Databases and Information Management 6.1 2010 by Prentice Hall LEARNING OBJECTIVES Describe how the problems of managing data resources in a traditional
Managing Big Data with Hadoop & Vertica. A look at integration between the Cloudera distribution for Hadoop and the Vertica Analytic Database
Managing Big Data with Hadoop & Vertica A look at integration between the Cloudera distribution for Hadoop and the Vertica Analytic Database Copyright Vertica Systems, Inc. October 2009 Cloudera and Vertica
Cloud CRM. Scalable solutions for enterprise deployment
Cloud CRM Scalable solutions for enterprise deployment Simplicity in a complex world Finding, attracting, winning and retaining customers is the lifeblood of every business. But building a scalable, integrated
Maximizing the ROI Of Visual Rules
Table of Contents Introduction... 3 Decision Management... 3 Decision Discovery... 4 Decision Services... 6 Decision Analysis... 11 Conclusion... 12 About Decision Management Solutions... 12 Acknowledgements
Introducing Formal Methods. Software Engineering and Formal Methods
Introducing Formal Methods Formal Methods for Software Specification and Analysis: An Overview 1 Software Engineering and Formal Methods Every Software engineering methodology is based on a recommended
16.1 MAPREDUCE. For personal use only, not for distribution. 333
For personal use only, not for distribution. 333 16.1 MAPREDUCE Initially designed by the Google labs and used internally by Google, the MAPREDUCE distributed programming model is now promoted by several
CONTENT STORE SURVIVAL GUIDE
REVISED EDITION CONTENT STORE SURVIVAL GUIDE THE COMPLETE MANUAL TO SURVIVE AND MANAGE THE IBM COGNOS CONTENT STORE CONTENT STORE SURVIVAL GUIDE 2 of 24 Table of Contents EXECUTIVE SUMMARY THE ROLE OF
Excerpts from Chapter 4, Architectural Modeling -- UML for Mere Mortals by Eric J. Naiburg and Robert A. Maksimchuk
Excerpts from Chapter 4, Architectural Modeling -- UML for Mere Mortals by Eric J. Naiburg and Robert A. Maksimchuk Physical Architecture As stated earlier, architecture can be defined at both a logical
Top 3 Reasons To Outsource Product Development By Ralph Paul Director of Product Engineering MPR Product Development
Top 3 Reasons To Outsource Product Development By Ralph Paul Director of Product Engineering MPR Product Development Copyright 2014 MPR Associates, Inc. All Rights Reserved In today s environment, almost
Foundations for Systems Development
Foundations for Systems Development ASSIGNMENT 1 Read this assignment introduction. Then, read Chapter 1, The Systems Development Environment, on pages 2 25 in your textbook. What Is Systems Analysis and
BUSINESS INTELLIGENCE. Keywords: business intelligence, architecture, concepts, dashboards, ETL, data mining
BUSINESS INTELLIGENCE Bogdan Mohor Dumitrita 1 Abstract A Business Intelligence (BI)-driven approach can be very effective in implementing business transformation programs within an enterprise framework.
Ergon Workflow Tool White Paper
Ergon Informatik AG Kleinstrasse 15 CH-8008 Zürich Phone +41 1 268 89 00 Fax +41 1 261 27 50 www.ergon.ch Ergon Workflow Tool White Paper Version 1.1, August 14, 2002 Andreas Fleischmann Copyright 2004,
Enterprise Digital Identity Architecture Roadmap
Enterprise Digital Identity Architecture Roadmap Technical White Paper Author: Radovan Semančík Date: April 2005 (updated September 2005) Version: 1.2 Abstract: This document describes the available digital
