Legacy Software Reengineering. Frank Weil, Ph.D. UniqueSoft, LLC
UniqueSoft is a provider of next-generation software development tools and services specializing in modernizing legacy software using highly automated tools and techniques to achieve superior results. UniqueSoft helps its clients through the various phases of a major transformation of their legacy systems, including analyzing the full system, formulating a low-risk and high-value modernization strategy, and executing on the elements of that strategy. In this white paper, we describe the UniqueSoft process and tool for reengineering client software systems to create a modern, easy-to-maintain, and high-quality system in a cost-effective manner.

Contents
1. Overview
   - Types of Reengineering Projects
   - UniqueSoft Legacy Reengineering Capabilities
   - Competitive Comparison
   - Example Reengineering Projects
2. The UniqueSoft Reengineering Process
   - Discovery
   - Transformation
   - Instantiation
3. Scalability
4. Summary
1. Overview

UniqueSoft is a provider of next-generation software development tools and services specializing in modernizing legacy software using highly automated tools and techniques to achieve superior results. UniqueSoft helps its clients through the various phases of a major transformation of their legacy systems, including analyzing the full system, formulating a low-risk and high-value modernization strategy, and executing on the elements of that strategy.

In a general sense, reengineering legacy software is the process of improving the nonfunctional attributes of legacy software without changing its external behavior. That is, the legacy software is made better or more suitable in some way. However, one desired improvement should not come at the expense of making other software attributes markedly worse. For example, migrating from a mainframe to Linux should not make the resulting code unmaintainable, require a mainframe emulation engine, or leave scattered remnants of the code's mainframe heritage. At best, such an approach is a temporary fix that still leaves the deeper problem: the resulting code is not what one would want to maintain.

An effective reengineering process must go beyond indiscriminately applying a limited and fixed set of changes to legacy code. Instead, the process must allow the full goals of the project to be achieved. For example, translating COBOL to Java should result in code that is not merely correct, but is also maintainable, uses Java language features (inheritance, encapsulation, etc.), and, perhaps more importantly, does not use COBOL idioms (flat memory space, manipulation of data as strings, GOTOs, etc.). Refactoring code to improve maintainability and reduce defects should result in code that is well structured in terms of its features, has dead code eliminated, minimizes replicated code, has a low overall cyclomatic complexity, and so on. Extracting business rules from code should result in rules that are logically grouped and that use the business terminology familiar to the business stakeholders as opposed to the often-impenetrable names used in the source code. This white paper discusses how the UniqueSoft modernization process and tool make these goals possible.

Types of Reengineering Projects

Reengineering projects for legacy software systems typically fall into three broad categories: migration from one platform and/or language to another, correction of defects or deficiencies in the code (e.g., high complexity), or extraction of information such as business rules from the code. Projects often span more than one of these categories, and every legacy-reengineering project has its own unique goals and needs. Categories of projects are summarized in Table 1.

Table 1: Reengineering Project Types and Examples

| Type       | Project Goal             | Example Projects                                            |
|------------|--------------------------|-------------------------------------------------------------|
| Migration  | Platform migration       | Client-server to cloud; mainframe to Linux                  |
| Migration  | Database migration       | MySQL to Oracle                                             |
| Migration  | Language translation     | COBOL to Java; C to C#                                      |
| Correction | Design improvement       | Modularize application to improve maintainability           |
| Correction | Code improvement         | Remove defects and dead code                                |
| Extraction | Business rule extraction | Document insurance application rules as currently implemented |
The overall goal of migration projects is to move the behavior of the software from one enabling technology to another. For example, the legacy system may have a client-server architecture and the goal is to make it cloud enabled; it may currently be on a mainframe and the goal is to move it to a Linux-based server; it may have originally been built with a MySQL database and the goal is to move it to a high-performance Oracle implementation; or it may be in an older language such as COBOL and the goal is to move it to a more modern language such as Java. The common theme is that the goal is not to change the application behavior, but rather to change how the application is written and deployed.

The overall goal of correction projects is to correct (or improve) some aspect of the software. This improvement generally falls into one of two broad groups. The first group covers legacy systems that are functionally correct but need design improvements, such as when the software is difficult to maintain or enhance. Typical improvements are to restructure the code to reduce its complexity, to improve its modularity (e.g., to group the code implementing a feature into a single module), to reduce the coupling between modules, and to replace replicated code with calls to a single function.

The second group of corrections covers legacy systems that require quality improvements. While some improvements in this category are straightforward, such as the removal of dead code, others require complex analysis of the software to discover locations in the code that are potentially incorrect. This is not a solvable problem in general because a tool can't know what the software is supposed to do. A tool can, however, point out two types of suspicious coding. First, analysis can locate anti-patterns in the code. For example, potentially incorrect code may include decision statements in which some values are not covered by branches, calls to external library functions which do not check the return value, or use of unsafe functions such as the C strcat routine. Even the presence of dead code may in itself indicate an error in the code. Second, a tool can statically examine the control and data flows of the system to locate potentially incorrect behavior. For example, analysis can locate use of a variable before it is assigned a value, data that is created and then discarded, or places where an index variable has a value outside of the array bounds.
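As a hedged illustration (the names are invented for the example and are not the output of any particular analyzer), the following Java fragment packs several of these suspicious patterns into a few lines:

```java
// Hypothetical fragment illustrating the anti-patterns described above.
import java.io.File;

class SuspiciousCode {
    static String describe(int level) {
        switch (level) {
            case 0: return "guest";
            case 1: return "user";
        }
        return null; // decision values other than 0 and 1 are not covered by any branch
    }

    static void purge(String path) {
        new File(path).delete();  // return value ignored: a failed delete goes unnoticed
        String status = "purged"; // value created and then immediately discarded
        status = "done";
        System.out.println(status);
    }
}
```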
The overall goal of extraction projects involves uncovering information in the legacy system. Two kinds of information can be extracted: intrinsic and extrinsic. Intrinsic information about legacy software relates to what can be extracted purely from the code itself. That is, it is independent of what the code is supposed to do or why. Examples of intrinsic information about legacy software include the number of lines of code, the number of files, the directory structure, what calls are made to external libraries, what databases are accessed, etc. Extrinsic information about legacy software relates to what the code does in the terminology of the stakeholders. For example, one may want to understand what features are implemented in a system, where in the code those features are implemented, what business rules are realized, and how those business rules relate to each other.

All three types of reengineering project are inherently difficult, and that difficulty grows with the size of the legacy software. For example, consider migrating a legacy application to the cloud. Cloud migration poses challenges because cloud applications must be stateless, must not depend on local storage, should scale dynamically, require monitoring and management tools that are cloud-capable, must be in a supported language, and must address additional cybersecurity considerations. Properly migrating local file access, for instance, requires not just knowing where in the code the file accesses are (which is simple to find), but also why each local file access is being done: to store temporary transaction information, to store permanent information, to log the transaction, or for some other purpose. Knowing the intent of the file access provides the information on how the access should be transformed.
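For example, the following minimal Java sketch (the method names are invented, and the AWS SDK v1 client is an illustrative choice, not the tool's actual output) shows how the same kind of legacy file write can be transformed differently once its intent is known:

```java
// Hedged sketch: the same legacy file write is migrated differently depending on intent.
import com.amazonaws.services.s3.AmazonS3;

class MigratedFileAccess {
    private final AmazonS3 s3;
    MigratedFileAccess(AmazonS3 s3) { this.s3 = s3; }

    // Intent: permanent record, so durable object storage replaces the local file.
    void storeStatement(String customerId, String contents) {
        s3.putObject("statements-bucket", customerId + ".txt", contents);
    }

    // Intent: temporary scratch data, so it is kept in memory; no file or S3 call at all.
    String formatInvoice(String raw) {
        StringBuilder scratch = new StringBuilder(raw); // replaces a legacy temp file
        return scratch.reverse().toString();
    }
}
```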
Similar knowledge is also needed for the other types of reengineering projects. For example, one cannot create feature-based modules unless one understands what features are implemented in the code and where they are implemented, and one cannot extract business rules that are meaningful to the business analysts unless the extracted rules use the analysts' terminology.

Once this software intent is uncovered, how that information is used is equally important. One use is for the system analyst to quickly and interactively build an understanding of the important aspects of the system. For example, software maps can be used to graphically represent the information in a compact manner, allowing the analyst to see how features are distributed across the code files, which parts of the code have high complexity, which external resources are used by which applications, etc. The analyst should be able to navigate through these maps to see the information at various levels of abstraction, from the application level down to the relevant individual lines of code.

Another use for the software intent is to drive the transformation of the system. Ideally, one would know exactly where and how to make changes to the system to achieve the project goals. The changes that are made to the system should be driven by program intent, quantitative metrics, and knowledge of the source and target environments. For example, a naïve translation from COBOL to Java typically produces code that is inefficient, bloated, littered with COBOLisms, and completely unmaintainable. Even though the code is in Java syntax, the code will look unnatural to a Java programmer and therefore will be difficult to maintain. Instead, the reengineering process should translate the intent in the COBOL code so that the resulting Java contains appropriate class hierarchies, uses encapsulation, does not rely on manipulation of a flat COBOL-like data space, and uses appropriate native Java libraries.

This discovery and use of program intent requires human expertise and has traditionally been available only through manual rewriting. For large code bases, that option is prohibitively expensive and error prone. As a better solution, a human expert should have automated tool support that acts as a knowledgeable assistant and provides the scalability required to handle very large legacy systems. The tool should be able to do both mundane tasks, such as computing the code complexity, and complex tasks, such as suggesting places in the code that may relate to a feature and applying a selected transformation across relevant parts of the system. UniqueSoft's legacy reengineering tool acts as this knowledgeable assistant. The capabilities of this tool are described in the rest of this paper.

UniqueSoft Legacy Reengineering Capabilities

UniqueSoft applies a high degree of automation throughout the legacy modernization process. The flexibility of the UniqueSoft tool enables the customization of the results to the needs of the project instead of forcing the solution to fit the tool. The tool is scalable in terms of the size of the legacy system to be modernized and the variation in languages and environments. The UniqueSoft reengineering process described here separates reengineering into three phases: Discovery, Transformation, and Instantiation (see Figure 1).
Figure 1: High-level overview of the UniqueSoft reengineering process. Discovery: understand the current state of the legacy code from the perspectives of both its intent and its structure to determine what should be changed. Transformation: iteratively transform the legacy code to make measurable improvements based on the discovery results and on quantitative metrics. Instantiation: output new code from the transformed system so that it uses the native mechanisms of the target environment.

Discovery provides to the stakeholders the information they need about a legacy system in order to make informed transformation decisions. Transformation refactors the code to meet the project goals. Finally, the transformed system is instantiated for the new operating environment, cloud framework, client middleware libraries, language, etc. The UniqueSoft tool suite has at its core a powerful and customizable rules engine built to automate this process.

The UniqueSoft reengineering process and tool are applicable across a spectrum of legacy reengineering projects spanning both embedded and IT systems:

- Transform COBOL (or other languages such as SQL, PL/I, KShell) to maintainable code in a modern language such as Java without being locked into the impenetrable and inefficient code that uses proprietary libraries produced by other translation tools, or create code that is easy to understand and maintain from that originally translated code.
- Extract business rules from legacy code (for documentation or for modernizing to a new system) in a way that reflects the project's actual business processes and terminology.
- Migrate a system to a new technology platform (cloud, JEE, Hadoop, Linux, etc.) while taking full native advantage of the target platform's capabilities.
- Significantly reduce maintenance costs through modularization, clone elimination, dead code elimination, etc.
- Produce test suites that can be used to demonstrate functional equivalence of the legacy software and the modernized software.
- Visualize and explore the legacy software from multiple perspectives (features, metrics, dependencies, data flow, etc.) in terms of the concepts natural to the stakeholders.
- Create a modernization plan and drive the reengineering process based on quantitative metrics instead of basing the work on guesses as to what should be done.
- Reconcile out-of-date documentation with the true functionality of the legacy code.
- Merge code bases to determine what functionality is duplicated, unique, or modified.
- Translate from or to proprietary languages, middleware layers, or libraries for which no off-the-shelf tools exist.
Competitive Comparison

There are multiple ways to approach sustaining and modernizing legacy software systems. The most common options, and the issues they address, are summarized in Table 2, organized from left to right roughly in the order of increasing cost. The issues range from those more typically associated with mainframe software (e.g., obsolete languages) to more universal ones (e.g., costly maintenance). For example, if the only issue is interoperability with external systems, then wrapping mainframe data APIs can be an option. Hardware virtualization emulates the current environment but does not meet any other sustainment goals. Tools for direct translation of the legacy code exist, but use of these tools typically results in unmaintainable software still containing the idioms and style of the legacy language. The system can also be manually rewritten, but that option is almost always forbiddingly expensive and is the start of a new cycle of legacy software. Table 3 lists additional disadvantages of these options.

Table 2: Comparison of Legacy System Options. The options, from left to right in roughly increasing order of cost, are Hardware Virtualization, Scraping/Wrapping, Rehosting, Direct Translation, UniqueSoft Reengineering, and Manual Rewrite. The legacy issues compared are: obsolete hardware, obsolete language, lack of external access (web/mobile), lack of features, obsolete architecture, obsolete platform, lack of documentation, lack of test cases, costly maintenance, and lack of software assurance.

Table 3: Legacy System Option Disadvantages

| Legacy System Option     | Disadvantage                                                                                                                                         |
|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------|
| Hardware Virtualization  | The software is not touched at all, so sustainment and modification costs are not reduced.                                                            |
| Scraping / Wrapping      | The legacy system is largely untouched, so sustainment costs increase (original system plus new software). Data is replicated.                        |
| Rehosting                | The software is not touched at all, so sustainment and modification costs are not reduced.                                                            |
| Direct Translation       | The generated code is typically unmaintainable and does not use the paradigms of the target language. Additional software licenses are often involved for proprietary runtime environments. |
| UniqueSoft Reengineering | Requires some involvement from SMEs.                                                                                                                  |
| Manual Rewrite           | Labor intensive and expensive. Defects are introduced, and the new code is now the legacy code (incorrect documentation, poor design choices, etc.).  |
There are no proprietary runtime libraries or other code required for the code produced using the UniqueSoft tool, and there is no inherent reliance on UniqueSoft services after the reengineering is complete. The client retains 100% ownership of all intellectual property resulting from the reengineering.

Example Reengineering Projects

The UniqueSoft reengineering approach described here has been deployed on projects that cover a variety of the legacy reengineering purposes described above.

- Reengineer a set of applications that process trouble tickets. The applications totaled approximately 175,000 lines of code written in C++, Visual Basic, and SQL stored procedures. The applications were modernized to use Java, SQL, and rules using the FICO Blaze Advisor Rule Engine. Business rules were automatically extracted from the legacy code, rule logic was simplified, dead code was removed, and replicated code segments were consolidated.
- Modernize a financial application from POJO Java to the EMC vFabric framework, including the vFabric Web Server, tc Server, Spring, GemFire, RabbitMQ, SQLFire, PostgreSQL, and Hyperic.
- Automatically migrate the Java-based OpenAM software (Authentication, Authorization, Entitlement, and Federation) to use Amazon Web Services S3 (AWS S3) as a cloud-enabled persistence mechanism.
- Reengineer the code for an insurance suite from Java that had been automatically translated from the original COBOL into maintainable Java using native Java constructs. This suite of applications consisted of over 1 million lines of code. The code was reengineered by refactoring it to use object orientation, eliminating code clones, translating COBOL-like constructs to more natural Java equivalents, and making various other improvements such as removing the execution framework imposed by the original translation tool.
- Reengineer a cellular GPRS Allocator module from C-like C++ to object-oriented C++. The legacy code made extensive use of preprocessor directives and used almost no OO features. The original code was approximately 60,000 lines of C++. The code was refactored to remove dead code; remove cloned code; improve modularity, coupling, cohesion, and testability; and fully use native C++ constructs.
- Reengineer a networking component for maintainability and performance. The legacy code was approximately 110,000 lines of C code. The code was refactored to remove dead code, remove cloned code, and improve modularity, coupling, cohesion, and testability. This project was successfully delivered in approximately three months.
- Reengineer the ETL (Extract-Transform-Load) code for a back-office customer management system. The legacy system consists of multiple millions of lines of code written in several languages. The code was refactored to reduce code size, optimize the processing time, and target a different ETL solution.

The details of the UniqueSoft reengineering process are described in the next section.
2. The UniqueSoft Reengineering Process

The UniqueSoft reengineering process is made up of three main phases: Discovery, Transformation, and Instantiation. Each phase is comprised of a number of steps tailored to the needs of the client. The steps in the overall reengineering process are shown in Figure 2. An overview of the process is provided in this section, and each phase is described in more detail in subsequent sections.

Figure 2: Detailed view of the UniqueSoft reengineering process. The steps shown are Domain Model, Model Extraction, Architecture Extraction, and Feature Extraction (Discovery); Quality Analysis, Diagnosis, Refactoring, Validation, and Test-Case Generation (Transformation); and Architecture Model, Deployment Model, and Code Generation (Instantiation). Each step is marked as fully automatic, semi-automatic, or modeling/manual.

The Discovery phase is comprised of four main steps. A Domain Model is created that decomposes the system into project-centric features and groups of interacting features (also called a feature model). Model Extraction parses the legacy code and converts it into a model that captures its behavior and structure. Many common languages such as COBOL, Java, C++, C, SQL, PL/I, JCL, Assembler, Visual Basic, UML, and KShell are currently supported, and new or custom languages can be readily added. Architecture Extraction derives a formal architectural representation of the system from the legacy artifacts. Feature Extraction annotates the derived model according to the features identified in the Domain Model. At the end of the Discovery phase, the intent of the software has been uncovered through both the intrinsic and extrinsic properties of the software.

The Transformation phase is comprised of five main steps. Quality Analysis computes quality and modularity metrics such as feature diffusion, complexity, identification of dead or cloned code, and entity coupling to provide quantitative guidance to the modernization. Diagnosis of the Quality Analysis results produces a plan for the refactoring to ensure that the transformations result in the desired metrics improvements. Refactoring transformations perform these changes to the design and architecture, leading to better modularity and alignment with the operating environment. Validation tests the extracted and refactored model. This sub-process is repeated until the system meets the project goals. Test-Case Generation generates a regression test suite from requirements or from the legacy system itself.

The metrics computed in Quality Analysis have the additional benefit of providing guiding information for the automatic refactoring done during the Transformation phase. For example, identifying modules with high complexity or features with low modularity provides information on specific areas where the UniqueSoft tool can apply its refactoring transformations. That is, the transformation efforts can be applied where they will demonstrably have the greatest effect.
At the end of the Transformation phase, the system has been changed to have the characteristics that meet the goals of the project.

The Instantiation phase creates the code for the target environment from the transformed model. An Architecture Model is built to realize non-functional requirements such as reliability and performance. A Deployment Model captures how that code is then deployed across the components of the target system. Finally, Code Generation produces the target code. At the end of the Instantiation phase, one has maintainable code that meets the goals of the project and can be deployed natively in the target framework.

This full process can be tailored to meet the needs of each project. For example, detailed code metrics would typically not be produced for a project whose goal is to replace the local file storage in the legacy code with cloud APIs, and creating a new architecture model may not be needed for a project whose goal is to transform business rules from one language to another.

To illustrate the UniqueSoft process and its reengineering tool, screen shots from the steps of a small and relatively simple reengineering project will be shown. The purpose of this project is to translate a video rental application written in COBOL for the mainframe CICS environment to Java that uses the Spring framework. The original application is about 5,000 lines of COBOL that uses an IBM 3270 green-screen GUI and has its interface logic intermingled with its business logic. The project goals are to have an application with a modern look and feel using a browser-based front end, persistent storage through the use of Hibernate, and maintainable Java code free from COBOL-isms. To achieve the goals, the business logic will be identified, the code for the user interface will be extracted, the code for the data access will be extracted, and the code will be transformed into Java that uses the expected native interfaces and constructs.

Discovery

The main purpose of the Discovery phase is to uncover the intent, features, architecture, design, and structure of the legacy system and represent that information in an internal model to be used by the tool. Several steps are performed as part of Discovery. First, the legacy source code is parsed to create an initial model that captures the behavior and structure of the original system. It is this internal model that is transformed by the rest of the reengineering process. This model is also used as the basis for the software maps that are created, such as those shown in Figure 3 and Figure 4.

Figure 3: Example Software Map Showing Code Structure. The software map is shown in the upper center of the screen, with the code on the right and various metrics below. The outer circle represents the whole application, with the COBOL copybooks in the left circle and the application code in the right circles. The third-level circles are the directories, and the smallest circles are the individual files in the application. The user can interactively navigate to the code by clicking within the software map.
The meaningful concepts in the legacy code are also captured as a domain model. What is meaningful to capture depends on the goals of the project. For example, if one of the goals of the project is to transform a client-server application into a cloud application, then local file storage and the category of stored information are important features. For an order-entry application, inventory control and customer support levels may be important features. An example domain model is shown in Figure 5.

A small initial domain model is typically built based on analysis of requirement documents, application documentation, test cases, developer interviews, user interviews, etc. The tool helps the domain expert expand the domain model, so a minimal domain model is initially created covering only those features that are known to be of interest. Additional features are added to the domain model once the analyst sees what parts of the code are still left uncovered and what the tool suggests as new features. The distribution of the features across the code base is also visualized as a software map, as is shown in Figure 6.

The tool uses data mining and topic analysis to discover concepts that may be used as features. Artificial intelligence techniques such as clustering and proximity analysis extend the initial model for greater coverage of the concepts. This process is iterated until the domain model is sufficiently detailed and the code is sufficiently covered for the purposes of the project. Examples of the use of clustering to provide high-level information are shown in Figure 9 and Figure 10.

In addition to understanding the features that are in the system, an analyst will want to identify architectural elements by understanding the relationships between components and modules in the system.

Figure 4: Software Map Showing Identifiers by File. This software map shows how the important identifiers (function names, variable names, etc.) in the system are distributed across the files. The identifiers are important because they correspond to the features captured in the domain model. The table in the lower middle shows the identifiers sorted from highest to lowest frequency of use. Clicking on a table row will show the corresponding code on the right. Clicking in the diagram will show all identifiers used within the selected file.

Figure 5: Example Domain Model (Feature Model). A domain model is shown for the example project. The feature map is in the center window, showing the hierarchy of important features in the software (in this example, the features are data structure, platform, and use cases). On the right is the feature editor that allows the analyst to enter the important concepts.

Figure 6: Feature Distribution over the Code. Features are color coded, and any feature or set of features can be displayed in the software map by selecting (or deselecting) it from the list of features. By selecting a file in the diagram, the code associated with the feature is displayed with colored highlighting.
Data flow diagrams, control flow diagrams (see Figure 8), and call sequence diagrams (see Figure 7) are created automatically to express the architectural information. These diagrams rely on sophisticated code analysis techniques such as partial evaluation and abstract interpretation to understand the behavior of the code. Actual execution of the code is not required. Once entry points into the system are identified, the tool can also provide use cases across the system.

Figure 7: Zoomed View of a Call Graph. A complete call graph can be generated for an application or set of applications. This example shows a portion of the complete call graph. The entry points into the system (that is, the starting points of the call graph) are automatically identified by the tool.

Figure 8: Example Control Flow Diagram. The full control flow diagram of a legacy system can be displayed. This diagram is an efficient representation of all possible flows through the system. The diagram is interactive: an analyst can examine all flows related to an entry point and can drill down into varying levels of detail. The relations of the flows to the functions are highlighted in the detailed views. One can also view all points in the flow that read from or write to selected database tables.

Figure 9: Using a Software Map to Discover Structure in Related Applications. This example shows the scalability of the analysis. The software map on the left shows the relative sizes of a set of related applications spanning over 1,500 files and 1 million lines of code. The software map on the right shows the files grouped by applying clustering analysis.

Figure 10: Software Map Showing Clustering of Identifiers. This software map shows the clustering of different identifiers in the system. The size and positioning of the color-coded circles provide a direct view of the symmetries in the system. For example, the two circles representing the copybooks in the upper left of the diagram indicate that they have a large degree of similarity.
Other representations, such as resource structure matrices, express dependencies between components as well as between inputs and outputs. The dependencies between external resources and code modules can be explored using software maps, as is shown in Figure 13. Based on the domain model and its mapping to the code, code segments focused on selected concepts (that is, the implementation of features of the system) can be selectively displayed. This allows the analyst to visualize how the features are distributed across the system.

If it is appropriate for a project, other types of diagrams can also be displayed. Examples of such diagrams are given below.

- Class and component diagrams present a high-level overview of the data structures and architecture extracted from the legacy code and the allocations of functionality to packages or files (see Figure 13).
- Activity diagrams present application flows, data dependencies, and control dependencies between key components of the legacy code (see Figure 13). For example, the sequence of reads and writes from and to databases in an ETL system allows the detection of anti-patterns and helps in performance optimizations of the legacy code.
- Use case diagrams highlight the identified independent interactions with the legacy system as derived from its entry points.
- Activity diagrams and Use Case Map (UCM) diagrams represent the identified control flow and its abstractions to serve as the starting point of automated test generation for the legacy system.
- Sequence diagrams highlight the details of interactions between components of the legacy system or the low-level functionality of a fragment of the legacy system (see Figure 16).

Figure 13: Information Overlaid onto a Software Map. This software map allows interactive exploration of which database tables are used by which files. The file overview is represented by the bubbles in the center, and the outer ring shows all database tables used by the system. When the cursor hovers over a table, all files that use that table are shown via connected lines. Hovering over a file shows the database tables used by it. Similar diagrams are used to explore other types of resource usage (e.g., external libraries).

Figure 13: Class Diagram Derived from the Code. As part of the architecture discovery, class diagrams can be produced. This example shows the data classes of one file. The relationships between and contents of the classes are an abstracted view of the COBOL data definitions.

Figure 13: Example Activity Diagram. This activity diagram shows the application flow between key components of the legacy code. The diagram can be used to represent the identified control flow and its abstractions. These high-level flows can serve as the starting point of test generation for the legacy system.
- Design Structure Matrices (DSMs) show the modules of the system on both axes. The value in each cell corresponds to the number of write-read dependencies between modules through shared resources. The DSM can be sorted according to the number of dependencies. The scalability of the DSM representation is enabled by the ability to drill down from dependencies between top-level modules to low-level dependencies (see Figure 16).
- File dependencies are shown from the perspective of data types, function calls, and global variables (see Figure 16).
- Call graphs are computed from all potential entry points.
- Resource utilization charts show how different parts of the system use different external resources such as databases.

Figure 16: Example Dependency Graph. The dependency graph shows the number of dependencies (e.g., read-write or write-write access of data structure fields) between files in the system. In this diagram, one can see that the file in the lower left has three dependencies on other code files, but no other code files depend on it.

Figure 16: Example Design Structure Matrices. Design Structure Matrices (DSMs) show the dependencies between code modules. The rows and columns of the DSM list the modules in the system, and each cell shows the number of dependencies between the two modules. By selecting a cell (highlighted in the center), the DSM is shown just for the files in that module. Selecting one of those cells shows the code for the two modules, with the code for the dependencies highlighted.

Figure 16: Example Message Sequence Chart. This diagram is a portion of a Message Sequence Chart (also called a Sequence Diagram). These diagrams are used to highlight the details of interactions between system components or the low-level functionality of a fragment of the system. These diagrams can be used as the basis for test cases at various levels (system, component, etc.).
Transformation

The main goal of the Transformation phase is to develop a new system that has been improved in some way. What that improvement is depends on the individual project, but it could include improved quality attributes (higher modularity, lower complexity, less replicated code), rearchitecting for the cloud, etc. As part of this phase, the quality metrics must be determined both at the beginning of the transformation and then throughout the iterative transformation steps. In this way, it can be verified that the transformations are making the desired improvements. As a whole, metrics improvement typically involves making trade-offs. For example, one may be able to perform additional reductions in code replication at the expense of modularity. The quantitative metrics provide the data needed to make informed decisions.

Figure 17: Example Software Map Showing Complexity. Various derived metrics can be shown in the software maps for the system. In this example, the cyclomatic complexity of the files is shown. A darker bubble indicates a higher complexity for the code file. The complexity in tabular form is shown in the lower center. Statistical complexity information, including highlighted comparisons to reference values, is shown in the lower right.

Figure 18: Software Map Showing Code Replication. Software maps can be used to view the replicated (cloned) code within a system. In the screen shot on top, the red line between the files shows that there is code replication between the two connected files. The tabular views on the bottom provide a list of all code clones and a summary of the replication. The software map on the bottom shows, for a different system of approximately 1 million lines of code spread over 1,500 files, the large amount of replication. This system had approximately 40% of its code replicated, leading to duplicated effort, increased defects, and higher maintenance costs.

Having concrete metrics data allows the Transformation phase to be driven by a quantitative, data-centric strategy. The metrics are compared to reference values and to those from previous iterations to identify the main problem areas in the system. There are many types of metrics that can be computed during the quality analysis and diagnosis steps. These metrics provide views of the code base from several perspectives that are relevant to the reengineering project. Which metrics are computed is based on the specifics of each individual project, but they typically include size and complexity metrics (indicators of how hard a module is to understand, test, and maintain; see Figure 17), dependency metrics (indicators of how resilient a module is to external changes (afferent coupling) or how much a change will affect other modules (efferent coupling)), modularity metrics (indicators of the complexity of the implementation and the likelihood of defects in a module), and redundancy metrics (indicators of how hard a module is to maintain).
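As a rough illustration of the simplest of these measures, the sketch below computes cyclomatic complexity (decision points plus one) by walking a parsed method; the Node interface is a hypothetical stand-in for the tool's internal model, which is not public:

```java
// A minimal sketch, under assumed types, of a cyclomatic complexity computation.
import java.util.List;

class ComplexityMetric {
    interface Node {
        boolean isDecision();      // if, while, for, case, catch, &&, ||
        List<Node> children();
    }

    static int cyclomatic(Node method) {
        return 1 + countDecisions(method);
    }

    private static int countDecisions(Node n) {
        int count = n.isDecision() ? 1 : 0;
        for (Node child : n.children()) count += countDecisions(child);
        return count;
    }
}
```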
In addition to the quality metrics, various other detailed information is derived from the model. The UniqueSoft tool visualizes the results obtained from analyzing the legacy code in a variety of formats. A number of impact, dependency, and cross-reference views are created based on the specifics of each individual project. Typical views are listed below.

- For the quantitative metrics data, the raw values are provided in tabular form, and a diagnosis is provided which highlights values that fall outside of target reference values.
- Replicated code metrics, along with where the code is replicated and the amount of replicated code in each instance. The UniqueSoft tool identifies code that is a direct clone as well as code that is a parameterized clone of other code. See Figure 18 for examples of metrics for replicated code.
- Modularity metrics from a variety of perspectives, including the diffusion of features across the code and the cohesion of the files and methods with respect to a given feature.
- Dependency metrics showing the coupling dependencies between functions through read-write and write-write access of data structure fields.
- Dead code metrics, along with location and size. Two types of dead code can be detected: first, code that can never be executed under any combination of inputs; second, code that is dead because the system no longer uses certain features, entry points, or input values. For example, code may be dead because a product variant is no longer supported but all related code was not previously removed.

Based on the improvement plan created during diagnosis of the metrics, the tool applies the refactoring transformations selected from the reusable refactoring transformation toolbox. Additional transformations are also created to handle project-specific changes, and the transformations can be combined using a scripting language. An example refactoring is shown in Figure 19. As the transformations are applied, a history of the specific transformations performed on the system is recorded. Any change, down to those made to individual lines of code, can be examined in detail, and any change can be rolled back if desired (see Figure 19).

Figure 19: Example Software Refactoring. The refactoring editor and dashboard allow the analyst to make systematic changes across all or portions of the software system. The dashboard (upper center) is used to apply individual transformations. The script editor (upper right) allows for the creation of scripts that apply sequences of transformations. The overlaid shell window allows the user to view the progress of the transformations as they are applied across the code base.

Figure 19: Version Control of Refactoring Transformations. Changes to the source code made by the refactoring transformations are logged into a version control system. The top screen shot shows the history of executed transformations. The bottom screen shows how each change can be inspected down to the level of additions, deletions, and modifications made to individual lines of code. All changes can be automatically rolled back if desired.
Examples of the types of refactoring transformations available in the toolbox include isolating the functionality implementing identified features using partial evaluation, reducing coupling by separating functionality annotated by different features, reducing diffusion by moving code fragments annotated with the same feature close to each other in the program, encapsulating data and behavior implementing the same feature into separate classes, and building class hierarchies by consolidating common behavior. Feature-based refactoring transformations automatically apply to all the locations in the legacy code annotated with the selected features.

Legacy code often does not have an associated test suite or, if it does, the test suite is neither comprehensive nor updated to reflect changes since the program's creation. A viable solution to this issue is possible by taking a different approach to test creation. UniqueSoft can use its reengineering tool to extract the execution traces from the legacy code. These traces can then be automatically transformed into test scenarios that cover the functionality of the legacy code. While this technique does not guarantee correspondence to requirements (such as may exist), it has significant advantages over creating new test scenarios by hand. First, a comprehensive set of test scenarios is created. Second, it is easier to verify that the scenarios are correct than it is to create them in the first place. In addition, if the goal of the testing is to show that no functionality has been changed, then no further test development needs to be done.

Based on the available test suite, the validation step in Transformation is used to ensure that no undesirable changes have been made to the code. For example, transformations can be automatically applied that reduce the cyclomatic complexity of the code. However, it is possible that the transformed code will have different performance characteristics that may be unacceptable in a performance-critical region of the code. In this case, the changes would be backed out. The same transformations could then be applied to a more restricted region of the code. This cycle of measure-plan-refactor-test gives a high degree of control over the transformation process and allows the user to tailor the results to the specific needs of a project.
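To make the trace-based testing concrete, a regression test derived from a recorded legacy execution might look like the following JUnit 5 sketch; RentalService (echoing the video rental example) and the recorded values are invented for illustration:

```java
// A hedged sketch of an equivalence test generated from a recorded legacy trace.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class RentalService {                       // minimal stand-in for the modernized code
    String lateFeeFor(String category, int daysLate) {
        double fee = "NEW_RELEASE".equals(category) ? 1.50 * daysLate : 0.0;
        return String.format(java.util.Locale.ROOT, "%.2f", fee);
    }
}

class LegacyEquivalenceTest {
    @Test
    void lateFeeMatchesLegacyTrace() {
        // Input and expected output captured from a trace of the legacy system.
        RentalService service = new RentalService();
        assertEquals("4.50", service.lateFeeFor("NEW_RELEASE", 3));
    }
}
```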
Instantiation

The main goal of the Instantiation phase is to generate code for a target system that has different design or technology characteristics (e.g., a new language or platform). UniqueSoft's code generation system outputs code that properly uses the target environment and language. This code generation scheme can be used for a wide variety of transformation projects, such as generating optimized code for a system from a design model (e.g., UML to C++), transforming legacy code to a new language (e.g., COBOL to Java), or moving code from one environment or paradigm to another (e.g., client-server Java using local file storage to cloud-based Java using Amazon AWS S3 persistent storage APIs).

The UniqueSoft code generation tools provide a much more powerful framework than one would get with direct translation tools. In direct translation, a language such as COBOL is translated to a new language such as Java by applying a template-based process. That is, every construct (statement, data type, etc.) in the legacy language has a set way of being directly translated into some construct in the new language. For example, a MOVE in COBOL is translated into a very complex copy operation in Java. While this produces syntactically correct target code that has the desired functionality, it has many problems which make the new code almost impossible to maintain. Using direct translation to create Java from COBOL, for example, will create code that does not take advantage of OO features such as class hierarchies, encapsulation, interfaces, exception handling, polymorphism, abstract data types, etc. In addition, the code will have remnants of the original language that are considered poor programming practice in the new language, such as string comparisons of literal "Y"/"N" values instead of using native Booleans.
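A hedged Java sketch of the contrast (the names are invented and are not the output of any particular tool):

```java
// Template-based translation preserves COBOL idioms: flat storage and a "Y"/"N" flag.
class CustRec {
    byte[] ws = new byte[32];                        // emulated COBOL working storage
    String activeFlag() { return new String(ws, 30, 1); }
    boolean isActive() { return "Y".equals(activeFlag()); }
}

// Intent-based reengineering expresses the same record as a natural Java class.
class Customer {
    private boolean active;                          // native boolean replaces the flag
    boolean isActive() { return active; }
}
```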
In contrast, the UniqueSoft code generation system has powerful and extensible analysis and transformation capabilities. The UniqueSoft system is based on a rules engine that applies correctness-preserving rules that transform not just the language syntax, but also the structure of the program from the legacy language to the new language paradigm. For example, in transforming COBOL to Java, the UniqueSoft code generator can move embedded SQL into Hibernate calls, refactor fixed-length character strings into Java idioms, refactor fixed-length numeric data to account for differing precisions (e.g., for currencies with different base units), enhance character handling to account for internationalization (i18n), eliminate GOTO statements, refactor fixed-position parameter passing to use standard Java mechanisms, and refactor code to make use of external library calls.

The code generated in this way will be natural to an engineer familiar with the target language. For example, when transforming COBOL legacy code to modernized Java, native Java types and object hierarchies will be created that reflect the characteristics of the domain. In contrast, the "JOBOL" created by typical translation tools emulates COBOL types by creating a representation of the COBOL memory space in Java, which is an unnatural representation that is almost completely unmaintainable.

A code generation system must also take into account the underlying hardware, the operating system, middleware layers isolating the application from the operating system, inter-process communication mechanisms, process management mechanisms, security frameworks, logging and monitoring frameworks, etc. When targeting a web-based system, the target platform would also contain the app server, databases, persistence layer, data access layer, network communication, messaging, and server and client presentation technologies. Usually, there are a number of choices along each dimension, and often more than one mechanism is required for a given target platform.

Other model-based engineering environments typically provide either library calls that connect the application to the underlying platform or predefined platform targets (usually for a vanilla operating system), possibly allowing a user to customize the targeting by providing expansions for predefined macros. The UniqueSoft transformation system generates code for a target platform using technology and topology mappings. Technology mappings generate code based on specific implementation technologies, while topology mappings generate code for specific configurations of those implementation technologies. Examples of these types of mappings are shown in Figure 20 and Figure 21.
Figure 20: Example Technology Mappings

Figure 21: Example Topology Mappings

Since the mappings are applied automatically by the UniqueSoft code generator, one is not locked into a specific choice of target platform. The code generator automatically creates the corresponding integration code, deployment descriptors, and build artifacts. The code generator can be specialized as required for custom platforms.
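As a hedged illustration of such a technology mapping (the Video entity and the embedded SQL statement are invented for the example), an embedded COBOL SQL lookup might be re-expressed through Hibernate as follows:

```java
// Legacy: EXEC SQL SELECT TITLE INTO :WS-TITLE FROM VIDEO WHERE ID = :WS-ID END-EXEC
import org.hibernate.Session;

@javax.persistence.Entity
class Video {
    @javax.persistence.Id private Long id;
    private String title;
    String getTitle() { return title; }
}

class VideoRepository {
    private final Session session;
    VideoRepository(Session session) { this.session = session; }

    String titleById(long id) {
        Video v = session.get(Video.class, id); // mapped entity replaces host variables
        return v == null ? null : v.getTitle();
    }
}
```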
3. Scalability

Legacy reengineering and modernization projects tend to be large, often comprising many millions of lines of code. In addition, the code tends to be in a variety of programming languages or dialects, including the many variations of COBOL, scripting languages, database query languages embedded in different ways in the different languages, and processor-specific assembly languages. These languages often exhibit vendor-specific variations and inconsistencies (e.g., Micro Focus vs. IBM PL/I, or basic SQL vs. Oracle PL/SQL vs. Sybase SQL/isql vs. SQL*Plus).

UniqueSoft's tool has been designed with scalability in mind, both with respect to the amount of code to be analyzed and manipulated and with respect to the variation in source and/or target languages. The analysis algorithms and refactoring transformations separate the information about the analyzed program into two aspects: global information and local information. Global information affects all parts of the system examined, while local information affects only an individual component of the system. In general, a system is structured into a number of components, and only global information needs to be shared across all components. Even for very large systems, individual components are of limited size. UniqueSoft's tool is structured to allow systems to be analyzed and transformed component by component, typically in a number of iterations, such that each individual component together with the global information about the system can easily be held in memory on modern computer systems. Updating and propagating the global information incrementally from one component to another is efficient. In this manner, the UniqueSoft tool is not limited by the size of the system being analyzed or transformed and can therefore be applied to systems consisting of more than 100 million lines of code.
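A minimal sketch of this component-by-component strategy, with assumed interfaces standing in for the tool's internal model, looks like the following:

```java
// Hedged sketch: one local component model in memory at a time plus a global summary.
import java.util.List;

class ScalableAnalyzer {
    interface GlobalInfo { void merge(Object componentSummary); }
    interface ComponentModel { Object analyze(GlobalInfo global); }
    interface Loader { ComponentModel load(String path); }

    void analyze(List<String> componentPaths, Loader loader, GlobalInfo global) {
        for (String path : componentPaths) {
            ComponentModel local = loader.load(path); // only this component is in memory
            global.merge(local.analyze(global));      // propagate results incrementally
            // 'local' becomes unreachable here, bounding peak memory by one component
        }
    }
}
```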
Scalability with respect to large systems also applies to the visualization of the system and the results of the analysis performed. It is impossible for users to stare at tables with several thousand rows and columns and see meaningful information. For large systems, it is essential that the results are presented in a manner that is understandable by the user. UniqueSoft has strived to make data visualization scalable by presenting results in an interactive manner that allows users to drill down into the information along various dimensions while maintaining the context of the analysis so that the user does not get lost in the details.

4. Summary

The UniqueSoft reengineering process and tool make it possible to transform a legacy software system into a modern asset. Advanced automation and a powerful transformation system enable the advantages of a manual rewrite at a fraction of the cost while avoiding the pitfalls of one-size-fits-all translation tools. Reengineering can be applied across a wide variety of domains to accomplish project goals, including migration of legacy systems, correction of software problems, and extraction of information such as business rules from legacy code. These techniques and tools readily scale to massive projects comprised of many millions of lines of legacy code. The analysis capabilities provide visibility into the legacy system and allow a low-risk reengineering plan to be created based on quantitative metrics.

UniqueSoft, LLC
Corporate Headquarters
1530 E. Dundee Road, Suite 100
Palatine, IL USA
COMP5426 Parallel and Distributed Computing. Distributed Systems: Client/Server and Clusters
COMP5426 Parallel and Distributed Computing Distributed Systems: Client/Server and Clusters Client/Server Computing Client Client machines are generally single-user workstations providing a user-friendly
OWB Users, Enter The New ODI World
OWB Users, Enter The New ODI World Kulvinder Hari Oracle Introduction Oracle Data Integrator (ODI) is a best-of-breed data integration platform focused on fast bulk data movement and handling complex data
How To Choose A Business Intelligence Toolkit
Background Current Reporting Challenges: Difficulty extracting various levels of data from AgLearn Limited ability to translate data into presentable formats Complex reporting requires the technical staff
Finding Business Rules in COBOL Systems
Finding Business Rules in COBOL Systems Use Case for evolveit Scott Hesser 6/1/2013 pg. 1 blackboxit.com Understanding the 1 st step to Modernization Large commercial organizations have been developing
How To Handle Big Data With A Data Scientist
III Big Data Technologies Today, new technologies make it possible to realize value from Big Data. Big data technologies can replace highly customized, expensive legacy systems with a standard solution
Software Engineering. So(ware Evolu1on
Software Engineering So(ware Evolu1on 1 Software change Software change is inevitable New requirements emerge when the software is used; The business environment changes; Errors must be repaired; New computers
Building Effective Dashboard Views Using OMEGAMON and the Tivoli Enterprise Portal
1 IBM Software Group Tivoli Software Building Effective Dashboard Views Using OMEGAMON and the Tivoli Enterprise Portal Ed Woods IBM Corporation 2011 IBM Corporation IBM s Integrated Service Management
Paper 064-2014. Robert Bonham, Gregory A. Smith, SAS Institute Inc., Cary NC
Paper 064-2014 Log entries, Events, Performance Measures, and SLAs: Understanding and Managing your SAS Deployment by Leveraging the SAS Environment Manager Data Mart ABSTRACT Robert Bonham, Gregory A.
CACHÉ: FLEXIBLE, HIGH-PERFORMANCE PERSISTENCE FOR JAVA APPLICATIONS
CACHÉ: FLEXIBLE, HIGH-PERFORMANCE PERSISTENCE FOR JAVA APPLICATIONS A technical white paper by: InterSystems Corporation Introduction Java is indisputably one of the workhorse technologies for application
Cisco Data Preparation
Data Sheet Cisco Data Preparation Unleash your business analysts to develop the insights that drive better business outcomes, sooner, from all your data. As self-service business intelligence (BI) and
IBM Tivoli Composite Application Manager for WebSphere
Meet the challenges of managing composite applications IBM Tivoli Composite Application Manager for WebSphere Highlights Simplify management throughout the Create reports that deliver insight into life
CONDIS. IT Service Management and CMDB
CONDIS IT Service and CMDB 2/17 Table of contents 1. Executive Summary... 3 2. ITIL Overview... 4 2.1 How CONDIS supports ITIL processes... 5 2.1.1 Incident... 5 2.1.2 Problem... 5 2.1.3 Configuration...
Sisense. Product Highlights. www.sisense.com
Sisense Product Highlights Introduction Sisense is a business intelligence solution that simplifies analytics for complex data by offering an end-to-end platform that lets users easily prepare and analyze
Is ETL Becoming Obsolete?
Is ETL Becoming Obsolete? Why a Business-Rules-Driven E-LT Architecture is Better Sunopsis. All rights reserved. The information contained in this document does not constitute a contractual agreement with
ORACLE DATABASE 10G ENTERPRISE EDITION
ORACLE DATABASE 10G ENTERPRISE EDITION OVERVIEW Oracle Database 10g Enterprise Edition is ideal for enterprises that ENTERPRISE EDITION For enterprises of any size For databases up to 8 Exabytes in size.
Chapter 9 Software Evolution
Chapter 9 Software Evolution Summary 1 Topics covered Evolution processes Change processes for software systems Program evolution dynamics Understanding software evolution Software maintenance Making changes
Product Guide. Sawmill Analytics, Swindon SN4 9LZ UK [email protected] tel: +44 845 250 4470
Product Guide What is Sawmill Sawmill is a highly sophisticated and flexible analysis and reporting tool. It can read text log files from over 800 different sources and analyse their content. Once analyzed
JOURNAL OF OBJECT TECHNOLOGY
JOURNAL OF OBJECT TECHNOLOGY Online at www.jot.fm. Published by ETH Zurich, Chair of Software Engineering JOT, 2008 Vol. 7, No. 8, November-December 2008 What s Your Information Agenda? Mahesh H. Dodani,
When to consider OLAP?
When to consider OLAP? Author: Prakash Kewalramani Organization: Evaltech, Inc. Evaltech Research Group, Data Warehousing Practice. Date: 03/10/08 Email: [email protected] Abstract: Do you need an OLAP
Data Integration Checklist
The need for data integration tools exists in every company, small to large. Whether it is extracting data that exists in spreadsheets, packaged applications, databases, sensor networks or social media
The role of integrated requirements management in software delivery.
Software development White paper October 2007 The role of integrated requirements Jim Heumann, requirements evangelist, IBM Rational 2 Contents 2 Introduction 2 What is integrated requirements management?
A Business Process Driven Approach for Generating Software Modules
A Business Process Driven Approach for Generating Software Modules Xulin Zhao, Ying Zou Dept. of Electrical and Computer Engineering, Queen s University, Kingston, ON, Canada SUMMARY Business processes
Software Design. Design (I) Software Design Data Design. Relationships between the Analysis Model and the Design Model
Software Design Design (I) Software Design is a process through which requirements are translated into a representation of software. Peter Lo CS213 Peter Lo 2005 1 CS213 Peter Lo 2005 2 Relationships between
zen Platform technical white paper
zen Platform technical white paper The zen Platform as Strategic Business Platform The increasing use of application servers as standard paradigm for the development of business critical applications meant
The power of IBM SPSS Statistics and R together
IBM Software Business Analytics SPSS Statistics The power of IBM SPSS Statistics and R together 2 Business Analytics Contents 2 Executive summary 2 Why integrate SPSS Statistics and R? 4 Integrating R
Data Migration through an Information Development Approach An Executive Overview
Data Migration through an Approach An Executive Overview Introducing MIKE2.0 An Open Source Methodology for http://www.openmethodology.org Management and Technology Consultants Data Migration through an
The Worksoft Suite. Automated Business Process Discovery & Validation ENSURING THE SUCCESS OF DIGITAL BUSINESS. Worksoft Differentiators
Automated Business Process Discovery & Validation The Worksoft Suite Worksoft Differentiators The industry s only platform for automated business process discovery & validation A track record of success,
Mary E. Shacklett President Transworld Data
Transworld Data Mary E. Shacklett President Transworld Data For twenty-five years, Transworld Data has performed technology analytics, market research and IT consulting on every world continent, including
Contents. Introduction and System Engineering 1. Introduction 2. Software Process and Methodology 16. System Engineering 53
Preface xvi Part I Introduction and System Engineering 1 Chapter 1 Introduction 2 1.1 What Is Software Engineering? 2 1.2 Why Software Engineering? 3 1.3 Software Life-Cycle Activities 4 1.3.1 Software
Qlik UKI Consulting Services Catalogue
Qlik UKI Consulting Services Catalogue The key to a successful Qlik project lies in the right people, the right skills, and the right activities in the right order www.qlik.co.uk Table of Contents Introduction
Base One's Rich Client Architecture
Base One's Rich Client Architecture Base One provides a unique approach for developing Internet-enabled applications, combining both efficiency and ease of programming through its "Rich Client" architecture.
Migrating Lotus Notes Applications to Google Apps
Migrating Lotus Notes Applications to Google Apps Introduction.................................................... 3 Assessment..................................................... 3 Usage.........................................................
The Fastest Way to Parallel Programming for Multicore, Clusters, Supercomputers and the Cloud.
White Paper 021313-3 Page 1 : A Software Framework for Parallel Programming* The Fastest Way to Parallel Programming for Multicore, Clusters, Supercomputers and the Cloud. ABSTRACT Programming for Multicore,
IBM Software Information Management Creating an Integrated, Optimized, and Secure Enterprise Data Platform:
Creating an Integrated, Optimized, and Secure Enterprise Data Platform: IBM PureData System for Transactions with SafeNet s ProtectDB and DataSecure Table of contents 1. Data, Data, Everywhere... 3 2.
P16_IBM_WebSphere_Business_Monitor_V602.ppt. Page 1 of 17
Welcome to the IBM WebSphere Business Monitor presentation as part of the SAP integration workshop. This presentation will give you an introduction to the WebSphere Business Monitor and monitoring over
Getting started with API testing
Technical white paper Getting started with API testing Test all layers of your composite applications, not just the GUI Table of contents Executive summary... 3 Introduction... 3 Who should read this document?...
HP Cloud Service Automation Concepts Guide
HP Cloud Service Automation Concepts Guide Software Version: 4.00 Table of Contents Addressing cloud service management challenges with HP CSA... 2 Requesting cloud services... 2 Designing cloud services...
see >analyze >control >align < WhitePaper > planningit: alfabet s Logical IT Inventory
see >analyze >control >align < WhitePaper > planningit: alfabet s Logical IT Inventory planningit: alfabet s Logical IT Inventory 2 A transparent IT Landscape IT planning takes place in a rapidly changing
Formal Software Testing. Terri Grenda, CSTE IV&V Testing Solutions, LLC www.ivvts.com
Formal Software Testing Terri Grenda, CSTE IV&V Testing Solutions, LLC www.ivvts.com Scope of Testing Find defects early Remove defects prior to production Identify Risks Unbiased opinion When Should Testing
Standard Glossary of Terms Used in Software Testing. Version 3.01
Standard Glossary of Terms Used in Software Testing Version 3.01 Terms Used in the Expert Level Test Automation - Engineer Syllabus International Software Testing Qualifications Board Copyright International
P6 Analytics Reference Manual
P6 Analytics Reference Manual Release 3.2 October 2013 Contents Getting Started... 7 About P6 Analytics... 7 Prerequisites to Use Analytics... 8 About Analyses... 9 About... 9 About Dashboards... 10 Logging
Cache Database: Introduction to a New Generation Database
Cache Database: Introduction to a New Generation Database Amrita Bhatnagar Department of Computer Science and Engineering, Birla Institute of Technology, A 7, Sector 1, Noida 201301 UP [email protected]
SAP Certified Development Professional - ABAP with SAP NetWeaver 7.0
SAP EDUCATION SAMPLE QUESTIONS: P_ABAP_70 SAP Certified Development Professional - ABAP with SAP NetWeaver 7.0 Disclaimer: These sample questions are for self-evaluation purposes only and do not appear
Introducing CXAIR. E development and performance
Search Powered Business Analytics Introducing CXAIR CXAIR has been built specifically as a next generation BI tool. The product utilises the raw power of search technology in order to assemble data for
Language Evaluation Criteria. Evaluation Criteria: Readability. Evaluation Criteria: Writability. ICOM 4036 Programming Languages
ICOM 4036 Programming Languages Preliminaries Dr. Amirhossein Chinaei Dept. of Electrical & Computer Engineering UPRM Spring 2010 Language Evaluation Criteria Readability: the ease with which programs
Data Validation and Data Management Solutions
FRONTIER TECHNOLOGY, INC. Advanced Technology for Superior Solutions. and Solutions Abstract Within the performance evaluation and calibration communities, test programs are driven by requirements, test
Solutions for Quality Management in a Agile and Mobile World
Solutions for Quality Management in a Agile and Mobile World with IBM Rational Quality Management Solutions Realities can stall software-driven innovation Complexities in software delivery compounded by
Database Management. Chapter Objectives
3 Database Management Chapter Objectives When actually using a database, administrative processes maintaining data integrity and security, recovery from failures, etc. are required. A database management
Prophix and Business Intelligence. A white paper prepared by Prophix Software 2012
A white paper prepared by Prophix Software 2012 Overview The term Business Intelligence (BI) is often ambiguous. In popular contexts such as mainstream media, it can simply mean knowing something about
Test Run Analysis Interpretation (AI) Made Easy with OpenLoad
Test Run Analysis Interpretation (AI) Made Easy with OpenLoad OpenDemand Systems, Inc. Abstract / Executive Summary As Web applications and services become more complex, it becomes increasingly difficult
CS Standards Crosswalk: CSTA K-12 Computer Science Standards and Oracle Java Programming (2014)
CS Standards Crosswalk: CSTA K-12 Computer Science Standards and Oracle Java Programming (2014) CSTA Website Oracle Website Oracle Contact http://csta.acm.org/curriculum/sub/k12standards.html https://academy.oracle.com/oa-web-introcs-curriculum.html
TIBCO Spotfire Guided Analytics. Transferring Best Practice Analytics from Experts to Everyone
TIBCO Spotfire Guided Analytics Transferring Best Practice Analytics from Experts to Everyone Introduction Business professionals need powerful and easy-to-use data analysis applications in order to make
MS-40074: Microsoft SQL Server 2014 for Oracle DBAs
MS-40074: Microsoft SQL Server 2014 for Oracle DBAs Description This four-day instructor-led course provides students with the knowledge and skills to capitalize on their skills and experience as an Oracle
Objectives. Distributed Databases and Client/Server Architecture. Distributed Database. Data Fragmentation
Objectives Distributed Databases and Client/Server Architecture IT354 @ Peter Lo 2005 1 Understand the advantages and disadvantages of distributed databases Know the design issues involved in distributed
Analytic Modeling in Python
Analytic Modeling in Python Why Choose Python for Analytic Modeling A White Paper by Visual Numerics August 2009 www.vni.com Analytic Modeling in Python Why Choose Python for Analytic Modeling by Visual
Planning a Successful Visual Basic 6.0 to.net Migration: 8 Proven Tips
Planning a Successful Visual Basic 6.0 to.net Migration: 8 Proven Tips Jose A. Aguilar January 2009 Introduction Companies currently using Visual Basic 6.0 for application development are faced with the
A standards-based approach to application integration
A standards-based approach to application integration An introduction to IBM s WebSphere ESB product Jim MacNair Senior Consulting IT Specialist [email protected] Copyright IBM Corporation 2005. All rights
How To Build A Connector On A Website (For A Nonprogrammer)
Index Data's MasterKey Connect Product Description MasterKey Connect is an innovative technology that makes it easy to automate access to services on the web. It allows nonprogrammers to create 'connectors'
IT EFFICIENCY 25 MARCH 2016. Mainframe Downsizing. Fabrizio Di Peppo Delivery Manager
IT EFFICIENCY 25 MARCH 2016 Mainframe Downsizing Fabrizio Di Peppo Delivery Manager DISCLAIMER Use of this presentation Remarks: This document is a general company presentation. It is for general purposes
Please Note: Temporary Graduate 485 skills assessments applicants should only apply for ANZSCO codes listed in the Skilled Occupation List above.
ANZSCO Descriptions This ANZSCO description document has been created to assist applicants in nominating an occupation for an ICT skill assessment application. The document lists all the ANZSCO codes that
Algorithms, Flowcharts & Program Design. ComPro
Algorithms, Flowcharts & Program Design ComPro Definition Algorithm: o sequence of steps to be performed in order to solve a problem by the computer. Flowchart: o graphical or symbolic representation of
Chapter 8 Software Testing
Chapter 8 Software Testing Summary 1 Topics covered Development testing Test-driven development Release testing User testing 2 Program testing Testing is intended to show that a program does what it is
How To Retire A Legacy System From Healthcare With A Flatirons Eas Application Retirement Solution
EAS Application Retirement Case Study: Health Insurance Introduction A major health insurance organization contracted with Flatirons Solutions to assist them in retiring a number of aged applications that
Oracle Data Integrator 11g New Features & OBIEE Integration. Presented by: Arun K. Chaturvedi Business Intelligence Consultant/Architect
Oracle Data Integrator 11g New Features & OBIEE Integration Presented by: Arun K. Chaturvedi Business Intelligence Consultant/Architect Agenda 01. Overview & The Architecture 02. New Features Productivity,
Team Members: Christopher Copper Philip Eittreim Jeremiah Jekich Andrew Reisdorph. Client: Brian Krzys
Team Members: Christopher Copper Philip Eittreim Jeremiah Jekich Andrew Reisdorph Client: Brian Krzys June 17, 2014 Introduction Newmont Mining is a resource extraction company with a research and development
1 File Processing Systems
COMP 378 Database Systems Notes for Chapter 1 of Database System Concepts Introduction A database management system (DBMS) is a collection of data and an integrated set of programs that access that data.
IBM INFORMATION MANAGEMENT SYSTEMS (IMS ) MIGRATION AND MODERNIZATION - CONVERSION OF HIERARCHICAL DL/1 STRUCTURES TO RDBMS
IBM INFORMATION MANAGEMENT SYSTEMS (IMS ) MIGRATION AND MODERNIZATION - CONVERSION OF HIERARCHICAL DL/1 STRUCTURES TO RDBMS Leverage the technology and operational advantages inherent within the modern
Manage Software Development in LabVIEW with Professional Tools
Manage Software Development in LabVIEW with Professional Tools Introduction For many years, National Instruments LabVIEW software has been known as an easy-to-use development tool for building data acquisition
Karunya University Dept. of Information Technology
PART A Questions 1. Mention any two software process models. 2. Define risk management. 3. What is a module? 4. What do you mean by requirement process? 5. Define integration testing. 6. State the main
FileMaker 14. ODBC and JDBC Guide
FileMaker 14 ODBC and JDBC Guide 2004 2015 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker and FileMaker Go are trademarks of FileMaker,
