Incremental Migration from Enscribe to SQL: The Database Gateway Solution

Dr. Richard Carr, Ph.D.
Carr Scott Software Incorporated, Cupertino, California

Published in Volume 18, Number 4, of the Tandem Connection, the official magazine of the International Tandem Users' Group. Copyright 1997, International Tandem Users' Group.

The Enscribe file system continues to be one of the most reliable and efficient database systems for enterprise computing, but the typical user has a strong desire to move to a more modern database technology. The primary drawbacks of Enscribe are its lack of openness, poor data accessibility, and limits on the size and manageability of the database.

Openness and Standards. Enscribe is clearly limited to the Tandem NSK platform; there are no easy methods for porting applications between Enscribe and other platforms, and no existing tools or systems permit the straightforward combination of data between Enscribe and other database systems.

Database Accessibility. For many years, the only application-level tools for Enscribe were Enform and Enable. The Enform query processor does not support database updates, and both tools are limited to 6530 terminal interfaces. Although a few third-party GUI database tools are available, they pale in comparison with workstation-based tools that operate on SQL/ODBC databases.

Manageability and Limits. Typical database maintenance operations, such as adding a field to a record, are major undertakings; re-partitioning a file requires making the data unavailable for a lengthy operation. The maximum of 16 partitions limits both maximum file size and scalability. The use of Enscribe for massive data storage would be impractical.

A legacy information system is one that significantly resists modification and change (Brodie and Stonebraker, 1995). Most Enscribe-based applications can be described as legacy information systems.
NonStop SQL/MP

For most owners of Enscribe-based systems, the obvious migration target is Tandem's NonStop SQL/MP, which has improved on Enscribe's fault tolerance, scalability, efficiency, and reliability. Migrating to NonStop SQL/MP requires much less reengineering and organizational chaos than any other alternative.

Data Access. Data in a SQL database is universally accessible through ODBC and non-programmer tools such as PC/Mac spreadsheets and word processors. SQL is the universal solution to the data access problem.

Data Management. Databases must change to reflect changing reality. SQL supports adding columns to a table without changing the existing data or affecting existing programs. Programs that access or update a table operate on specific columns, not records; other columns are ignored for access and are preserved through updates. A recent Tandem Connection article (Alleman, 1997) describes the considerable effort required to add a few fields to an Enscribe file with minimum outage (about an hour); the equivalent SQL operation is accomplished in seconds.

About the Author. Dr. Richard Carr is a long-time designer and developer of Tandem-based system software. As head architect of NSK/Guardian development in the early 1980s, Dr. Carr was author of the memory manager, expedited messages, parallel reloading, DSAP, and DCOM. He received Tandem's first software patent for his work on the global update protocol. In later years at Tandem, at the High Performance Research Center and as Technical Director at Tandem Labs, he mixed advanced development with large customer consulting and competitive benchmarking. His last project was RDF/MP, a.k.a. Turbo RDF. In 1995, Dr. Carr founded Carr Scott Software to build the database gateway described in the accompanying article.
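The Data Management point above — that a column can be added without rewriting existing data or disturbing programs that name only the columns they use — can be sketched with a toy example. This is an illustrative sketch using SQLite rather than NonStop SQL/MP, and the table and column names are invented for the illustration.

```python
import sqlite3

# Hypothetical schema standing in for a converted Enscribe file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (acct_no INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")

# An "existing program": it names only the columns it operates on.
def read_balance(conn, acct_no):
    row = conn.execute(
        "SELECT balance FROM account WHERE acct_no = ?", (acct_no,)).fetchone()
    return row[0]

before = read_balance(conn, 1)

# The schema change: add a column. Existing rows take the default;
# the existing program runs unchanged before and after.
conn.execute("ALTER TABLE account ADD COLUMN credit_limit INTEGER DEFAULT 0")

after = read_balance(conn, 1)
assert before == after == 100
```

The same program logic works against both schema versions, which is exactly the property that makes the seconds-long SQL ALTER preferable to an hour-long Enscribe file conversion.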
SQL also supports on-line move and split partition operations, which in Enscribe are causes of lengthy file outages.

Data Manipulation. Programming complex database operations is considerably easier in SQL than in legacy databases. SQL is deservedly famous for its ad-hoc query and update capability. Dealing with massive amounts of data (decision support, data mining, etc.) is a job well suited to SQL. NonStop SQL/MP performs query optimization and then employs parallelism, hash joins, large data transfers, multidimensional data access, etc., to reduce query times by orders of magnitude, without complex programming.

Data Validation. SQL guarantees that each value in the database is consistent with the defined data type and any constraints. Each column in a table may require a data value, have a default value, or default to the NULL value.

Standards-based Programming. It is difficult to obtain programming skills for any legacy system. Those who have such skills are concerned about career development; attracting and training competent programmers is a challenge. On the other hand, talented SQL programmers are relatively easy to hire and retain.

Traditional Database Migration

The traditional migration from a legacy system to SQL requires examining each program that operates on the database, understanding the logic that performs database operations, and replacing that logic with SQL data access statements. Once the programs are rewritten and tested, the legacy files are converted to SQL tables and, all at once, the new SQL-based applications are placed into production and the old legacy applications are retired.

The process may not be technically difficult and the projected cost of programming may be affordable, but many users have studied and abandoned plans to migrate in this fashion, due to the unacceptable risk and impact it would have on the enterprise. The following sections outline some of the problems with a traditional migration.

High Conversion Cost.
There are no known methods of reliably converting Enscribe source programs to SQL using mechanical translators. Studies of other legacy program migrations (e.g., Year 2000) measure the cost of manual conversion in dollars per line of original source code, and SQL conversion is more complex than Year 2000 conversion. It is often the case that a large percentage of programs are functionally stable, and converting them to SQL would not enhance the users' ability to do business. The traditional approach, however, demands converting every program that might be needed after the database conversion.

High Conversion Risk. The legacy system was built incrementally; new programs and upgrades were placed in service individually, and bugs found after introduction could be quickly corrected without major impact. Placing hundreds or thousands of rewritten programs in service all at the same time, even if they are well tested, is an incredibly risky step for any business. Even if the bug rate is quite low, the number of problems may quickly overwhelm the ability of the organization to deal with them. It may be possible to decompose the application into separate collections of files and the programs that operate on them, and convert one collection at a time, but few applications were designed to permit such a decomposition.

Large Project Risks. A large and risky migration project will not be approved if it proposes a complete rewrite of the existing system with no immediate return on investment. The project must promise new function or significant cost savings; thus, the project must incorporate broad logic changes to the application, further increasing the cost and risk. Management of a large project over a long time is hard, and business conditions will change. Since there may be no absolute requirement to migrate the applications, the organization must constantly rejustify why money is being spent on a long-term goal.
Also, large migration projects have a host of additional problems: they require the cooperation of people with an intellectual investment in the legacy system; they open up opportunities to reexamine the entire design of the legacy system; and their true cost and schedule are hard to estimate accurately. These perils of the large migration project are described in Brodie and Stonebraker's book, Migrating Legacy Systems.

Impact on Critical Development. Any major reengineering project will require careful and costly coordination with the ongoing development required by the business. The legacy application must continue to serve the needs of the business, requiring the senior architects to split their attention between ongoing development and the reengineering process.

Lost Opportunities. If the primary goal of the migration is to provide better access to the data, or to program new application function in SQL, realizing the goal is delayed until the reengineering project is completed.

Database Gateways

A database gateway is an agent that insulates the existing application programs from changes that are made to the database. A simple illustration of a database gateway appears in Figure 1.
Figure 1. Database Gateway. (Without a gateway, the legacy system accesses the legacy database directly; with a gateway, the same legacy system reaches a SQL database through the gateway.)

The gateway supports conversion of the physical data from the legacy database to the desired target SQL database. When legacy programs are executed, the gateway emulates the legacy API using data from the SQL database. New programming and standards-based tools can access the SQL database directly. The gateway should support the ability to add new function and to modernize the database structure (such as adding or altering field definitions and normalizing files), without affecting the existing legacy programs. Later sections will describe the construction, operation, and use of an Enscribe-to-SQL gateway, but first let us consider its advantages over traditional migration.

Reduced cost and risk. Although the gateway can eliminate most reprogramming, this may be a minor consideration: even if one eventually reprograms all legacy programs, the gateway can reduce overall project costs. Once the database is converted, reprogramming effort can be concentrated in the most fruitful areas first. New development and performance-critical programs can be addressed first, followed by the other programs in priority order. The user can continually reassess the value of reprogramming the entire application versus leaving seldom-executed programs as-is.

Improved productivity. One can form a SWAT team to focus solely on the migration task; such a team can concentrate on a few programs at a time, reducing the need to coordinate with ongoing development. They will develop skills and methods from the first few programs, which can accelerate the entire process. As parts of the application are reprogrammed and tested, they can be installed on the production system gradually, reducing the immediate risk to a more acceptable level.

Accelerated return on investment. A large portion of the value of a SQL database is its increased accessibility with modern tools and by non-programmers.
The gateway converts the database immediately; its cost is often justified by the accelerated availability of the data.

Escort SQL

Escort SQL is a database gateway that migrates Enscribe application programs and files to NonStop SQL/MP. The gateway has three primary facilities: table mapping, data migration, and Enscribe emulation. Table mapping uses the available information about Enscribe files, typically DDL records and the existing file attributes; the user may specify additional instructions to restructure the target tables, including altering data types, specifying data scrubbing, Year 2000 date widening, or proprietary data conversion, and normalizing the file. The table and indexes, if any, are created, and the mapping information is stored for later use. Data migration accesses the table mapping information and converts the Enscribe data records to load the target SQL tables. Enscribe emulation intercepts all Enscribe operations on mapped SQL tables, accesses the table mapping, and performs equivalent SQL operations.

Gateway Usage Overview

The following figures illustrate the typical steps in using the gateway.

Figure 2. Escort Components. (The Escort CI creates the map database and starts $ZESC, the Escort monitor; the Escort runtime completes the product.)

Figure 2 shows the product components after the initial installation, which is a simple restore of the product files. The CI is the command interface used to manage the gateway; it creates the map database that holds the table mapping information. The runtime is a user library that performs Enscribe emulation. The monitor is a NonStop process pair that performs coordination and caching; it is configured and started by the CI. The gateway components are non-privileged, use only standard Tandem interfaces, do not require a SYSGEN or SUPER.SUPER access, and conform to both standard and Safeguard security.
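The emulation idea — intercept record-level calls and issue equivalent SQL against the mapped table — can be sketched in miniature. This is only an illustrative sketch in Python with SQLite: the class, method names, and schema are invented here; the real runtime intercepts the Guardian file-system procedures inside the application process.

```python
import sqlite3

# Hypothetical interposing runtime: record-at-a-time calls are
# translated into SQL statements on the mapped table.
class GatewayFile:
    def __init__(self, conn, table, key_col, cols):
        self.conn, self.table = conn, table
        self.key_col, self.cols = key_col, cols

    def keyed_read(self, key):
        # Stands in for a keyed read on the legacy file.
        sql = (f"SELECT {', '.join(self.cols)} FROM {self.table} "
               f"WHERE {self.key_col} = ?")
        return self.conn.execute(sql, (key,)).fetchone()

    def write(self, record):
        # Stands in for a record write on the legacy file.
        marks = ", ".join("?" for _ in self.cols)
        self.conn.execute(
            f"INSERT INTO {self.table} VALUES ({marks})", record)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE part (part_no INTEGER PRIMARY KEY, descr TEXT)")

f = GatewayFile(conn, "part", "part_no", ["part_no", "descr"])
f.write((17, "widget"))
assert f.keyed_read(17) == (17, "widget")
```

The application "program" above never sees SQL; it issues the same calls it always did, while the data actually lives in a relational table that other tools can query directly.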
Figure 3. Application Preparation. (The Escort runtime is inserted between the application's language runtime and the file system, which now includes the SQL file system.)

Application preparation is shown in Figure 3. For each application program, an automated process inserts the gateway runtime between the program and Enscribe. This one-time operation takes a few seconds per program. Prepared programs do not require any special handling; they do not need to be SQLCOMPiled, even if they are duplicated, moved, renamed, or copied to another system.

Figure 4. Table Creation and Mapping. (The CI processes a DDL file to create the SQL table and stores the mapping in the map database.)

Figure 4 illustrates processing the DDL to create one or more tables and indexes from an Enscribe record definition. The user can specify a large number of options to customize and restructure the target database, including data type conversion and normalization. Physical attributes (extent sizes, partitioning, etc.) may be carried over from the existing database. The mapping of the original record to the SQL table(s) is stored in the mapping database; the DDL is no longer needed by the gateway. We have created a mapped table, which can be opened by a prepared program.

Figure 5. Data Migration. (The loader reads the Enscribe file and, using the map database, loads the SQL table.)

The loader, shown in Figure 5, uses the information in the mapping database to migrate the data in the Enscribe file to the SQL tables. It automatically performs all the customized data type transformations and normalization specified at table creation. Finally, the prepared applications may be executed; the runtime passes through all operations on ordinary Enscribe files, but emulates operations on mapped SQL tables.

Product Features

Multi-language support. Since Escort emulates the full Enscribe file system, it naturally supports all Tandem languages: COBOL, C, TAL, and PASCAL. No source is required, but the programs must not be stripped. Enform and Enable can access the SQL database with no changes to the Tandem-supplied object code or user queries.
After preparation, the logic of the application is unchanged. Even when debugging at the lowest level, the same machine instructions are executed with the same results. Memory allocation is unchanged; Escort requires no primary, secondary, or extended global memory.

SQL Table Creation and Mapping. DDL descriptions of Enscribe data are processed to create SQL tables and to map Enscribe files to those tables. A simple conversion can be performed rapidly, but major restructuring will require design and experimentation. The gateway converts all structured file types: entry-sequenced, relative, and key-sequenced. Alternate key files are converted to SQL indexes.

Legacy Data Mapping. Most databases have legacy data formats, such as YYMMDD dates. Built-in data types automatically convert legacy data fields to standard SQL data types. The built-in data types can be extended with custom, user-written data types.

Non-Relational Record Mapping. Existing data records that cannot be described by DDL are transformed to a relational form with a custom record mapping. Thus, various forms of compressed data can be mapped to SQL.

Normalization. One-to-many table mappings convert a legacy file to a normalized collection of SQL tables. This is commonly used to map the DDL REDEFINES or OCCURS clauses.

Data Migration. Although SQL provides an efficient bulk data loading facility, it does not support the data transformations and normalization supported by the gateway. Thus, Escort provides an enhanced facility to load the new SQL tables from the Enscribe files. When an Enscribe file has been normalized, many SQL tables are loaded in parallel from a single file. For fallback and data distribution requirements, Enscribe files may be reverse loaded from the mapped SQL tables.
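The YYMMDD conversion mentioned under Legacy Data Mapping amounts to a 100-year window: each two-digit year is assigned to the century that places it inside a chosen 100-year span. A minimal sketch, in Python, of how such a built-in data type might widen a field; the function name and the 1950 pivot are illustrative choices, not taken from the product.

```python
def widen_yymmdd(yymmdd: str, window_start: int = 1950) -> str:
    """Map a legacy 'YYMMDD' value to 'YYYYMMDD' using the
    100-year window beginning at window_start."""
    yy = int(yymmdd[:2])
    century = window_start // 100 * 100
    year = century + yy
    if year < window_start:      # before the window: next century
        year += 100
    return f"{year:04d}{yymmdd[2:]}"

# With a 1950-2049 window: 49 -> 2049, 50 -> 1950.
assert widen_yymmdd("490701") == "20490701"
assert widen_yymmdd("501225") == "19501225"
# A different field (say, a birth date) can use its own window.
assert widen_yymmdd("990101", window_start=1900) == "19990101"
```

The key property is that the choice of window is per field, which matches the article's later point that a birth date and a mortgage maturity date can be windowed differently.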
Application Execution. In prepared applications, the original object code is unchanged and the program executes in the same manner as before. Pathway application startup procedures and batch job execution require no significant change. Enform and Enable continue to operate as before, though the underlying database may be mapped SQL tables.

Hybrid Programming. Performance-critical programs can be modified to use advanced SQL. Native SQL and Enscribe I/O for the same table can coexist in a program. Hybrid programs must be SQLCOMPiled like any other SQL program.

Compatibility

The emulation of Enscribe is nearly complete; all 58 relevant Enscribe procedures are intercepted. Only file checkpointing procedures are omitted. All SQL errors are converted to appropriate Enscribe error codes; the application should see only the relevant Enscribe error codes.

Data validation. Since SQL performs validation on each column value, operations that update the database with invalid data will be rejected; gateway operations that supply invalid data are rejected with an error code. The user may choose to deal with data validation errors in two ways: (1) allow the applications to fail, forcing the programmers to eliminate the source of invalid data, or (2) configure either built-in or custom data-scrubbing routines to convert invalid data to valid data during application execution. Escort analyzes invalid updates and indicates the particular fields in error.

Unstructured access and FAST-IO. Although unstructured I/O is not supported, Escort will correctly execute COBOL and CRE programs that request FAST-IO, but with normal efficiency.

Database Management. Neither FUP nor Enscribe database management can alter file partitioning or alternate key files on a mapped SQL table. The user must use SQLCI or other SQL-based tools to perform such operations. BACKUP and RESTORE have special options and requirements for SQL tables.
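The custom data-scrubbing option above can be pictured as a per-column hook applied as legacy records pass through the gateway, so that invalid values are corrected before SQL validation sees them. A hypothetical sketch; the rule "blank or non-numeric PIC 9 data becomes zero" is an example policy, not the product's built-in behavior.

```python
def scrub_numeric(raw: str) -> int:
    """Example scrubbing rule: treat blank or non-numeric
    legacy numeric data as zero rather than rejecting it."""
    return int(raw) if raw.strip().isdigit() else 0

# Hypothetical configuration: which columns get which scrubber.
scrubbers = {"qty": scrub_numeric}

def scrub_record(record: dict) -> dict:
    """Apply the configured scrubber to each column, passing
    unconfigured columns through unchanged."""
    return {col: scrubbers.get(col, lambda v: v)(val)
            for col, val in record.items()}

assert scrub_record({"qty": "  12", "name": "bolt"}) == {"qty": 12, "name": "bolt"}
assert scrub_record({"qty": "    ", "name": "nut"}) == {"qty": 0, "name": "nut"}
```

Either policy — fail fast and fix the source of bad data, or scrub on the fly — is a deliberate tradeoff; scrubbing keeps applications running but can mask upstream bugs.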
Performance

Excellent performance is a primary criterion for the gateway, since it is designed for migrating high-performance Enscribe-based OLTP applications to SQL. Since Enscribe is one of the most efficient databases in existence, this is not an easy task. Considerable effort has been expended to optimize the emulator; emulation data structures and algorithms are all designed for the utmost speed. There remains, however, a need for the user to measure and manage the performance of the migrated application. The performance effect can be separated into three categories:

Operations not requiring emulation. When operating on files that have not been migrated, the performance effect of the gateway is negligible. Each ordinary Enscribe operation is passed through with no significant overhead.

Initial emulated operations. The gateway compiles dynamic SQL statements when the application program makes requests on a SQL table, but these compilations are cached and reused for all subsequent similar operations. Typical statement compilation times are 0.1 to 0.3 seconds; typical programs compile one or two statements for each file open.

Mainline emulated operations. Gateway performance is equivalent to a direct translation of the program to hand-coded, compiled SQL. Dynamic SQL statements are just as efficient as precompiled SQL statements: once compiled, dynamic and embedded SQL statements are executed in the same fashion.

The requirement to manage performance is the direct result of two basic facts: (1) the gateway must emulate individual Enscribe operations with rather simple SQL statements; it uses sequential cursors to enhance buffering, but does not employ joins or complex set operations; and (2) simple SQL statements use more CPU than equivalent Enscribe operations — where Enscribe performs a simple move of the record, SQL performs column-by-column data transformation and validation.
To improve significantly on the gateway's performance, one cannot just replace simple Enscribe operations with simple SQL statements; some algorithm redesign is required to make use of efficient joins, set operations, and blind updates. As noted above, the gateway supports hybrid programs, so only the performance-critical parts of performance-critical programs need to be altered. Even when performance is an issue, the gateway requires only a minor amount of reprogramming.

Optimizations

When an application accesses SQL tables through the gateway, the gateway employs sophisticated techniques to reduce overhead:

Passthrough Mode. Due to a proprietary stack management scheme, the basic overhead of executing an Escort-prepared program is negligible. Applications that access only Enscribe files through the gateway will see insignificant increased resource usage. Likewise, Escort has no effect on the execution of compiled SQL.

Map caching. When the gateway runtime requires map information stored in the map database, the actual database fetch is eliminated by caching the map information in the monitor process. An OPEN is processed with a single interprocess message and no actual database I/O.
Statement caching. Escort maintains a compiled SQL statement cache with a proprietary tagging algorithm to efficiently eliminate unnecessary recompilation.

Cursor management. Escort carefully manages cursors to obtain the best possible buffering performance from SQL. It also provides methods for the user to customize buffering without changing the application program.

Data compression. Numeric PIC 9 fields and fixed-length ASCII text can be mapped to NUMERIC (i.e., binary) and VARCHAR, respectively. The user cannot detect the difference, but the resulting SQL table may be much smaller than the Enscribe file. This greatly reduces the amount of disk I/O and allows more data to be cached in memory.

Incremental Performance Management

A complete Enscribe-to-SQL migration, using either gateway or traditional technology, may result in a significant increase in CPU utilization, depending, of course, on how intensively the application accesses the database. If an application spends only 10 to 20% of its time performing database operations, the increase may not be significant.

With a gateway, one should take full advantage of incremental migration to manage risk and data migration outages as well as performance. Migrate a single file and measure utilization and response time; make the necessary adjustments for that file before proceeding to the next. The effect of each small change can be monitored and addressed before overall system performance is affected.

Incremental migration also permits the user to make reasonable tradeoffs between hardware and software investment. The simplest solution to increased CPU utilization may be to add CPU resources in the form of a hardware upgrade, but another approach is to reprogram some parts of the application. Where an Enscribe program may need to read and examine every record in order to update a small subset, a SQL set operation is performed efficiently at the disk process level.
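The reprogramming opportunity just described — replacing a record-at-a-time scan with a set-oriented statement — can be sketched concretely. An illustrative sketch with SQLite standing in for NonStop SQL/MP; the schema and values are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acct (acct_no INTEGER PRIMARY KEY, bal INTEGER)")
conn.executemany("INSERT INTO acct VALUES (?, ?)",
                 [(1, 5), (2, 50), (3, 7)])

# Record-at-a-time, as a legacy loop (and its emulation) works:
# read every record, test it, update the qualifying ones singly.
for acct_no, bal in conn.execute("SELECT acct_no, bal FROM acct").fetchall():
    if bal < 10:
        conn.execute("UPDATE acct SET bal = 0 WHERE acct_no = ?", (acct_no,))

# Set-oriented equivalent: one statement, evaluated close to the
# data, touching only the qualifying rows.
conn.execute("UPDATE acct SET bal = 0 WHERE bal < 10")

rows = conn.execute("SELECT bal FROM acct ORDER BY acct_no").fetchall()
assert [r[0] for r in rows] == [0, 50, 0]
```

Both forms produce the same result; the set-oriented form is what the database engine can execute at the disk process level without shipping every record to the application.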
SQL also allows the blind update operation, making it unnecessary to read a record before updating it. With Escort, this sort of performance optimization is made considerably easier because one can reprogram very small parts of a legacy program: performance-critical routines can be hand-coded with compiled SQL, while the others continue to access the data through the gateway.

The Gateway and Year 2000

The past two issues of Tandem Connection presented comprehensive articles on the Year 2000 century rollover problem (Highleyman, 1997; Allen, 1997). The gateway can make Year 2000 reengineering simpler and less risky, and provide a useful database technology upgrade at the same time.

First, built-in data types convert legacy dates in the Enscribe data to millennium-compliant dates in SQL. During the conversion process, the user simply designates certain fields as having, for example, a YYMMDD format. The corresponding SQL field will be of type DATE, including a century. As existing applications read or write the data, it is automatically converted using a 100-year window algorithm. The user may choose a different 100-year window for each field; for example, a birth date might use one window, while a mortgage maturity date might use another. Thus, the underlying database is made millennium compliant while the legacy programs execute as before; this step does not reengineer the programming logic.

Second, the gateway allows the user to define an alternate mapping on each database table. The alternate mapping presents dates with centuries (YYYYMMDD). Programs can then be converted, one by one, to use the new DDL and perform date calculations with centuries. Although this approach does not eliminate all reprogramming, it has three important advantages: (1) It immediately upgrades the database to be millennium compliant and does not wait for all existing applications to deal with YYYYMMDD dates.
(2) It allows programs that do not perform calculations on dates to be left as-is; this may be 80 to 90% of all programs. (3) The remaining programs can be converted incrementally; the most critical can be converted immediately.

Conclusion

Migrating a legacy database to SQL has many advantages, but high cost and risk have prevented the majority of Enscribe database users from making the transition. With the availability of an Enscribe-to-SQL database gateway, Tandem users have a new option that reduces those costs and risks to an acceptable level. This gateway can reduce the migration time from years to months and allows the advantages of SQL to be realized immediately.

Bibliography

Alleman, R., "Performance Challenges in Enscribe File Conversions," Tandem Connection, Vol. 18, No. 3, 1997.
Allen, L., "Testing the Year 2000 Fix," Tandem Connection, Vol. 18, No. 3, 1997.
Brodie, M., and Stonebraker, M., Migrating Legacy Systems: Gateways, Interfaces & the Incremental Approach, Morgan Kaufmann, San Francisco, 1995.
Highleyman, B., "The Year 2000 Virus and Its 10-Step Cure," Tandem Connection, Vol. 18, No. 2, 1997.

Comments or questions about this article may be addressed to the author.
Optimizing Performance Training Division New Delhi Performance tuning : Goals Minimize the response time for each query Maximize the throughput of the entire database server by minimizing network traffic,
Migrate AS 400 Applications to Linux Infinite Corporation White Paper date March 2011 Abstract: This paper is a discussion of how to create platform independence by executing i OS (AS/400) applications
PRODUCT sheet: CA IDMS SERVER r17 CA IDMS Server r17 CA IDMS Server helps enable secure, open access to CA IDMS mainframe data and applications from the Web, Web services, PCs and other distributed platforms.
Module 14: Scalability and High Availability Overview Key high availability features available in Oracle and SQL Server Key scalability features available in Oracle and SQL Server High Availability High
The Real Challenges of Configuration Management McCabe & Associates Table of Contents The Real Challenges of CM 3 Introduction 3 Parallel Development 3 Maintaining Multiple Releases 3 Rapid Development
The Comeback of Batch Tuning By Avi Kohn, Time Machine Software Introduction A lot of attention is given today by data centers to online systems, client/server, data mining, and, more recently, the Internet.
Postgres Plus Advanced Server An Updated Performance Benchmark An EnterpriseDB White Paper For DBAs, Application Developers & Enterprise Architects June 2013 Table of Contents Executive Summary...3 Benchmark
INTERSYSTEMS CACHÉ AS AN ALTERNATIVE TO IN-MEMORY DATABASES David Kaaret InterSystems Corporation INTERSYSTEMS CACHÉ AS AN ALTERNATIVE TO IN-MEMORY DATABASES Introduction To overcome the performance limitations
WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression Sponsored by: Oracle Steven Scully May 2010 Benjamin Woo IDC OPINION Global Headquarters: 5 Speen Street Framingham, MA
SQL Server 2014 Performance Tuning and Optimization 55144; 5 Days; Instructor-led Course Description This course is designed to give the right amount of Internals knowledge, and wealth of practical tuning
Appendix M INFORMATION TECHNOLOGY (IT) YOUTH APPRENTICESHIP PROGRAMMING & SOFTWARE DEVELOPMENT AND INFORMATION SUPPORT & SERVICES PATHWAY SOFTWARE UNIT UNIT 5 Programming & and Support & s: (Unit 5) PAGE
Chapter 6: Physical Database Design and Performance Modern Database Management 6 th Edition Jeffrey A. Hoffer, Mary B. Prescott, Fred R. McFadden Robert C. Nickerson ISYS 464 Spring 2003 Topic 23 Database
Course 55144B: SQL Server 2014 Performance Tuning and Optimization Course Outline Module 1: Course Overview This module explains how the class will be structured and introduces course materials and additional
SQL Server Business Intelligence on HP ProLiant DL785 Server By Ajay Goyal www.scalabilityexperts.com Mike Fitzner Hewlett Packard www.hp.com Recommendations presented in this document should be thoroughly
COSC 304 Introduction to Systems Introduction Dr. Ramon Lawrence University of British Columbia Okanagan firstname.lastname@example.org What is a database? A database is a collection of logically related data for
Cloud Service Model Selecting a cloud service model Different cloud service models within the enterprise Single cloud provider AWS for IaaS Azure for PaaS Force fit all solutions into the cloud service
Published: April 2012 Applies to: SQL Server 2012 Copyright The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication.
Highly Available Mobile Services Infrastructure Using Oracle Berkeley DB Executive Summary Oracle Berkeley DB is used in a wide variety of carrier-grade mobile infrastructure systems. Berkeley DB provides
Server Consolidation with SQL Server 2008 White Paper Published: August 2007 Updated: July 2008 Summary: Microsoft SQL Server 2008 supports multiple options for server consolidation, providing organizations
Reconfigurable Architecture Requirements for Co-Designed Virtual Machines Kenneth B. Kent University of New Brunswick Faculty of Computer Science Fredericton, New Brunswick, Canada email@example.com Micaela Serra
PRODUCT SHEET CA Datacom Server CA Datacom Server Version 14.0 CA Datacom Server provides web applications and other distributed applications with open access to CA Datacom /DB Version 14.0 data by providing
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information
Actian Vector in Hadoop Industrialized, High-Performance SQL in Hadoop A Technical Overview Contents Introduction...3 Actian Vector in Hadoop - Uniquely Fast...5 Exploiting the CPU...5 Exploiting Single
Transworld Data Mary E. Shacklett President Transworld Data For twenty-five years, Transworld Data has performed technology analytics, market research and IT consulting on every world continent, including
Software: Systems and Application Software Computer Software Operating System Popular Operating Systems Language Translators Utility Programs Applications Programs Types of Application Software Personal
Throwing Hardware at SQL Server Performance problems? Think again, there s a better way! Written By: Jason Strate, Pragmatic Works Roger Wolter, Pragmatic Works Bradley Ball, Pragmatic Works Contents Contents
Introduction to Virtual Machines Introduction Abstraction and interfaces Virtualization Computer system architecture Process virtual machines System virtual machines 1 Abstraction Mechanism to manage complexity
Advanced Engineering Forum Vols. 6-7 (2012) pp 82-87 Online: 2012-09-26 (2012) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/aef.6-7.82 Research on Clustering Analysis of Big Data
Virtual Tape Systems for IBM Mainframes A comparative analysis Virtual Tape concepts for IBM Mainframes Mainframe Virtual Tape is typically defined as magnetic tape file images stored on disk. In reality
Review from last time CS 537 Lecture 3 OS Structure What HW structures are used by the OS? What is a system call? Michael Swift Remzi Arpaci-Dussea, Michael Swift 1 Remzi Arpaci-Dussea, Michael Swift 2
CSE 120 Principles of Operating Systems Fall 2000 Lecture 3: Operating System Modules, Interfaces, and Structure Geoffrey M. Voelker Modules, Interfaces, Structure We roughly defined an OS as the layer
Oracle9i Data Warehouse Review Robert F. Edwards Dulcian, Inc. Agenda Oracle9i Server OLAP Server Analytical SQL Data Mining ETL Warehouse Builder 3i Oracle 9i Server Overview 9i Server = Data Warehouse
Operating Systems: Internals and Design Principles Chapter 12 File Management Seventh Edition By William Stallings Operating Systems: Internals and Design Principles If there is one singular characteristic
OLTP Meets Bigdata, Challenges, Options, and Future Saibabu Devabhaktuni Agenda Database trends for the past 10 years Era of Big Data and Cloud Challenges and Options Upcoming database trends Q&A Scope
A New Approach: Clustrix Sierra Database Engine The Sierra Clustered Database Engine, the technology at the heart of the Clustrix solution, is a shared-nothing environment that includes the Sierra Parallel
Performance Counters Technical Data Sheet Microsoft SQL Overview: Key Features and Benefits: Key Definitions: Performance counters are used by the Operations Management Architecture (OMA) to collect data