Patterns for Data Migration Projects
Martin Wagner, Tim Wellhausen

March 17, 2011

Introduction

Data migration is a common operation in enterprise computing. Whenever a new system is introduced and after each merger and acquisition, existing data has to be moved from a legacy system to some, hopefully more suitable, target system. A successful data migration project needs to manage several key challenges that appear in most such projects:

First, you don't know what exactly is in the legacy system. The older the legacy system is, the less likely it is that its original developers are still members of the development team or that they are at least available to support the data migration project. Additionally, even if the legacy system was well documented in the beginning, you often cannot rely on the correctness and completeness of the documentation after years of change.

Second, data quality might be an issue. The quality constraints on the data in the old system may be lower than the constraints in the target system. Inconsistent or missing data entries that the legacy system somehow copes with (or ignores) might cause severe problems in the target system. In addition, the data migration itself might corrupt the data in a way that is not visible to the software developers but only to business users.

Third, it is difficult to get decisions from the business side. Business experts typically already struggle to get their daily work done while, at the same time, working on the requirements for the new system. During the development of the data migration code, many decisions need to be made that may affect the quality and consistency of the migrated data.

Fourth, development time is restricted. The development of the data migration project cannot start before the target system's domain model is well-defined and must be finished before the go-live of the new system. As the main focus typically is on the development of the target system, the data migration project often does not get as many resources as needed.

Fifth, run time is restricted. Large collections of legacy data may take a long time to migrate. During the development of the migration code, a long round-trip cycle makes it difficult to test the data migration code regularly. At go-live, it may not be feasible to stop the legacy system long enough to ensure that the legacy data to migrate is both constant and consistent during the complete migration process.

This paper presents several patterns that deal with these issues:

Develop with Production Data: Use real data from the production system for tests during the development of the migration code.

Migrate along Domain Partitions: Divide and conquer the migration effort by migrating largely independent parts of the domain model one after another.

Measure Migration Quality: Implement code that collects and stores all sorts of information about the outcome of the migration during every run.

Daily Quality Reports: Generate detailed reports about the measured quality of the migrated data and make them available to all affected stakeholders.

In parallel to this paper, Andreas Rüping wrote a collection of patterns with the title Transform! [5]. His patterns cope with the same forces as described here. Among these patterns are:

Robust Processing: To prevent the migration process from halting due to unexpected failures, apply extensive exception handling to cope with all kinds of problematic input data.

Data Cleansing: To prevent the new application from being swamped with useless data right from the start, enhance your transformation processes with data cleansing mechanisms.

Incremental Transformation: Perform an initial data migration a while before the new application goes live. Migrate data that has changed since then immediately before the new application is launched.
The intended audience of this paper are project leads, both on the technical and the business side, and software developers in general who need to get a data migration project done successfully.

Test infrastructure

A data migration effort needs a supporting infrastructure. How you set up such an infrastructure and what alternatives you have is out of the scope of this paper. Rather, the paper assumes the existence of an infrastructure as outlined in figure 1:

Figure 1: Data flow between productive and test systems during data migrations

The final data migration takes place from the legacy system into the new system in the production environment. Once this migration has been successfully done, the project has succeeded. To perform test runs of the data migration at development time, test environments for both the legacy and the new system exist and are available to test the migration code. To set up the tests, legacy data from the production system needs to be copied into the corresponding test system. After a migration test run, the new system can be checked for the outcome of the test.

Running example

Throughout the paper, we will refer to an exemplary migration project to which the patterns are applied. The task at hand is to migrate data about customers and tickets from a legacy system to a new system with a similar yet somewhat different domain model. The legacy system contains just one large database with all information about the customers and the tickets that they bought. For repeated ticket sales to existing customers, new entries were created. The new system splits both customer and ticket data into several tables. Customers are now explicit entities and may have several addresses. For tickets, current and historical information is now kept separately.
Develop with Production Data

Context

You need to migrate data from a legacy system to a new system whose capabilities and domain model differ significantly from the legacy system. The exact semantics of the legacy data are not documented.

Problem

Once the data migration has started on the production environment, the migration code should run mostly flawlessly to avoid migrating inconsistent data into the target system. How do you ensure that the migration code covers all cases of the legacy data?

Forces

The following aspects make the problem difficult:

The legacy system's exact behavior is lost in history. Its developers may not be available anymore or may not remember the exact requirements determining the legacy system's behavior.

All existing states in the data must be revealed. During the development of the migration code, many decisions must be made on how to handle special cases. Missing knowledge about some cases' data must not go unnoticed for a long time. At the least, you should know which cases exist that you may safely ignore.

Wrong assumptions. Unit tests are a good means to verify assumptions about how the migration code should work. Such tests cannot guarantee, however, that these assumptions are correct.

Insufficient test data. Testing the migration code with hand-crafted test data does not guarantee that all corner cases are covered because some corner cases might just not be known.

Solution

From the beginning, retrieve snapshots of the legacy data from the production system and develop and test the migration code using real data.

Try to get a complete snapshot of the legacy data from the production system or, if that is not feasible because of the data size, try to get a significantly large amount of legacy data to cover most corner cases. If the legacy data changes during the development, regularly update your snapshot and always develop against the most current available data. All developers should use the same data set to avoid any inconsistencies.

Whenever your code makes wrong assumptions, the migration is likely to fail. Even though you still have no documentation at hand for the exact semantics of the legacy system, you are informed early on about gaps in your knowledge and therefore in the migration code.

If the amount of legacy data is too huge to work with on a daily basis, try to define a working set that most likely covers all cases. You could reduce the amount of data, for example, by copying only those data sets that have been used (read/modified) in the production system lately.

Production data often contains sensitive information that must not be available to everyone for legal reasons, business reasons, or privacy protection. Examples are credit card information from customers or data that has a high business value such as financial trade data. In such cases, production data needs to be modified (e.g. anonymized) before the development team is allowed to use it.

If you test your migration code on production data, you also get a good feeling for how long the migration might take at go-live.

Consequences

Applying this pattern has the following advantages:

Detecting corner cases. When production data is fed into the migration code from the beginning, it is more likely that all boundary conditions are detected during test runs.

Increasing the understanding of data semantics. Developers improve their understanding of the implicit meaning and the quality of the existing legacy data. If they are forced to run their code with real data, they cannot decide to silently ignore corner cases.

You also need to consider the following liabilities:

Slow progress in the beginning. If there are many corner cases to handle, the development efforts may slow down significantly in the beginning because you cannot as easily concentrate on the most common cases first.

More effort. Taking snapshots of legacy data might involve extra effort for extracting it from the legacy system and loading it into a test system. Setting up such a test system might be a time-consuming task on its own.

Too much data. The complete production data may be too voluminous to be used in test runs. In that case, you need to find out whether you can restrict testing to a subset of the production data. If you Migrate along Domain Partitions, you may be able to restrict test runs to subsets of the full legacy data.
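The Solution above notes that production data may need to be anonymized before developers are allowed to work with it. A minimal sketch of such a masking step follows; the field names and masking rules are illustrative assumptions, not taken from the example system. Hashing the name yields a stable pseudonym, so records that referred to the same customer still match after masking:

```python
import hashlib

def anonymize_customer(record):
    """Return a copy of a legacy customer record that is safe for test systems.

    Field names ("name", "credit_card") are hypothetical examples of
    sensitive data; a real migration would mask whatever fields legal or
    business constraints require.
    """
    masked = dict(record)
    # Replace the name with a stable pseudonym so joins across tables still work.
    digest = hashlib.sha256(record["name"].encode("utf-8")).hexdigest()
    masked["name"] = "cust-" + digest[:8]
    # Blank out payment data entirely; it is never needed for migration tests.
    masked["credit_card"] = None
    return masked

sample = {"id": 42, "name": "Jane Doe", "credit_card": "4111 1111 1111 1111"}
safe = anonymize_customer(sample)
```

Because the pseudonym is derived deterministically, running the masking step twice over two extracts of the same production data produces consistent test data sets for all developers.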
Example

In our example CRM system, we set up jobs triggered by a continuous integration system. The jobs perform the following tasks at the push of a button or every night:

- Dump the legacy database to a file storage and restore the database dump to a test legacy system.
- Set up a clean test installation of the new system with an empty database only filled with master data.
- Run the current version of the migration code on the test systems.
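The nightly job described above might be scripted, for instance, on top of a PostgreSQL toolchain. The database names, file names, and the use of pg_dump/pg_restore are assumptions for illustration; the paper does not prescribe a particular database product or scripting approach:

```python
from subprocess import run

# Assumed database names for the production and test environments.
LEGACY_PROD = "legacy_prod"
LEGACY_TEST = "legacy_test"
NEW_TEST = "new_test"

def nightly_refresh(dry_run=True):
    """Build (and optionally execute) the steps of the nightly migration test.

    With dry_run=True the function only returns the planned commands,
    which is convenient for inspection and testing.
    """
    steps = [
        # 1. Dump the legacy production database to file storage.
        ["pg_dump", "--format=custom", "--file=legacy.dump", LEGACY_PROD],
        # 2. Restore the dump into the legacy test system.
        ["pg_restore", "--clean", "--dbname=" + LEGACY_TEST, "legacy.dump"],
        # 3. Reset the new system's test database to master data only.
        ["psql", "--dbname=" + NEW_TEST, "--file=master_data.sql"],
        # 4. Run the current version of the migration code on the test systems.
        ["python", "run_migration.py", "--source", LEGACY_TEST,
         "--target", NEW_TEST],
    ]
    if not dry_run:
        for cmd in steps:
            run(cmd, check=True)  # abort the job on the first failing step
    return steps

planned = nightly_refresh()  # dry run: only lists the planned steps
```

A continuous integration server would invoke `nightly_refresh(dry_run=False)` on a schedule or at the push of a button, as the example describes.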
Migrate along Domain Partitions

Context

You want to migrate data from a legacy system to a new system. The legacy system's capabilities cannot be matched one-to-one onto the new system, but the domain of the target system resembles the legacy system's domain.

Problem

The migration of a data entry of the legacy system may affect multiple domains of the target system. Moreover, even multiple business units may be affected. The complexity of migration projects can be overwhelming. How can you reduce the complexity of a migration effort?

Forces

The following aspects make the problem difficult:

The legacy system's domain model may be ill-defined. This may be the result of inadequate design or of uncontrolled growth within the system's domain model. It is unclear how data found in the legacy system is structured and interrelated.

Different users have contradicting views on the legacy system. Every business unit employing the legacy system may have a different conceptual model of what the legacy system does. It becomes very hard to analyze the legacy system in a way that provides a consistent view on its domain model.

The technical view on the legacy system is often counter-intuitive. More often than not, the development team only has a view on database tables or similar artifacts of the legacy system. The semantics that business users associate with special states in the raw data are not visible. However, a successful migration relies on eliciting the intricacies of the legacy system.

Solution

Migrate the data along sub-domains defined by the target system.

Introducing a new system allows for defining a consistent domain model that is used across business units [2]. This model can then be partitioned along meaningful boundaries, e.g. along parts used primarily by different business units. The actual migration then takes place sub-domain by sub-domain. For example, in an order management system there may be individual customers, each having a number of orders, and with each order there may be tickets associated with it. It thus makes sense to first migrate the customers, then the orders, and finally the tickets.

For each sub-domain, all necessary data has to be assembled in the legacy system, regardless of the legacy system's domain model. For example, the legacy system could have been modeled such that there was no explicit customer; instead, each order contains customer information. Then, in the first migration step, only the customer information should be taken from the legacy system's order domain. In the second migration step, the remaining legacy order domain data should be migrated.

Consequences

To Migrate along Domain Partitions offers the following advantages:

The data migration becomes manageable. Due to much smaller building blocks, it becomes easier to spot problems. Furthermore, individual migration steps can be handled by more people simultaneously.

Each step of the data migration is testable in isolation. This reduces the overall test effort and may increase the quality of tests if additional integration tests are provided.

Verifying the migration results can start earlier. As soon as a single sub-domain migration is ready, business units affected by this sub-domain can verify the migration results. It is no longer necessary to wait until all migration code is ready.

To Migrate along Domain Partitions also has the following liabilities and shortcomings:

The migration run takes longer. It is very likely that data from the legacy system has to be read and processed multiple times.

Errors in migrations propagate. If there is some problem with the migration of a sub-domain performed in the beginning, many problems in dependent sub-domain migrations will likely result. Thus, the earlier the migration of a sub-domain occurs, the higher its quality must be. This necessitates an iterative approach to implementation and testing.

Redundant information might be introduced. If data of the legacy system is read multiple times, there is the possibility of introducing the same information into the target system multiple times. This may be necessary if distinct artifacts have had the same designator in the legacy system, but it may also be highly problematic. Care has to be taken to avoid any unwanted redundancies in the target system.

Example

In our example CRM system, the following data partitions might be possible:
- Customer base data
- Customer addresses
- Ticket data
- Ticket history

The data migration process first processes all entries to generate the base customer entities in the new system. Then, it runs once more over all entries of the legacy system to create address entries as necessary. In the next step, all current information about sold tickets is migrated with references to the already migrated customer entries. As the last step, historical ticket information is retrieved from the legacy table and written into the new history table.
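The partition order above can be sketched as a sequence of migration functions that share a key mapping, so that later steps can reference the entities created by earlier ones. All field and function names are hypothetical; a real implementation would insert into the new system's database instead of building in-memory structures:

```python
def migrate_customers(legacy_rows, id_map):
    """Step 1: create base customer entities and record their new keys."""
    for row in legacy_rows:
        # In a real run this inserts a customer into the new system and
        # records the generated primary key; here we only track the mapping.
        id_map[("customer", row["cust_id"])] = "C-" + str(row["cust_id"])

def migrate_addresses(legacy_rows, id_map, new_addresses):
    """Step 2: create address entries referencing migrated customers."""
    for row in legacy_rows:
        # Addresses may only reference customers migrated in the prior step.
        owner = id_map[("customer", row["cust_id"])]
        new_addresses.append({"customer": owner, "address": row["address"]})

def run_partitioned_migration(legacy_rows):
    id_map, new_addresses = {}, []
    migrate_customers(legacy_rows, id_map)                 # customer base data
    migrate_addresses(legacy_rows, id_map, new_addresses)  # customer addresses
    # Steps 3 and 4 (ticket data, ticket history) would follow the same scheme,
    # each reading the legacy entries once more.
    return id_map, new_addresses

rows = [{"cust_id": 1, "address": "Main St 1"},
        {"cust_id": 2, "address": "Oak Ave 7"}]
ids, addresses = run_partitioned_migration(rows)
```

Note that the legacy rows are read once per sub-domain, which illustrates the "migration run takes longer" liability discussed above.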
Measure Migration Quality

Context

You need to migrate data from a legacy system to a new system. The legacy system's capabilities cannot be matched one-to-one onto the new system. A staging system with production data is available so that you can Develop with Production Data at any time. The migration code is regularly tested against the production data.

Problem

Resources are limited. Thus, there is often a trade-off between migration quality and costs, leading to potential risks when deploying the migration code. How do you prevent the migration code from transforming data wrongly without anyone noticing?

Forces

The following aspects make the problem difficult:

Some changes cannot be rolled back. Although you may have data backups of the target system available, it may not be easy, or possible at all, to roll back all changes to the target system. You therefore need to know the risk of migrating corrupted data.

Down-times need to be avoided. Even if it is possible to roll back all changes, a migration run often requires all users of both the source and the target system to stop using them. Such down-times may be very costly and therefore need to be avoided.

Know when to stop improving the quality. This rule applies to many migration efforts. Business may not want to achieve 100% quality because it might be cheaper to correct some broken data entries manually or even leave some data in an inconsistent state.

Small technical changes in the migration code often lead to significant changes in the migration quality. Yet, the people designing and implementing the migration code are usually not able to fully assess the implications of their changes.

Solution

In collaboration with business, define metrics that measure the quality of the migrated data and make sure that these metrics are calculated regularly.

Integrate well-designed logging facilities into the migration code. Make sure that the result of each migration step is easily accessible in the target system, e.g. as a special database table. For example, there may be an entry in the database table for each migrated data set, indicating whether it was migrated without any problems, with warnings, or whether some error occurred that prevented the migration.

Add validation tasks to your migration code suite, providing quantitative and qualitative sanity checks of the migration run. For example, check that the number of data sets in the legacy system matches the number of data sets in the new system.

Make sure that at least the most significant aggregated results of each migration test run are archived to allow for comparing the effects of changes in the migration code.

Consequences

To Measure Migration Quality offers the following advantages:

Results of the migration become visible. By checking the logged information, you can quickly see what the migration actually did.

Quick feedback cycles. The effects of each change in the migration code upon the migration quality metrics are visible immediately after the next migration test run. This allows the development team to quickly verify the results of their decisions.

Implicit assumptions on structures in the legacy system are revealed. Whenever a developer makes implicit assumptions on some properties of the legacy system that are not correct, it is very likely that the migration will produce errors or warnings for at least some data sets.

Potential for legacy data clean-up is revealed. Problems that do not come from the migration code itself but from data quality issues in the legacy system will be revealed before the actual migration in the production system. This provides the unique opportunity to clean up the data quality in a holistic fashion.

To Measure Migration Quality also has the following liabilities and shortcomings:

Additional up-front effort necessary. Setting up test suites to measure the quality of the outcome requires additional effort from the beginning.

Challenge to find suitable metrics. You have to be careful to create meaningful metrics. If there is a warning for every single data set, it is very likely that people will ignore all warnings.

Too much transparency. Not everyone may appreciate data quality problems detected by thorough quality analysis in the legacy system. Pointing people to required efforts might cause pressure against the overall migration project, along the lines of "if we have to do this much data cleaning to migrate, it is not worth the effort."
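The quantitative sanity check suggested in the Solution, comparing record counts between the legacy and the new system, might be expressed as a small validation task. The entity names and counts below are made up for illustration; the counts themselves would come from queries such as SELECT COUNT(*) against both systems:

```python
def validate_counts(legacy_counts, target_counts):
    """Compare per-entity record counts and return human-readable findings.

    Both arguments map an entity name to the number of records found in
    the respective system; an empty result list means the check passed.
    """
    findings = []
    for entity, expected in legacy_counts.items():
        actual = target_counts.get(entity, 0)
        if actual != expected:
            findings.append(
                f"{entity}: expected {expected} records, found {actual}"
            )
    return findings

# Hypothetical counts gathered after a migration test run.
issues = validate_counts(
    {"customer": 1000, "ticket": 5400},
    {"customer": 1000, "ticket": 5397},
)
```

Archiving the findings of each test run, as the Solution suggests, makes it easy to see whether a code change improved or degraded the migration quality.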
Example

In our example legacy CRM system, there are no database foreign key relationships between a ticket and the customer the ticket refers to. Instead, user agents were required to fill in the customer's ID in a String field of the ticket table. In the target system, referential integrity is enforced.

The results of the migration are stored in a logging table with the following columns:

- Source value designates the primary key of an entry in the legacy system.
- Target value designates the corresponding primary key of an entry in the target system.
- Result is either OK or ERROR.
- Message gives additional explanations about the migration described by the log entry.

As part of writing the migration code, we check whether the target system's database throws an error about a missing foreign key relationship whenever we try to add a migrated ticket entity to it. If it does, we write an ERROR entry along with a Message telling "Ticket entity misses foreign key to customer. Customer value in legacy system is XYZ." into the migration log table; otherwise, we write an OK entry.
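The logging table of this example can be mimicked in code. The in-memory representation below is an assumption for illustration (a real implementation would insert rows into the migration log table); the Result values and the Message wording follow the example above:

```python
def log_ticket_migration(log, source_key, target_key, missing_customer=None):
    """Append one row to the migration log, mirroring the logging table above.

    If missing_customer is given, the ticket could not be linked to a
    customer in the target system and an ERROR entry is written.
    """
    if missing_customer is not None:
        log.append({
            "source": source_key,
            "target": None,
            "result": "ERROR",
            "message": "Ticket entity misses foreign key to customer. "
                       f"Customer value in legacy system is {missing_customer}.",
        })
    else:
        log.append({
            "source": source_key,
            "target": target_key,
            "result": "OK",
            "message": "",
        })

log = []
log_ticket_migration(log, "T-1", "42")
log_ticket_migration(log, "T-2", None, missing_customer="XYZ")
```

Each migrated ticket thus leaves exactly one row in the log, which is the basis for the aggregated statistics used by Daily Quality Reports.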
Daily Quality Reports

Context

You need to migrate data from a legacy system to a new system. You Measure Migration Quality to get results on the current state of the migration after every test run and are therefore able to control the risk of the data migration. Migration tests are run regularly, e.g. every night in an effort of Continuous Integration [1]. You want to involve business experts in testing the outcome of the data migration early on.

Problem

IT cannot fathom the quality attributes of the domain. Therefore, the decisions defining the trade-offs are made by business units and not by the IT department implementing the migration. However, it is not always easy to get feedback on the effect of some code change onto the migration quality. How can you closely involve business into the data migration effort?

Forces

The following aspects make the problem difficult:

People are lazy. If there is any effort involved in controlling the current state of the migration quality metrics, it is likely that business won't notice any significant changes.

Business has little time to spare. You cannot expect business to analyze and understand measured data about the migration quality in detail.

Solution

After every migration test run, generate a detailed report about the state of the migration quality and send it to dedicated business experts.

To make the reports easily accessible to business, provide aggregated views of the migration results to allow for trend analysis. For example, provide key statistics indicating the number of successfully migrated data sets, the number of data sets migrated with warnings, the number of data sets that could not be migrated due to problems of data quality or the migration code, and the number of those not migrated due to technical errors.

It is important to start sending daily quality reports early enough to be able to quickly incorporate any feedback you get. It is also important not to start these reports too early, to avoid spamming the receivers with mails that they tend to ignore if the reports are not meaningful to them. It is also crucial to find business experts who are willing and interested in checking preliminary migration results.

Make sure that the migrated data is written into a database that is part of a test environment of the new system. Business experts who receive quality reports may then easily check the migrated data in the new system themselves.

Consequences

Daily Quality Reports offer the following advantages:

The state of the migration code becomes highly transparent. If the results of each migration test run are made accessible to business units, they can closely track the progress of the development team charged with the migration.

Easy to obtain feedback. By triggering an automatic migration run on the test environment, it becomes comparatively easy to receive immediate feedback on any changes to the migration code.

Risk is delegated to business. By receiving feedback early and often, business experts are able to evaluate the risk of the data migration effort. If necessary, managers can be involved to decide, for example, whether some quality issues should be ignored or resolved, even if that influences the project plan.

Daily Quality Reports also have the following liabilities and shortcomings:

Efforts necessary for generating reports. Additional up-front effort is required to set up the reporting structures. To ensure comparability between different test runs, the reporting structures have to be defined before the actual migration code gets done.

Political issues. The high degree of transparency might lead to political issues in environments that are not used to it. If corporate culture dictates that projects report no problems, it might be a bad idea to let people from outside the migration development team peek into the current, potentially problematic state.

Possible delays because of data volume. If there is much data to be migrated, the execution time of a full migration test run may be very long. This prevents tight feedback loops.

Regular updates of legacy data may be needed. If the data in the legacy production system changes regularly (for example, because business users clean up the existing legacy data), you may need to also automate, or at least simplify, transferring production data into the migration test system so that business is able to check the effect of their data cleaning on the new system.
Example

To provide daily quality reports in our example system, we add some code at the end of a migration test run that aggregates the number of OK and ERROR entries and reports the numbers such that trend reports can be built using Excel and provided to business people for reporting purposes. If necessary, the generation of high-level trend reports can be implemented as part of the migration code, and a continuous integration server's mail capabilities are used to send the reports to business.

Also, lists are generated that show all source values leading to ERROR entries. These lists are given to user agents to manually correct them and to the IT development team to come up with algorithms that cleverly match lexicographically similar references. After each subsequent test run, the procedure is repeated until a sufficient level of quality is reached.
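The aggregation step of this example might be sketched as follows, reusing the hypothetical log-entry structure from the Measure Migration Quality example. The report layout is an illustrative assumption:

```python
from collections import Counter

def build_daily_report(log_entries):
    """Summarize one migration test run for the daily quality report.

    Each log entry is assumed to carry a "result" of OK or ERROR and the
    "source" key of the legacy entry it describes.
    """
    totals = Counter(entry["result"] for entry in log_entries)
    # Source values of failed migrations, to be handed to user agents
    # for manual correction, as described in the example above.
    errors = [e["source"] for e in log_entries if e["result"] == "ERROR"]
    return {
        "ok": totals["OK"],
        "error": totals["ERROR"],
        "error_sources": errors,
    }

report = build_daily_report([
    {"source": "T-1", "result": "OK"},
    {"source": "T-2", "result": "ERROR"},
    {"source": "T-3", "result": "OK"},
])
```

Archiving one such report per test run yields the time series from which trend charts can be built, e.g. in Excel, as the example suggests.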
Related Work

Despite the importance of data migration projects, there is currently surprisingly little literature on this topic. The pattern collection Transform! by Andreas Rüping [5] contains several patterns for data migration projects that nicely complement the pattern collection presented here. The technical details of data migration efforts have also been described in pattern form by Microsoft [6]. Also, Haller gives a thorough overview of the variants in executing data migrations [4] and the project management tasks associated with them [3].

Acknowledgments

The authors are very thankful to Hugo Sereno Ferreira, who provided valuable feedback during the shepherding process. The workshop participants at EuroPLoP 2010 also provided lots of ideas and suggestions for improvements, most of which are incorporated into the current version. Special thanks also go to Andreas Rüping for his close cooperation after we discovered that we were working on the same topic at the same time.

References

[1] Paul M. Duvall, Steve Matyas, and Andrew Glover. Continuous Integration: Improving Software Quality and Reducing Risk. Addison-Wesley Longman.
[2] Eric J. Evans. Domain-Driven Design: Tackling Complexity in the Heart of Software. Addison-Wesley Longman.
[3] Klaus Haller. Data migration project management and standard software: experiences in Avaloq implementation projects. In Proceedings of the DW2008 Conference.
[4] Klaus Haller. Towards the industrialization of data migration: Concepts and patterns for standard software implementation projects. In Proceedings of CAiSE 2009, LNCS 5565, pages 63-78.
[5] Andreas Rüping. Transform! Patterns for data migration. In Proceedings of the 15th European Conference on Pattern Languages of Programming (EuroPLoP).
[6] Philip Teale, Christopher Etz, Michael Kiel, and Carsten Zeitz. Data patterns. Technical report, Microsoft Corporation.
HOSTING SERVICES AGREEMENT
HOSTING SERVICES AGREEMENT 1 Introduction 1.1 Usage. This Schedule is an addition to and forms an integral part of the General Terms and Conditions, hereafter referred as the "Main Agreement". This Schedule
Gladinet Cloud Backup V3.0 User Guide
Gladinet Cloud Backup V3.0 User Guide Foreword The Gladinet User Guide gives step-by-step instructions for end users. Revision History Gladinet User Guide Date Description Version 8/20/2010 Draft Gladinet
Module 2. Software Life Cycle Model. Version 2 CSE IIT, Kharagpur
Module 2 Software Life Cycle Model Lesson 4 Prototyping and Spiral Life Cycle Models Specific Instructional Objectives At the end of this lesson the student will be able to: Explain what a prototype is.
Hosting Users Guide 2011
Hosting Users Guide 2011 eofficemgr technology support for small business Celebrating a decade of providing innovative cloud computing services to small business. Table of Contents Overview... 3 Configure
Bocada White Paper Series: Improving Backup and Recovery Success with Bocada Enterprise. Benefits of Backup Policy Management
Bocada White Paper Series: Improving Backup and Recovery Success with Bocada Enterprise Why Policy Management Matters... 3 Data Protection Service Management: An Overview... 3 Policy Management s Role
Data Migration for Legacy System Retirement
September 2012 Data Migration for Legacy System Retirement A discussion of best practices in legacy data migration and conversion. (415) 449-0565 www.gainesolutions.com TABLE OF CONTENTS The Importance
Test Driven Development with Continuous Integration: A Literature Review
Test Driven Development with Continuous Integration: A Literature Review Sheikh Fahad Ahmad Deptt. of Computer Science & Engg. Mohd. Rizwan Beg Deptt. of Computer Science & Engg. Mohd. Haleem Deptt. of
Measuring and Monitoring the Quality of Master Data By Thomas Ravn and Martin Høedholt, November 2008
Measuring and Monitoring the Quality of Master Data By Thomas Ravn and Martin Høedholt, November 2008 Introduction We ve all heard about the importance of data quality in our IT-systems and how the data
Data Mining in the Swamp
WHITE PAPER Page 1 of 8 Data Mining in the Swamp Taming Unruly Data with Cloud Computing By John Brothers Business Intelligence is all about making better decisions from the data you have. However, all
POLAR IT SERVICES. Business Intelligence Project Methodology
POLAR IT SERVICES Business Intelligence Project Methodology Table of Contents 1. Overview... 2 2. Visualize... 3 3. Planning and Architecture... 4 3.1 Define Requirements... 4 3.1.1 Define Attributes...
Moving the Web Security Log Database
Moving the Web Security Log Database Topic 50530 Web Security Solutions Version 7.7.x, 7.8.x Updated 22-Oct-2013 Version 7.8 introduces support for the Web Security Log Database on Microsoft SQL Server
Using Microsoft SharePoint for Project Management
ASPE IT Training Using Microsoft SharePoint for Project Management A WHITE PAPER PRESENTED BY ASPE www.aspe-it.com 877-800-5221 Using Microsoft SharePoint for Project Management A modern Project Management
Techniques for implementing & running robust and reliable DB-centric Grid Applications
Techniques for implementing & running robust and reliable DB-centric Grid Applications International Symposium on Grid Computing 2008 11 April 2008 Miguel Anjo, CERN - Physics Databases Outline Robust
The Basics of Scrum An introduction to the framework
The Basics of Scrum An introduction to the framework Introduction Scrum, the most widely practiced Agile process, has been successfully used in software development for the last 20 years. While Scrum has
Software Process for QA
Software Process for QA Basic approaches & alternatives CIS 610, W98 / M Young 1/7/98 1 This introduction and overview is intended to provide some basic background on software process (sometimes called
Critical Steps for a Successful Data Conversion Moving Legacy Data to Your Case Management System
White Paper Critical Steps for a Successful Data Conversion Moving Legacy Data to Your Case Management System Author: Matt Ryan, Senior Consultant Contents Executive Summary... 1 Find The Right Location...
Introduction. What is RAID? The Array and RAID Controller Concept. Click here to print this article. Re-Printed From SLCentral
Click here to print this article. Re-Printed From SLCentral RAID: An In-Depth Guide To RAID Technology Author: Tom Solinap Date Posted: January 24th, 2001 URL: http://www.slcentral.com/articles/01/1/raid
Server Consolidation with SQL Server 2008
Server Consolidation with SQL Server 2008 White Paper Published: August 2007 Updated: July 2008 Summary: Microsoft SQL Server 2008 supports multiple options for server consolidation, providing organizations
The Business Case for Data Governance
Contents of This White Paper Data Governance...1 Why Today s Solutions Fall Short...2 Use Cases...3 Reviewing Data Permissions... 3 Reviewing Data Permissions with Varonis... 3 Reviewing User and Group
Five Core Principles of Successful Business Architecture. STA Group, LLC Revised: May 2013
Five Core Principles of Successful Business Architecture STA Group, LLC Revised: May 2013 Executive Summary This whitepaper will provide readers with important principles and insights on business architecture
Designing Global Applications: Requirements and Challenges
Designing Global Applications: Requirements and Challenges Sourav Mazumder Abstract This paper explores various business drivers for globalization and examines the nature of globalization requirements
VMware Mirage Web Manager Guide
Mirage 5.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document,
Data Migration in SAP environments
Framework for Data Migration in SAP environments Does this scenario seem familiar? Want to save 50% in migration costs? Data migration is about far more than just moving data into a new application or
Government Business Intelligence (BI): Solving Your Top 5 Reporting Challenges
Government Business Intelligence (BI): Solving Your Top 5 Reporting Challenges Creating One Version of the Truth Enabling Information Self-Service Creating Meaningful Data Rollups for Users Effortlessly
Die Mobiliar Insurance Company AG, Switzerland Adaptability and Agile Business Practices
Die Mobiliar Insurance Company AG, Switzerland Adaptability and Agile Business Practices Nominated by ISIS Papyrus Software 1. EXECUTIVE SUMMARY / ABSTRACT The Swiss insurance company Die Mobiliar is the
Configuring, Customizing, and Troubleshooting Outlook Express
3 Configuring, Customizing, and Troubleshooting Outlook Express............................................... Terms you ll need to understand: Outlook Express Newsgroups Address book Email Preview pane
The ConTract Model. Helmut Wächter, Andreas Reuter. November 9, 1999
The ConTract Model Helmut Wächter, Andreas Reuter November 9, 1999 Overview In Ahmed K. Elmagarmid: Database Transaction Models for Advanced Applications First in Andreas Reuter: ConTracts: A Means for
One step login. Solutions:
Many Lotus customers use Lotus messaging and/or applications on Windows and manage Microsoft server/client environment via Microsoft Active Directory. There are two important business requirements in this
Data Modeling Basics
Information Technology Standard Commonwealth of Pennsylvania Governor's Office of Administration/Office for Information Technology STD Number: STD-INF003B STD Title: Data Modeling Basics Issued by: Deputy
Extending Legacy Applications to Consume Web Services. OpenSpan White Paper Series: Extending Legacy Applications to Consume Web Services
OpenSpan White Paper Series: Extending Legacy Applications to Consume Web Services Extending Legacy Applications to Consume Web Services Achieving SOA Now p.2 OpenSpan White Paper Series: Extending Legacy
Test Data Management Concepts
Test Data Management Concepts BIZDATAX IS AN EKOBIT BRAND Executive Summary Test Data Management (TDM), as a part of the quality assurance (QA) process is more than ever in the focus among IT organizations
Customer Evaluation Report On Incident.MOOG
WHITE PAPER Customer Evaluation Report On Incident.MOOG (Real Data Provided by a Fortune 100 Company) For information about Moogsoft and Incident.MOOG, visit www.moogsoft.com. http://moogsoft.com 2011-2015
Microsoft Outlook 2010 Hints & Tips
IT Services Microsoft Outlook 2010 Hints & Tips Contents Introduction... 1 What Outlook Starts Up In... 1 Sending Email Hints... 2 Tracking a Message... 2 Saving a Sent Item... 3 Delay Delivery of a Single
The Databarracks guide to Managing your backup storage levels
The Databarracks guide to Managing your backup storage levels The Databarracks guide to managing your backup storage levels 2 Too much of a good thing? Storage growth is the death and taxes of enterprise
NetIQ Advanced Authentication Framework. Maintenance Guide. Version 5.1.0
NetIQ Advanced Authentication Framework Maintenance Guide Version 5.1.0 Table of Contents 1 Table of Contents 2 Introduction 3 About This Document 3 Purposes of Maintenance 3 Difficulties of Maintenance
Load Balancing & High Availability
Load Balancing & High Availability 0 Optimizing System Resources through Effective Load Balancing An IceWarp White Paper October 2008 www.icewarp.com 1 Background Every server is finite. Regardless of
SAP Digital CRM. Getting Started Guide. All-in-one customer engagement built for teams. Run Simple
SAP Digital CRM Getting Started Guide All-in-one customer engagement built for teams Run Simple 3 Powerful Tools at Your Fingertips 4 Get Started Now Log on Choose your features Explore your home page
Auto Archiving Folders in Outlook XP
Auto Archiving Folders in Outlook XP Your Outlook email account on the Exchange server is allotted 50 megabytes of storage space on the server. Items in the Inbox, Calendar, Sent Items, Deleted Items,
Data Warehouse and Business Intelligence Testing: Challenges, Best Practices & the Solution
Warehouse and Business Intelligence : Challenges, Best Practices & the Solution Prepared by datagaps http://www.datagaps.com http://www.youtube.com/datagaps http://www.twitter.com/datagaps Contact [email protected]
Beyond Data Migration Best Practices
Beyond Data Migration Best Practices Table of Contents Executive Summary...2 Planning -Before Migration...2 Migration Sizing...4 Data Volumes...5 Item Counts...5 Effective Performance...8 Calculating Migration
IBM InfoSphere Discovery: The Power of Smarter Data Discovery
IBM InfoSphere Discovery: The Power of Smarter Data Discovery Gerald Johnson IBM Client Technical Professional [email protected] 2010 IBM Corporation Objectives To obtain a basic understanding of the
Optimizing Performance. Training Division New Delhi
Optimizing Performance Training Division New Delhi Performance tuning : Goals Minimize the response time for each query Maximize the throughput of the entire database server by minimizing network traffic,
Patterns to Introduce Continuous Integration to Organizations
Patterns to Introduce Continuous Integration to Organizations Kenichiro Ota Shift inc. Tokyo Japan [email protected] [email protected] Hiroko Tamagawa Shift inc. Tokyo Japan [email protected]
Five best practices for deploying a successful service-oriented architecture
IBM Global Services April 2008 Five best practices for deploying a successful service-oriented architecture Leveraging lessons learned from the IBM Academy of Technology Executive Summary Today s innovative
Chapter 25 Backup and Restore
System 800xA Training Chapter 25 Backup and Restore TABLE OF CONTENTS Chapter 25 Backup and Restore... 1 25.1 General Information... 2 25.1.1 Objectives... 2 25.1.2 Legend... 2 25.1.3 Reference Documentation...
Reduce and manage operating costs and improve efficiency. Support better business decisions based on availability of real-time information
Data Management Solutions Horizon Software Solution s Data Management Solutions provide organisations with confidence in control of their data as they change systems and implement new solutions. Data is
Online Transaction Processing in SQL Server 2008
Online Transaction Processing in SQL Server 2008 White Paper Published: August 2007 Updated: July 2008 Summary: Microsoft SQL Server 2008 provides a database platform that is optimized for today s applications,
Off-the-Shelf Software: A Broader Picture By Bryan Chojnowski, Reglera Director of Quality
Off-the-Shelf Software: A Broader Picture By Bryan Chojnowski, Reglera Director of Quality In the past decade, there has been a sea change in the business software domain. Many companies are no longer
My DevOps Journey by Billy Foss, Engineering Services Architect, CA Technologies
About the author My DevOps Journey by Billy Foss, Engineering Services Architect, CA Technologies I am going to take you through the journey that my team embarked on as we looked for ways to automate processes,
Why you need an Automated Asset Management Solution
solution white paper Why you need an Automated Asset Management Solution By Nicolas Renard, Support and Professional Services Manager, BMC France Table of Contents 1 OVERVIEW Automated Asset Discovery
ORACLE ENTERPRISE DATA QUALITY PRODUCT FAMILY
ORACLE ENTERPRISE DATA QUALITY PRODUCT FAMILY The Oracle Enterprise Data Quality family of products helps organizations achieve maximum value from their business critical applications by delivering fit
IDERA WHITEPAPER. The paper will cover the following ten areas: Monitoring Management. WRITTEN BY Greg Robidoux
WRITTEN BY Greg Robidoux Top SQL Server Backup Mistakes and How to Avoid Them INTRODUCTION Backing up SQL Server databases is one of the most important tasks DBAs perform in their SQL Server environments
White paper: Developing agile project task and team management practices
White paper: Developing agile project task and team management practices By Vidas Vasiliauskas Product Manager of Eylean Board 2014 The case Every one of us seeks for perfection in daily routines and personal
Implementing a Davton CRM system
Implementing a Davton CRM system Summary The implementation of a CRM system is much more than a purely technical project to install and make available the software.there are three elements which must be
IBM Software Five steps to successful application consolidation and retirement
Five steps to successful application consolidation and retirement Streamline your application infrastructure with good information governance Contents 2 Why consolidate or retire applications? Data explosion:
Framework for Data warehouse architectural components
Framework for Data warehouse architectural components Author: Jim Wendt Organization: Evaltech, Inc. Evaltech Research Group, Data Warehousing Practice. Date: 04/08/11 Email: [email protected] Abstract:
Global Software Change Management for PVCS Version Manager
Global Software Change Management for PVCS Version Manager... www.ikanalm.com Summary PVCS Version Manager is considered as one of the leading versioning tools that offers complete versioning control.
Parallels Virtuozzo Containers 4.6 for Windows
Parallels Parallels Virtuozzo Containers 4.6 for Windows Upgrade Guide Copyright 1999-2010 Parallels Holdings, Ltd. and its affiliates. All rights reserved. Parallels Holdings, Ltd. c/o Parallels International
Enhancing SQL Server Performance
Enhancing SQL Server Performance Bradley Ball, Jason Strate and Roger Wolter In the ever-evolving data world, improving database performance is a constant challenge for administrators. End user satisfaction
Data Quality Improvement and the Open Mapping Tools
Improving Data Quality with Open Mapping Tools February 2011 Robert Worden Open Mapping Software Ltd 2011 Open Mapping Software Contents 1. Introduction: The Business Problem 2 2. Initial Assessment: Understanding
Construction Junction. Inventory Management Software Requirements Specification
Construction Junction Inventory Management Software Requirements Specification Version 2.0 Summa Technologies October 1st, 2009 Summa Technologies, Inc. 925 Liberty Avenue 6 th Floor Pittsburgh, PA 15222
Database Backup and Recovery Guide
Scout Diagnostics Database Backup and Recovery Guide P H 803. 358. 3600 F A X 803. 358. 3636 WWW.AVTECINC.COM 100 I N N O VAT I O N P L ACE, L E X I N G T O N SC 29072 Copyright 2013 by Avtec, Inc. All
How To Restore From A Backup In Sharepoint
Recovering Microsoft Office SharePoint Server Data Granular search and recovery means time and cost savings. An Altegrity Company 2 Introduction 3 Recovering from Data Loss 5 Introducing Ontrack PowerControls
How To Develop Software
Software Engineering Prof. N.L. Sarda Computer Science & Engineering Indian Institute of Technology, Bombay Lecture-4 Overview of Phases (Part - II) We studied the problem definition phase, with which
CHAPTER - 5 CONCLUSIONS / IMP. FINDINGS
CHAPTER - 5 CONCLUSIONS / IMP. FINDINGS In today's scenario data warehouse plays a crucial role in order to perform important operations. Different indexing techniques has been used and analyzed using
ETL-EXTRACT, TRANSFORM & LOAD TESTING
ETL-EXTRACT, TRANSFORM & LOAD TESTING Rajesh Popli Manager (Quality), Nagarro Software Pvt. Ltd., Gurgaon, INDIA [email protected] ABSTRACT Data is most important part in any organization. Data
Business Process Validation: What it is, how to do it, and how to automate it
Business Process Validation: What it is, how to do it, and how to automate it Automated business process validation is the best way to ensure that your company s business processes continue to work as
SMART PREPARATION FOR DATA CENTER MIGRATION
SMART PREPARATION FOR DATA CENTER MIGRATION There are several common reasons why a data center migration could become necessary. As existing infrastructure ages and service contracts expire, companies
Continuous Integration: Aspects in Automation and Configuration Management
Context Continuous Integration: Aspects in and Configuration Management Christian Rehn TU Kaiserslautern January 9, 2012 1 / 34 Overview Context 1 Context 2 3 4 2 / 34 Questions Context How to do integration
A Modern Approach to Monitoring Performance in Production
An AppDynamics Business White Paper WHEN LOGGING ISN T ENOUGH A Modern Approach to Monitoring Performance in Production Ten years ago, the standard way to troubleshoot an application issue was to look
