Best Practices for DB2 on z/OS Schema Management
By Klaas Brant and BMC Software



CONTENTS

Chapter 1: Introduction to change management
    Why do we need change?
    Change drivers
    What can we change?
    Infrastructure/software upgrades
        Migrating to a new release of DB2 - consider DB2 9
        Applying DB2 maintenance
    Availability and performance improvements
        Access paths
            How BMC can help
        Indexes
            Virtual indexes
            Clustering indexes
            How BMC can help
        Tablespace
            How BMC can help
        Locking
        Utilities
    Business application enhancements and fixes
        Application errors (bug fixing)
        Data pollution and data corruption
            How BMC can help
        New applications and retired applications
        New or changed business logic
    Compliance
    The impact and risk of making changes
        Incorrect implementation
        Incorrect backout
        Why is it so hard?
    Compliance with corporate standards
        Change management
        Governance and audit
    Ability to reproduce old data

Chapter 2: Managing application changes in DB2
    Lifecycle management
        The First Law
    System upgrades - a conceptual best practice
        Independent from application
    Versioning and fallback
        Release policy
        Implement and backout
        Phased vs. big bang
        How BMC can help
    Test your changes
        How BMC can help
    Problems for system changes

Chapter 3: The complexity of DB2 structure changes
    The three types of changes
        Type 1: Simple change - minimal to medium impact
        Type 2: Online change - initiates DB2 object versioning
        Type 3: Complex change - Unload, Drop, Create and Load (UDCL)
            How BMC can help
    Unavailability during the deployment
        How BMC can help
    Keep your data healthy
        How BMC can help
    Risk of altering scripts
        How BMC can help
    Deployment and impact of changes
    A special undo scenario
    Is an audit needed?
    What's new in DB2 10
        How BMC can help

Chapter 4: Managing DB2 security
    Object qualifier
    CATMAINT for updating schemas
    System administrator
    Trusted context and roles
    How BMC can help

Chapter 5: Managing the DB2 catalog
    Catalog REORG
    Verify catalog consistency
    DBDs and SYSCOPY
    SYSIBM.SYSUTILX
    Invalid or inoperative packages
    Package versions
    DB2 plan stability
    View regeneration errors
        How BMC can help
    Catalog queries

Chapter 6: Schema management
    What can we change and how does it affect us?
    Data sharing
    Anatomy of a change
    Application changes: a closer look
        Program-only changes, no data structure changes
        DB2 data structure changes with minimal impact
        DB2 data structure changes with impact: online change
        The high-impact change
    The impact of change
    The risk of implementing a change
    DB2 10 improvements
    Conclusion

Chapter 7: Simplify DB2 for z/OS schema management
    Change management - a necessary evil
    BMC CHANGE MANAGER for DB2
        Manage changes
        Migrate data structures
        Migrate data structure changes only
        Recover data structures
        Record and control changes
        Feed back changes
    BMC CHANGE MANAGER for DB2 components
        Baseline
        Compare
        CM/PILOT
        CDL and DDL files
    The power of parallel performance
    Exploiting hardware and software capabilities
    Summary


Chapter 1: Introduction to change management

Most businesses consider data to be the lifeblood of their organization. Its availability and integrity are imperative to making decisions and providing service to internal and external customers. The 21st century has brought dynamic and wide-scale changes to how business is conducted. With data having achieved asset status, the agility with which an organization manages its data and responds to change is a contributing factor to profitability and growth.

Why do we need change?

Before answering that question, let's answer another: what is change? The American Heritage Dictionary defines change in its noun form as "the act, process, or result of altering or modifying" and "the replacing of one thing for another." Every business strives to succeed in an environment of constant change. Success can be measured by how well a business handles change - in its management, in its market, in its products, and, as a result, in its business applications. That brings us to the purpose of this book: to help DB2 for z/OS database administrators plan and implement change with a minimum of risk and a maximum of success. That success can be measured by managing change in a manner that maintains:

- Data integrity and object synchronicity
- With a minimum of downtime
- Within the requested time period

Change drivers

An organization that has chosen DB2 for z/OS as its database management system has made a prudent choice when it comes to ensuring data integrity and availability. Supporting DB2 data structures and objects (tables, indexes, data sets, and so on) often requires change. The tasks required to implement changes and the magnitude of their impact can vary from minimal to very complex, carrying with them the burden of application downtime, risk of data loss (in case of errors), and loss of customer satisfaction. Changes are driven by the business and sometimes by technology. Here are some examples:

Business driven:
- Changes to product requirements or business rules
- Mergers and acquisitions
- Regulatory compliance
- Errors in business application logic

Technology driven:
- System upgrades
- Discontinued features of DB2
- New DB2 features (such as data sharing)

Most changes to production applications are planned outages; emergency fixes are the exception. Most organizations have implemented some form of change management process that includes procedures, documentation, and approvals. A good process includes well-defined implementation and backout plans. A change becomes an unplanned outage when things go wrong during the implementation. Therefore, a change management process needs an escalation plan as well. Unplanned outages occur because of hardware failures and data corruption, the latter sometimes as a result of an application or data-related change (such as a logic error in a program). They can also be caused by an error made while changing a DB2 object.

To address and implement a change, a DBA may need to take any number of actions: adding views, dropping an index, changing a data type, changing buffer pools, adding new columns, and so on. While some changes can be made in an ad hoc manner, maintaining data integrity and availability requires a solid change management process. Not investing in a solid change process is asking for trouble. Only with the right procedures and tools can you mitigate the risks of change.

What can we change?

One thing is certain: you will always have things to change in DB2 for z/OS. Change is influenced by logical and physical structures, operating systems and physical states, DB2 object changes, and SQL statement constructs, as well as business application coding and process. Let's look at the main areas for change in DB2:

- Infrastructure/software upgrades
- Availability/performance improvements
- Business application enhancements/fixes

Infrastructure/software upgrades

Whenever we introduce new hardware or software to the system infrastructure, we introduce change. Many of these changes, like a new controller or re-routing a network, usually go unnoticed or have minimal impact on DB2 and limited impact on your business applications. However, the introduction of a DB2 data sharing environment or a new release of DB2 for z/OS can have a huge impact and trigger a significant amount of change.

Migrating to a new release of DB2 - consider DB2 9

The implications of change when migrating to a new release of DB2 can be daunting. You must first understand all stages of the migration and each stage's impact on the DB2 infrastructure and the business application. Each stage has actions that impose limitations on exploiting new functions and on fallback.

Catalog changes play a significant part in any release migration. As with previous releases of DB2, columns are added to existing catalog tables, columns are deleted, and new catalog tablespaces and tables are created. The chart below depicts the changes over past releases. Note the number of changes for DB2 9 - and that does not include the XML schema repository that resides outside the DB2 catalog!

[Table omitted: counts of new catalog tablespaces, tables, indexes, columns, and table check constraints added per DB2 version; for DB2 10, the catalog undergoes extreme restructuring.]
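Home-grown catalog queries, like the hedged sketch below (the catalog tables are real; the predicate values and output choices are hypothetical), are exactly the kind of asset that is sensitive to these catalog changes:

    -- List base tables in a database together with their indexes
    SELECT T.CREATOR, T.NAME, T.CARDF,
           I.NAME AS INDEXNAME, I.CLUSTERING
      FROM SYSIBM.SYSTABLES T
      LEFT JOIN SYSIBM.SYSINDEXES I
        ON I.TBCREATOR = T.CREATOR
       AND I.TBNAME    = T.NAME
     WHERE T.DBNAME = 'MYDB'      -- hypothetical database name
       AND T.TYPE   = 'T';        -- base tables only

A release that adds, alters, or relocates any of these columns silently changes what such a query returns.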

Many DBAs have established queries, procedures, and home-grown tools that rely on or access the DB2 catalog. All of these must be reviewed with each new release for accuracy and changed if needed. Even if you only use vendor tools, you must install a new version of the tool to support the new release.

With every migration, some facilities and functions go away. For example, DB2 9 removed DB2-managed stored procedures and the ability to create simple tablespaces. IBM provides a list of withdrawn features, so you have plenty of time to change your structures and procedures.

DB2 9 introduced over 80 feature enhancements in the areas of availability, stored procedures, SQL, security, applications, performance and scalability, utilities, XML, and data sharing, including:

- RENAME COLUMN
- Universal table spaces
- Native SQL stored procedures
- New data types
- New built-in functions
- New security roles
- Optimistic concurrency control
- The MERGE SQL statement
- Operational changes in 15 utilities

Implementing even one of these features introduces change to DB2 objects and/or application programs. Realistically, very few organizations could envision adopting all of the new features in a release of DB2, but almost every organization will implement at least some. Initially, you can limit or prohibit the use of new functionality until the environment is stable. Ultimately, there will be an impact to at least one of your business applications, because business owners want to exploit new features to deliver better service to customers.

During the migration period, your development and test environments are temporarily out of sync; perhaps your test system is already on DB2 9 New Function Mode (migration complete) while the production system is still in Conversion Mode (the first phase of the migration). This condition can lead to unwanted results in the change migration of applications from test to production. For example, defaults of the new release (test) are not accepted by the old release (production). This situation forces you to complete the release migration in a short period of time.

IBM allows for a migration period in which it charges for only a single release. If you go beyond this period, you must pay for both the old and the new release. So many things can push you to complete the upgrade quickly, which is not an easy task when you have many DB2 subsystems. When you rush things or take shortcuts, the possibility of an error increases. Remember: no migration is ever completely transparent to your business; it will cause downtime and can put your data at risk.

Applying DB2 maintenance

Like every database management system, DB2 needs routine maintenance. IBM provides program temporary fixes (PTFs) to fix errors, patch security breaches, or apply tolerance for an upcoming release. Unfortunately, IBM also provides new functionality with some PTFs (often pushed by partner companies). At times, this new functionality has had a negative impact on applications. Declining the new functionality is not an option, because PTFs are usually batched and must be processed together. Another side effect of these fixes is that sometimes you must make a change to your system or application for the fix to work; for example, you may need to rebind packages or run utilities.

It is not an option to decline the maintenance entirely, because it is better to fix an error before it hits you. On the other hand, you do not want to be the first to apply new PTFs, unless it is your problem they are fixing, because the fix could cause an error that requires yet another PTF to fix it. Yes, many of us have had this experience. A sensible strategy is to apply PTFs that are six months old or older, twice per year. Whatever your strategy is, always read the IBM hold data (extra information IBM thinks you should read, such as special instructions) and monitor IBM's HIPERs (high-impact pervasive PTFs that IBM thinks you should not delay, because they address major problems like data corruption or data loss).

Release upgrades and DB2 maintenance should always be performed by skilled personnel. Not having enough system programming knowledge is, again, asking for trouble.

Availability and performance improvements

Keeping your data in a DB2 database means that someone, usually a DBA, monitors its health and performance to ensure that your applications stay in good shape. This is something you might want to document in a Service Level Agreement (SLA). When users or customers complain about response time and how long it takes to do their work, recent changes to DB2 resources are often the root cause.

Sometimes, these changes can even result in downtime. In most companies, business units are not informed of these changes unless a production outage is scheduled. Even then, most users don't know what impact, if any, there might be to their application and data. When things go wrong, these situations fall into the "I never asked for it" category.

Application performance degradation can be caused by something that has changed, like physical characteristics, growth, or new catalog statistics. As a result, the access path chosen at bind/rebind time may not provide optimum access to the data. Resolving these issues can be as simple as performing a reorganization or running RUNSTATS, or as intrusive as dropping and recreating a tablespace and its dependencies. Let's look at some of the things we can proactively change to improve access paths and reduce the risks of making a change that delivers poor results.

Access paths

Plan stability was introduced in DB2 9. With plan stability, DB2 retains backup versions of your programs' access paths, providing a safe way to rebind, which is very helpful for fallback. A rebind can improve access paths, but sometimes it does not, and without plan stability this can be a challenge to undo. Plan stability is available for packages only; you can control the level of stability, which controls the number of copies retained. You will incur additional storage (in the DB2 directory, which often does not have a lot of free space) and additional CPU cycles for a rebind (10% to 40%).
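As a rough illustration of plan stability, the rebind below keeps saved copies of the access paths so you can fall back if performance degrades. The collection and package names are hypothetical; this is a minimal sketch, not a complete procedure:

    REBIND PACKAGE(MYCOLL.MYPKG) PLANMGMT(EXTENDED)

If the new access paths misbehave, you can revert to the saved copy:

    REBIND PACKAGE(MYCOLL.MYPKG) SWITCH(PREVIOUS)

PLANMGMT(BASIC) keeps only the previous copy, which reduces the extra directory storage mentioned above.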

How BMC can help

When the access paths of SQL statements change, the changes can cause performance degradation. The Workload Compare Advisor in BMC SQL Performance for DB2 enables you to compare whole workloads (SQL statements) before a rebind so that you can evaluate the impact of changes before making them effective in the new environment. This comparison can prevent problems when the optimizer chooses a different access path, you make a structure change, object statistics change, or you migrate to a new version of DB2.

Indexes

You can influence access path selection by adding an index to a table and running RUNSTATS. For static SQL, you need to rebind the plan or package; for dynamic SQL, you need to invalidate the dynamic statement cache (if used). Adding the new index will not cause any downtime, although the rebind can introduce contention on a busy system. However, it is unwise to play this "what if" game in a production environment.

Virtual indexes

In DB2 V8, IBM introduced the concept of virtual indexes. To use this feature, create a virtual index table (SYSIBM.DSN_VIRTUAL_INDEXES) and manually populate it with the appropriate values for the indexes you plan to create. When you execute EXPLAIN, these virtual indexes are considered for potential access path selection. This allows you to try various conditions without risk. Just remember to EXPLAIN all dependent plans and determine any dynamic SQL usage before dropping old indexes or creating new ones. A fix for one package could be a disaster for another. Keep in mind that extra indexes cause extra overhead in insert, update, and delete statements, as well as in utilities. Changes to indexes can also have a negative effect; for example, the number of matching columns in the access path can decrease because the column order changed.
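A hedged sketch of the EXPLAIN step used to evaluate a candidate (or virtual) index; the query number, table, and predicate are hypothetical, and a PLAN_TABLE must already exist under your authorization ID:

    EXPLAIN PLAN SET QUERYNO = 101 FOR
      SELECT CUST_NAME
        FROM MYSCHEMA.CUSTOMER
       WHERE CUST_ID = ?;

    SELECT QUERYNO, METHOD, ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY
      FROM PLAN_TABLE
     WHERE QUERYNO = 101;

If ACCESSNAME shows the candidate index and MATCHCOLS is what you expect, the index is worth creating for real.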

Clustering indexes

A DB2 table can have only one clustering index. By default, every table that has one or more indexes has a clustering index. If no index is defined as clustering, the REORG utility regards the oldest living index as the clustering index. This is not necessarily the index with the lowest object number (low object numbers eventually get re-used). The REORG utility sorts the data in the tablespace according to the clustering index. During new inserts, DB2 attempts to keep the data in clustering order, but this is possible only when free space permits. Clustering is very important for performance. DB2 9 allows you to alter an index from normal to clustering, but there can still be only one clustering index. When you change the clustering index, it is likely that the data is not clustered according to the new index; you need to reorganize to re-establish the new clustering order.

How BMC can help

The BMC SQL Performance for DB2 Workload Index Advisor ensures that existing indexes are optimally structured. It automatically collects and displays actual access counts for each unique SQL statement (table, index, and predicate usage frequencies). For example, you can generate a report that shows which statements access non-indexed columns or how changes to existing indexes affect other SQL statements. Other table and index reports provide quick access to listings of the most used objects based on getpage volume or index access ratio. The Workload Index Advisor extends the capability of the Common Explain function within the BMC Availability and Performance Improvements solution by comparing access paths after making changes to simulated indexes in a cloned database. A what-if index analysis lets you model changes to indexes; this removes the guesswork involved in fixing access path problems.

The BMC SQL Performance for DB2 Workload Index Advisor provides an automated process to create the best possible indexes for a given SQL workload. Once you have defined the workload (dynamic and static SQL are supported), the Workload Index Advisor suggests possible new indexes that are currently not defined and also verifies your existing indexes against the suggested ones. A comprehensive online report provides a complete overview of all index suggestions, ranked by effectiveness.

Tablespace

The physical data set structure can affect the performance and availability of application data. DB2 9 allows you to create and change three types of tablespaces: partitioned, segmented, and universal (partitioned by range or partitioned by growth). Selecting the appropriate type depends upon the size, growth pattern, and clustering requirements of your data. In DB2 9, making a change to the tablespace type always requires a DROP/CREATE scenario. DB2 10 allows you to convert a single-table segmented tablespace to a universal tablespace (partitioned by growth) or a classic partitioned tablespace to a universal tablespace (partitioned by range).

How BMC can help

If you are running DB2 V8 or DB2 9 and would like to migrate to universal tablespaces (UTS) without using the UNLOAD/DROP/CREATE/LOAD process and causing a production outage, BMC Recovery Management for DB2 delivers a high-speed structure change (HSSC). HSSC transforms the tablespace pages to the new UTS format. This process is much faster than the row-level unload/load process. Supported by the unique Online Consistent Copy (a consistent DB2 backup that does not cause any outage - no QUIESCE, DRAIN/CLAIM, or RO status), HSSC gives you a high-availability solution to migrate to UTS, and also to change DSSIZE, SEGSIZE, or the LARGE attribute of a tablespace.
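For the DB2 10 conversions mentioned above, the alterations are pending changes materialized by an online REORG. A minimal sketch with hypothetical object names (verify the exact syntax against your DB2 10 documentation):

    -- Convert a single-table segmented tablespace to partition-by-growth UTS
    ALTER TABLESPACE MYDB.MYTS MAXPARTITIONS 10;

    -- Convert a classic partitioned tablespace to partition-by-range UTS
    ALTER TABLESPACE MYDB.MYPARTTS SEGSIZE 32;

Each ALTER leaves the tablespace in an advisory REORG-pending state; a subsequent REORG TABLESPACE with SHRLEVEL CHANGE or REFERENCE materializes the new structure.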

Locking

Data availability is not measured only by downtime; it is also measured by concurrency. A transaction that ends in a deadlock or timeout is regarded by the end user as a failure, because the request did not complete. DB2 provides many parameters for utilities, SQL statements, and BIND commands to control lock duration and the sharing of data. Changing the concurrency options of an application always requires a rebind of the affected packages. Therefore, it is important to carefully select the proper parameters during development and to test for concurrency.
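Concurrency is largely controlled through BIND/REBIND options. A hedged sketch with hypothetical collection and program names; ISOLATION, CURRENTDATA, and RELEASE are the usual knobs:

    BIND PACKAGE(MYCOLL) MEMBER(MYPROG) -
         ISOLATION(CS) CURRENTDATA(NO) RELEASE(COMMIT)

ISOLATION(CS) with CURRENTDATA(NO) permits lock avoidance on reads; changing any of these later means rebinding every affected package, which is why they deserve attention during development.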

Utilities

Making schema changes to tables usually involves one or more DB2 utilities. The parameters used for utility operations affect utility performance and influence data availability. Over the years, IBM has made many utility enhancements that provide online operations and improved performance. But sometimes the online behavior of utilities has unacceptable side effects. The most famous one is probably that an image copy taken with SHRLEVEL CHANGE cannot be used for RECOVER TOCOPY. DB2 V8 introduced LOAD SHRLEVEL CHANGE, which acts like SQL INSERT and is very different from SHRLEVEL NONE. Always read the implications of SHRLEVEL CHANGE (the online version) in the utilities manual very carefully.

Business application enhancements and fixes

Over time, most business applications and vendor software products drive change. Behind these changes are requirements to better serve customers, to reduce operating costs, or to take advantage of technology advances and facilities. Responding to these requirements sometimes means modifying or developing application code and altering or creating DB2 objects. Let's review some of the many reasons to make changes to your DB2 environment.

Application errors (bug fixing)

Program logic reflects business rules. When programs are designed, the analyst or programmer can misinterpret specifications, a specification may be unclear or incomplete, or a programmer can simply make a mistake in implementing the logic. Any of these will result in so-called bugs. Programmers fix the code and test it, and then a new version of the program is ready to go into production, causing a change. This also applies to off-the-shelf application software; the fix and test are done by the software vendor, and the result is a new version of the program or a fix (like the PTFs for DB2).

Data pollution and data corruption

Data quality is a heavily discussed topic. 100% accurate data is probably a utopia, and a certain amount of pollution is always present (for example, end users discover that they can enter incorrect values on data entry screens and use this to their advantage). When data must be transferred to a data warehouse application, we often talk about cleansing the data - getting rid of all the anomalies. There may be a need to fix these anomalies in the production environment to avoid propagating bad data throughout the application. Sometimes this can be done in real time (with a batch program), but at other times the data needs to be taken offline for this change (using utilities).

Of a greater magnitude is data corruption. A program bug can destroy or damage data. Depending on how long the corruption has been going on and how often it occurred, it can be very difficult to fix. It might even involve restoring old copies of the data to investigate or repair it further. Applications are often offline while programmers, DBAs, and business users investigate the damage. Of course, you need an emergency fix for the offending program.
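For simple pollution cases, the in-place fix can be a targeted SQL correction run while the application stays online. A hedged sketch; the table, column, and values are hypothetical:

    -- Normalize inconsistent country codes entered through a data entry screen
    UPDATE MYSCHEMA.CUSTOMER
       SET COUNTRY_CODE = 'US'
     WHERE COUNTRY_CODE IN ('USA', 'U.S.', 'AMERICA');

Run such corrections under change control and capture a before-image (or image copy) first, so the update itself can be undone.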

How BMC can help

DB2 data sometimes gets corrupted by erroneous application logic or human error. BMC Log Master for DB2 generates SQL to undo the erroneous changes (deleting or re-inserting the data). You can remove just the transactions in error instead of recovering an entire database, saving significant time and money. This can significantly increase the availability of DB2 applications in error situations.

New applications and retired applications

When new applications are being developed, changes to the program logic and data structures are continuous. Well-designed development and test systems reflect the complexity of production; therefore, making a data structure change in these systems is as complex as making a data structure change in production. We need a vehicle to ensure that we collect all changes and implement them correctly in the next system (change migration). Often, test and acceptance environments are different from the production environments. Naming conventions and physical storage are not the same (for example, more tables are concentrated in a single tablespace), making it impossible to use the same migration script throughout the development cycle. New applications also affect existing applications when applications interact.

Retirement of applications is usually done the quick-and-dirty way. In most cases, there is no budget to clean up old applications, so people simply stop the scheduling, dummy out data sets in JCL, or do nothing. You must be very careful with this kind of application pollution. Years later, no one remembers what is what, and no one wants to touch parts of the application because of the risk of breaking something that is important.

New or changed business logic

Changes to business logic are similar in scope to developing new applications or bug fixing. In fact, these are often bundled into a single (big) change or release level.

Compliance

Government regulations address privacy issues, data integrity, and security. Certain applications require high levels of accountability. DB2 ensures a compliant and secure infrastructure through trusted contexts, roles, auditing features, and encryption. Consider making these objects and features an integral part of your application, and make them part of the change management cycle. Compliance, and the associated audits, become more and more important. Auditors need tools to compare data structures before and after a change. If your existing change management tools do not have audit facilities, consider incorporating audit facilities into your applications or change process.

The impact and risk of making changes

Whenever you implement change in DB2, follow these seven basic tenets of change to ensure that you bring your business applications and/or DB2 infrastructure to the desired post-change state:

- Communicate to your users and customers before, during, and after you make changes, especially when things are going wrong.
- Have a documented plan of action that is approved in accordance with your business governance.
- Ensure that your change incurs as little application downtime as possible, or at least meets your production schedule.
- Preserve the integrity of all data.
- Adequately test application changes.

- Have a back-out plan available, just in case.
- Document what you did, and preserve artifacts for future use and audit.

If you leave out or compromise on any of these, you have just increased the risk that something will go wrong. To meet business time frames, a systematic approach is a must. We have already briefly touched on some potential risks of implementing DB2 changes. Now, let's take a look at a few "when things go wrong" scenarios.

Incorrect implementation

It is easy for mistakes to go unnoticed, but the impact of a mistake can become a disaster. Consider a simple-to-fix scenario: you forgot an index. It was dropped and never recreated. After you rebind packages, some transactions have bad response times. You have grumpy end users and you must debug and fix the situation, but at least no data is lost. But what if you forget a trigger? Because triggers are part of the business logic, that essential logic is now missing. Before the error is detected and fixed, many transactions could have been processed and data might have been corrupted. The mitigation could take days and adversely affect your customers. In scenarios like these, no SQL error is given and no return code is set to tell you there is a problem. Simple mistakes can turn into big disasters. Change management tools can prevent mistakes like these and eliminate the risk of costly errors.

Incorrect backout

Even with tools, you still have serious risks. Rebinds can create new access paths. How do you ensure that you backed out all the changes from an unsuccessful change implementation (including any updates made to the data during the test)?

Let's start with rebind. When you rebind an application, DB2 can opt for a different access path. Many things influence access path selection. Even if you have made no mistakes in re-creating the objects and have run RUNSTATS correctly, it is possible that DB2 will choose a different access path. Because in 99% or more of cases the new access path performs as well as or better than the original one, most DBAs consider this an acceptable risk. You can further mitigate the risk with software that provides access path comparisons.

A failed change implementation has a more difficult resolution. If you implement a change and the implementation is rejected during the test, or after the application is used in production, then you must back out the entire change completely. Many DBAs don't create a backout scenario in advance. They try to fix the error on the fly, and when this proves impossible, they start to back out the change manually, without a guarantee that the first attempt will be successful. This is risky business because it is never practiced. Even when structure changes are undone, in many cases no one looks after the data. You may have no knowledge of the business processes that affect the data in the application. Has it been corrupted during the test, or been updated by users? The only way to undo everything is to go back to a time when the subsystem and application had a common sync point, and the only way to do this quickly is to use snapshot technology for all disks involved.

Each type of application change can have an impact on the business environment in the form of downtime and the potential for data corruption and loss. The more complex the change is, the riskier it becomes. As the implementation scripts become more complex, they also become more error prone. If the scripts are written manually, it is easy to make a mistake, and the impact of an error can be major. With this same complexity comes the requirement for a higher level of knowledge and skill to prepare and execute changes. Let's look at four application change types (an example of the third follows below):

- Program-only changes, with no DB2 data structure changes required, such as changes to SQL statements and business logic
- Data structure changes with minimal impact, such as adding a new index
- Data structure changes using online change, such as altering columns
- The high-impact change using unload, drop, create, load, and so on, such as deleting columns
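As a small illustration of the third type, an online change such as widening a column is a single ALTER (hypothetical names). DB2 applies it without a drop, although it creates a new object version, and the change is fully materialized at the next REORG:

    ALTER TABLE MYSCHEMA.CUSTOMER
      ALTER COLUMN CUST_NAME SET DATA TYPE VARCHAR(80);

Contrast this with the fourth type, sketched in the next section, where a drop and recreate is unavoidable.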

Why is it so hard?

The book Database Administration: The Complete Guide to Practices and Procedures by Craig Mullins (ISBN ) is a masterpiece on database administration and a must-have for every DBA. Craig writes, "The DBA is the custodian of database changes. Usually, the DBA is not the one to request a change; that is typically done by the application owner or business user. But there are times, too, when the DBA will request changes, for example, to address performance reasons or to utilize new features or technologies. At any rate, regardless of who requests the change, the DBA is charged with carrying out the database changes."

The DBA must have a process that includes proactive monitoring of the DB2 environment, planning and impact analysis of all changes, assets to implement the change (scripts and tools), a comprehensive test plan, and a strategy to back out the changes. One quick example demonstrates why: DB2 today does not allow a column to be added to the middle of an existing row. To do so, you must drop the table and recreate it with the new column in the middle. But what about the data? When the table is dropped, the data is deleted - unless you unloaded the data first. And what about the indexes on the table? They were dropped when the table was dropped, so unless you know this and recreate the indexes too, performance will suffer. The same is true for database security: when the table was dropped, all security for the table was also dropped.
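A hedged sketch of that drop-and-recreate scenario, with hypothetical names; the unload and load steps would be run with the corresponding utilities before and after this DDL:

    -- Data must be unloaded first; DROP also removes indexes, views, and grants
    DROP TABLE MYSCHEMA.CUSTOMER;
    COMMIT;

    CREATE TABLE MYSCHEMA.CUSTOMER
      (CUST_ID    INTEGER     NOT NULL,
       CUST_TYPE  CHAR(2),               -- the new column, "in the middle"
       CUST_NAME  VARCHAR(40))
      IN MYDB.MYTS;

    -- Everything dropped with the table must be explicitly recreated
    CREATE UNIQUE INDEX MYSCHEMA.XCUST01
      ON MYSCHEMA.CUSTOMER (CUST_ID);
    GRANT SELECT, INSERT, UPDATE, DELETE
      ON MYSCHEMA.CUSTOMER TO MYAPPL;

After the LOAD, dependent packages must be rebound and new image copies taken, because the old copies can no longer be used for recovery.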

Other types of database change are required from time to time, including:

- Changing the name of a database, table, view, or column
- Modifying a stored procedure, trigger, or user-defined function
- Changing or adding relationships using referential integrity features
- Changing or adding database partitioning
- Moving a table from one database or tablespace to another
- Rearranging the order of columns in a table
- Changing a column's data type or length
- Adding or removing columns from a table
- Changing the primary key without dropping and adding the primary key
- Adding or removing columns from a view
- Changing the SELECT statement on which a view is based
- Changing the columns used for indexing
- Changing the uniqueness specification of an index
- Clustering the table data by a different index
- Changing the order of an index (ascending or descending)

This list was compiled by Craig Mullins, who writes, "This is a very incomplete listing." Adding to the dilemma is the fact that most organizations have at least two, and sometimes more, copies of each database. At the very least, a test and a production version will exist. But there may be multiple testing environments - for example, to support simultaneous development, quality assurance, unit testing, and integration testing. Each database change will need to be made to each of these copies as well as, eventually, the production copy. So you can see how database change can quickly monopolize a DBA's time.

Compliance with corporate standards

IT can be an integral part of your organization, or it can be delivered as a service by an external provider. IT has matured to a level where a company cannot function without it. To deliver its products and services, a business needs accounting, marketing, production or service, customer service, and IT. Like accounting and marketing, IT is often regarded as overhead, and therefore its costs need to be kept to a minimum. Over the years, many standards, guidelines, and process methodologies have been developed and refined to document the interaction between IT and the business. A generic name often used for such standardization is Information Technology Service Management (ITSM). ITSM is process focused and can be compared to similar standard business practices, such as Six Sigma or TQM. ITSM is not a standard in itself; there is not even a clear definition of what ITSM really is. ITSM is usually implemented with one or more ITSM frameworks, such as ITIL and COBIT. Many people add PRINCE2 to this list because it is used within their IT organization, but PRINCE2 is a generic project management method; although it can be used for change management, it is not an ITSM framework.

Change management

Change management is a small but important part of these ITSM frameworks. It aims to ensure that standardized methods and procedures are used for efficient handling of all changes. All standards dictate that a change is approved by management, is cost effective, and carries minimum risk to the IT infrastructure. The goal of every standard is to:

- Incur minimal disruption in the delivery of IT services
- Minimize backout activities because of unsuccessful changes

- Use any resources involved in the change in the most economical way

To achieve this, change requests are mandatory. A change request is a form (often automated) that documents not only the request but also the implementation plan, the risk involved, cost estimates, and the backout plan. Once all aspects of the change request are completely documented, a number of people (management and business users) must give their approval. After a change request is approved, it can be scheduled for implementation. Often, in large organizations with many frequent change requests, a change coordinator is appointed. This person makes sure that changes scheduled together will not conflict with each other. Once a change is complete, there is usually a post-change review to evaluate the implementation process and the effectiveness of the change and its implementation plan. Whenever an implementation is unsuccessful, it is important to analyze where things went wrong and whether the problems could have been foreseen. Lessons learned can make future changes more successful.

Governance and audit

All data stored in a database is important to a business and can be regarded as an asset. Like all other assets a company has, data needs protection against misuse, tampering, or loss. In other words, there is a need for governance. Most countries have laws to protect the privacy of personal data, so if your database stores personal data, you must comply with official rulings. These include regulations and governance standards like the Sarbanes-Oxley (SOX) Act, the European Union Data Privacy Directive, and the Health Insurance Portability and Accountability Act (HIPAA). There is no universal compliance standard, so it is important that a company investigates which governance directives it must comply with by law and which extra rules it wants to enforce to ensure that business assets - in this case, data - are protected. Here are a few things to consider:

- Ensure compliance policies are in place for DBAs, system administrators, programmers, and end users.
- Implement a standard configuration for DB2, such as a system administration policy.
- Encrypt all communication links (DB2 9 and later).
- Implement a regular security review and testing program (audit department).
- Automate system and data maintenance, user auditing, and log storage.
- Provide and test a business continuity plan for critical applications.

When you are changing data structures, your data is extremely vulnerable. As reiterated throughout this book, there is a risk that a simple mistake made in the process of making a change may corrupt data. Scripts generated to implement a change must not only be 100% accurate; you must also safeguard them against tampering. The whole process must be auditable, so that at any point in time it is clear what happened during the change process. Make sure that all files in the change process are as well protected as your database.

A word about data replication: tools used for change implementation and change migration can create a clone of a data structure and its data. Be aware that this is often against company governance directives or, worse, against the law. Every database that stores private information must ensure that when this information is used for testing and development, all data is changed in such a way that its origin cannot be traced. This also applies to data transferred to a data warehouse or data mart. You must ensure that the process is secure and that the target environment has the same level of confidentiality as the production database.

Ability to reproduce old data

In many countries, laws require businesses to produce financial figures from the past, sometimes going back many years. Make sure that you save not only the data files for this purpose but also the definitions of the related DB2 databases. It is very likely that some data structures will be completely different in the future. It is also very important that the data alone can be used with standard SQL and does not rely on old applications to be reproduced. This means that your data must be well designed (normalized) and documented. Only then can old data be easily fed into reporting tools to reproduce the results. Remember that it is not only the government or auditors who are interested in old data. Sometimes DBAs and application owners need old data to find a bug or to undo the effects of an incorrect delete or update. Some IT organizations have started projects or standards to handle historical data correctly, making sure they can retire data without any loss of integrity or fidelity.


Chapter 2: Managing application changes in DB2

Managing application changes involves more than just changing application code and data structures. The management of change is an activity that is applied throughout the life of an application. Its main goal is to coordinate tasks and eliminate confusion by organizing and controlling modifications to applications. An application's change management begins when a development project or change request begins, and ends only when the application is taken out of production. There is also the discipline of planning the change, testing the change, implementing the change, and, when bad things happen, backing out the change.

Lifecycle management

Let's look at how changes fit into the system and application lifecycle.

The First Law

"No matter where you are in the system lifecycle, the system will change, and the desire to change it will persist throughout the lifecycle." (E.H. Bersoff et al., 1980)

Everyone involved, from the project manager to the customer and from the DBA to the developer, sees change from a different perspective. From the DBA's view, it is a process to build, change, and control DB2 objects. But remember: when stored procedures are involved, a DBA needs to mix in a little developer perspective, and when it comes to a performance or usability change, a DBA should step into the customer's shoes. It is up to the DBA to identify all DB2 object changes that are required and to determine their dependencies and any associated risks. The size of this effort can range from very small, like changing a parameter in a definition, to very large when supporting a new application, or even huge when it is a new release of DB2.

The DBA is also responsible for controlling the change process. This means having a documented plan and following it! Of course, things can happen during the process, and they usually do, so the DBA needs to document any changes along with their impact on the remaining, previously planned activities. This documentation becomes part of the audit trail that not only ensures the end result is achieved, but also provides valuable information in a fallback situation and for making future changes. Whenever changes are made to a database, and especially for a new version of DB2, the DBA must ensure that the technical aspects of the change are properly implemented.

A DBA must have a technical understanding not only of DB2 concepts, syntax, and restrictions, but also an intuitive understanding of how changes can affect an application's performance, continuity, and integrity. Last but not least, the DBA must be a great communicator. Internal and external users, developers, quality assurance testers, and management all need to be apprised of status, actions, risks, and any corrective actions as part of responsible change management. Most database changes are driven by application changes; however, changes are sometimes needed to improve performance, retire dormant data, or take advantage of new DB2 features. Regardless of the reason for change, there are important process and role concepts that should be put in place.

System upgrades - a conceptual best practice

If everything works well, the application and user data are completely separated from changes in the DBMS itself. Generally speaking, you can alter the code of the DBMS without making application changes. It is, however, possible that changes are sometimes needed on the physical side (the interface with the DBMS itself) when the DBMS is upgraded. A rebind after an upgrade is the most obvious example, but we have seen bigger changes in the past, such as type-1 to type-2 index conversions. IBM usually tries to avoid causing these high-impact changes as much as possible. It should never come to the point that you have to change the application itself to get it up and running again on the next release. Of course, you may want to change the application in order to benefit from new features.

What can be a nuisance is the fact that the DBMS can behave differently in response to your query requests. A good example is the update of a partitioning key of a partitioned tablespace. This operation used to be prohibited in DB2 (it returned an SQL error), but it is allowed now. These behavioral changes can cause major problems when the logic of the program starts to perform differently. This is why IBM introduced the CM system mode in Version 8. Initially, the acronym CM meant Compatibility Mode; since DB2 9 it means Conversion Mode. During CM, the system can fall back to the previous version if needed. You can test your application with the new software, and the DBA or system programmer can verify that the DBMS is stable with regard to your application. If everything is OK, you can move forward and change the system in such a way that fallback to the previous version is impossible. There is no 100% guarantee that the application will behave correctly when the system runs completely on the new code, which is known as New Function Mode (NFM).

It is possible that a new feature, unavailable during CM mode, is now causing trouble. If this is the case, you can fall back to Enabling New Function Mode (ENFM). During ENFM, the system acts like the previous release but with the new data structures. This is an interesting concept from which we can borrow some ideas for our own change strategy.

Independent from application

During the lifecycle of our application, we frequently enhance our data model. When data structures change, the programs often have to change as well. Many people accept this as a given, although there are alternatives. When you take a closer look at the DB2 system upgrade strategy, you see that it can be done differently: DB2 can use the older structures in the catalog with the new version's code (CM); when moving forward, it upgrades the structures (ENFM) and then starts to exploit them (NFM). You could take a similar approach for your own structures, and it is worth considering for your programs as well. During a new release of our application, we introduce new programs and new structures. If the new application fails during an initial test, do we have to revert both programs and structures, or can we run the old programs on the new structures? The backout of structure changes is often complex and unwanted. Maybe old programs can use the new structures. Of course, this takes some careful planning and coding guidelines (e.g., never issue SELECT * FROM). Maybe you need to think about column defaults or temporary triggers. Obviously, in some cases it is simply not feasible (for example, column data type changes), but it is worth considering.
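A small sketch of why that coding guideline matters, with hypothetical names. A program that names its columns explicitly keeps working when a column is added; a SELECT * program sees a changed result row and may break:

    -- Fragile: the host structure no longer matches after an ADD COLUMN
    SELECT * FROM MYSCHEMA.CUSTOMER WHERE CUST_ID = ?;

    -- Robust: old programs keep running on the new structure
    SELECT CUST_ID, CUST_NAME
      FROM MYSCHEMA.CUSTOMER
     WHERE CUST_ID = ?;

Combined with a WITH DEFAULT on the new column, so that old INSERT logic still works, this lets old programs run on new structures, mirroring the CM/ENFM idea.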

Where to start? It begins at the beginning, with a baseline: the point from which changes are made only through a formal process. This is usually the first delivery point in the development process, possibly the move from development to QA, or the first production implementation. Documentation for the baseline and all lifecycle artifacts should be retained in a repository. A repository ensures data integrity and provides information sharing, content consistency and standardization, and, in some cases, automation. Going forward from a baseline, the next step in our change process is to identify, document, communicate, and review all items that need to be changed. Depending on the complexity, the list will vary from a single item with no dependencies to a huge list with referential integrity and privilege dependencies and, if the change is a DB2 release migration, any incompatibilities. This documentation becomes the initial task in change control. Change control is a procedural activity that ensures quality and consistency of object configuration throughout the change management process. Controlling the change includes documenting reviews, decisions, risks, and actions taken, version control, and auditing the entire process. Its ongoing communication is a status report to all parties impacted by or interested in the change.

Versioning and fallback

Now let's look at how you can fall back to older versions.

Release policy

When you look at vendor software, change is almost always driven by a release policy. This means that once the software vendor has made a number of changes, they freeze the code (stop making changes), start testing, and finally create a script that installs the new software on an existing (older) version. In most environments, something is always changing. Features and functions are grouped together in a release so that downtime can be reduced. Because of the critical nature of mainframe applications, IT organizations almost always opt for a release policy. However, there are times when urgent changes (fixes) to a production system are allowed without going through the official release cycle. Of course, the emergency fix then has to be retrofitted into the development system and become an official part of the next release.

There are many books and philosophies on release policies. It doesn't matter which one you use, as long as you use one and stick to it. Depending on the type and size of your business, you might have standards with which you have to comply. It must always be clear to auditors what happened, when, and by whom.

Some development methodologies, such as Agile, dictate small and frequent changes to DB2 structures and application code for each sprint. Agile is a word often used (and sometimes abused) these days. It is a software development methodology or practice. It promotes initial application development and change management by implementing a small number of requirements or changes by a small team in a relatively fast cycle, called a sprint. Then the system is reviewed and tested before any other features or changes are implemented. Not every sprint results in a new release. After a number of sprints, you can decide that the result is a candidate for production and freeze the code to create a new release. It is a common misconception that Agile would not apply to a mainframe environment. This is totally false; Agile is a way of developing, not a way to implement releases.

Organizations sometimes have a release manager role. This individual serves as a facilitator, gatekeeper, and coordinator of each release or sprint.

The role is usually filled by a highly skilled technologist, who can also vouch for the architecture and technical quality of the release. The release manager is responsible for raising issues and resolving risks, monitoring defects, managing change requests, and overseeing any deployment packaging from a technical engineering perspective. If there is no release manager, these responsibilities often fall to the DBA. The role is meant to complement the project manager, not to take its place.

Implement and backout

When you implement a new release, it is always possible that the new software will fail. Depending on the type of failure and the moment it is detected, you might be forced to back out the application to the old version. A backout has many aspects: we are talking about software, data structures, and the data itself. When you test your new release, the new software can corrupt the data, forcing you to bring back old programs, old structures, and old data. Remember that when you drop and recreate a table, you lose the ability to recover, because older image copies are no longer usable. When you detect the failure after end users have already used the system, backout might be extremely difficult because you cannot ask the end users to redo their transactions. Even a redo of the transactions is often completely impossible, because the sequence of the transactions dictates the system behavior. In such a case, you must write special programs to detect and correct incorrect data, on top of doing a program and data structure backout. This is not an easy task, and it will not go unnoticed by the end users.

Therefore, you need to plan a final test when software goes into production. Such a test is often skipped because the software was tested in the test system. Remember Murphy's Law: it will happen, and you will regret not doing that final test. Also, a backout scenario must be ready and tested. So when you implement the software in a test system, you should always test a backout in the test system. If you need a backout scenario in a production system and you don't have one ready, it will take a long time to create one - and the one you come up with is untested, a risk in itself.

A very elegant implement-and-backout scenario can be achieved with modern disk technology, using mirrored disks:

1. Make sure the application and data are mirrored, and start the implementation with a shutdown of DB2.
2. Detach the mirrored disks, start DB2, and perform the upgrade and the final test.
3. If all is OK, attach the mirrors again and bring them back in sync.
4. If the final test fails, shut down DB2, make the content of the mirror disks the live data (the procedure varies by disk vendor), and start DB2 again.

The mirror allows you to go back in time!

Phased vs. big bang

Your release implementation depends on how the software is deployed. If you run the software in a central location (such as CICS or WAS), then you have more or less only one copy of your programs. You are forced into the big bang scenario: you either run the old version or the new version. But when you run many copies of the programs in many locations (such as in a client/server application), making the change in a big bang is not so easy. In many cases we have a fat client, which means that the software is actually loaded from and run on the client itself. Now you must make sure you distribute the new version of the software to all clients, which is not an easy task. In this case, you might want to opt for a so-called phased implementation, meaning that not all clients have to run the same software. Some run the old version and some run the new version, which means the client programs must be written to make this possible. The big advantage here is that you can give the new software to a select group of users as a final test (in production). If it works well, you can roll out the software to more end users. This phased approach also means that at the next reboot of a client PC, an attempt is made to upgrade the client; if that fails, it is not a major problem. If you like client/server but don't like the fat-client phased approach, you might consider using stored procedures, which makes the software central again.

How BMC can help

BMC CHANGE MANAGER for DB2 enables version control and fallback with baselines. Baselines can act as a central repository for all versions of your databases and DDL. A baseline is a point to which your object definitions can be recovered. Generally speaking, you should create baselines of your database or DDL at meaningful sync points - at times when it is important for you to have a snapshot of what the database looked like at that moment. Because the DB2 catalog, DDL files, and BMC CHANGE MANAGER migrate worklists are all valid inputs to the baseline process, you have a lot of flexibility in creating a baseline. When creating baselines, consider the project lifecycle characteristics and the change management process in your environment. How are database changes migrated from one environment to the next? If the changes to your database environment are based on releases, moving organized groupings of changes in a first-in, first-out fashion, you may want to create a baseline after each set of database changes is applied.

If database changes move from one environment to the next in no predictable order or grouping, you may not want to create a baseline every time a change is made. Instead, you may want to look at the broader picture and create a baseline only after major change implementations.

Test your changes

Developers need a testing environment, separate from production, where they can pursue functional changes without compromising data integrity or system stability. If the change is to implement a new release of DB2 or to make DB2 object modifications for non-application-related changes, establish a similar environment. When we build software, we do so in an isolated environment called the development system. There are many ways you can do your development (including Agile or waterfall). But sooner or later, the efforts of multiple programmers must be merged and tested. We can do this in the development system, but sometimes we opt to do it in a separate unit testing or development testing environment. Here we create and test the new release of the software. Changes to the data structures are often made by hand with every little move from development into this environment. Once the development team has tested the release, we are ready to give it to other users.

Before application or database changes are moved to production, they should be promoted to a staging or QA environment, where all of the changed components can be thoroughly tested. Software quality is a part of change management; it provides a measurement of the degree to which the change satisfies the requirements, is free of errors, performs as anticipated, and satisfies user expectations. If a company does not have an overall quality practice, then the change management plan should include and document defect tracking, unit testing, source-code tracking, technical reviews, integration testing, and system testing.

The staging/QA environment is completely separated from both development and production. In larger installations it is usually its own DB2 instance, and it is almost always a separate instance for a release migration. This is where your QA team can run processes to detect potential bugs, performance problems, or failures before your production users have access to the applications or DB2 data. This environment should be as close to production as possible in terms of data characteristics (like size and value granularity). Some of this can be mimicked through catalog statistics, but not all. Consider populating this environment with other applications that interface with the changing application, to ensure they are functioning correctly.

In the case of a DB2 release upgrade, choose applications that are business critical or customer facing as test cases for this environment.

Sometimes, especially in smaller installations, the costs of creating this environment are considered unreasonable. However, the cost of fixing bugs or of long periods of system downtime can be even higher. Data management administration and comparison tools are effective in creating and managing baselines, cloning objects and data, and maintaining multiple versions of DB2 development, test, and production environments.

We almost never move software from development test directly into production. Depending on the complexity of your release cycle, the software moves through one or more environments before it reaches the final destination: production. Here is a list of possible environments the software will go through before it reaches production:

- DEV, development environment *
- DTE, development testing environment/unit testing
- QA, quality assurance (testing environment) *
- DIT, development integration testing
- DST, development system testing
- SIT, system integration testing
- UAT, user acceptance testing *
- PROD, production environment *

Not all of them must be present, but the basic four (marked with *) are found in most shops. A script that implements data structure changes must be adapted for each environment, which is a big challenge for a DBA. To mimic the production system, the user acceptance environment is often a close copy of production. This is a very good environment to test our implement and back-out strategy.

One important factor to remember is that during DB2 release upgrades, you will most likely need separate release development and test environments, in addition to those you maintain for business application development, testing, and production support. Sometimes IT organizations with critical or highly visible applications maintain a separate test environment for customer support to help re-create or resolve user problems. These environments must be taken into consideration as a part of the change management process for upgrade and application changes.

Once your change environments are in place, you can design and implement work flows, scripts, and guidelines that reflect your approach/methodology to develop, test, and move DB2 changes from one stage to another. You should have an easy way for developers to grab a copy of the schema and related application data from production. Backups can be a useful source of data. These same scripts can be adjusted to enable migration between development and QA and back. If you are using an Agile or RAD methodology, these scripts are even more important, as database objects undergo more frequent changes and migrations in development and between development, QA, and production.

If you use tools to help manage your DB2 database administration, then you have help in ensuring the quality of your migration scripts. If you do not use a tool to generate and manage your DB2 structure changes, then you must carefully evaluate all change scripts, JCL, DDL, DCL, triggers, and stored procedures at each point of staging before starting the migration. This includes syncing backout scripts and scenarios. In either case, be sure to save a copy of all scripts written or generated as part of your audit and ongoing verification process.

Let's assume that at this point your changes have been moved from development to the staging/QA environment. If your organization uses this environment for production support, you are now out of sync with production. Be prepared and have a plan for handling production errors. This is more critical when the change is a new DB2 release and incompatibilities must be considered. If you have neglected to consider rollback in your process (for structures and data) and don't do so now, you have defined your single point of failure and identified your Achilles heel.

Once QA has given their approval, you should make one last evaluation pass of your change script or run a comparison of your production infrastructure against the staging/QA environment. This helps to ensure no untested features, changes, or modifications are accidentally moved to production. At this point, if the changes are considered highly critical and any misstep has serious consequences for your business, test your rollback process, no matter what.

Now you can move your changes to production with confidence, communicate the success, and take a new baseline. You are ready to hold post-implementation reviews, update any processes that need improvement, and prepare for the next round of change.

How BMC can help

When you have many different environments and DB2 systems, it can be challenging to keep the DB2 data in sync between DB2 environments. It is also difficult to replicate a lot of DB2 data from production to any of your development or test systems without incurring a production outage.

Because replication frequently is done at an application or object level, granularity and flexibility in the extract process is important. Solutions aimed at capturing all transactions and replicating them to another site for disaster recovery failover do not address this challenge. During the replication process, users can face additional challenges such as:

- Getting consistent data (no in-flight updates on the replicated output data set)
- Reducing or eliminating any outage on the source DB2
- Reducing resource requirements (CPU, I/O, and storage) to extract and apply the replicated data

BMC Software offers several techniques to help replicate data from a DB2 z/OS source to a target system (usually also a DB2 z/OS system, but not always). The various techniques build on the following products:

- BMC CATALOG MANAGER for DB2
- BMC CHANGE MANAGER for DB2
- BMC COPY PLUS for DB2
- BMC LOADPLUS for DB2
- BMC Log Master for DB2
- BMC RECOVER PLUS for DB2
- BMC SNAPSHOT UPGRADE FEATURE
- BMC UNLOAD PLUS for DB2
- Online Consistent Copy (OCC), a feature of BMC Recovery Management for DB2

Problems for system changes

A common practice is to have system changes (a new DB2 release) go through the same cycle as an application. We start at the development environment to migrate to the new release because development is not considered a mission-critical system. This is a misconception; when many developers cannot use their development environment, money is lost because developers are not productive.

Nevertheless, system changes often follow the same flow through the systems as application changes. This is also true for the DB2 phases of a new release. In many cases the development system is brought into New Function Mode (NFM) first, and the production system will be the last in the chain of changes. This could lead to the unwanted situation where new DB2 release functionality is used by the developers but cannot be migrated because the production systems are still running Conversion Mode (CM) or have done a backout (to either the old release or ENFM* of the new release). You might consider leaving the development systems in CM mode for as long as possible, or, after a successful migration of the development system to NFM, force it back into ENFM* mode (a fallback scenario which you have to test anyway). To test the release, use the quality assurance or user acceptance test environments. For a very brief period before moving production to NFM, you can bring the development systems to NFM. By doing it this way, you avoid a classic problem: the application code uses features that production cannot run.

One final word on change: remember that data also has a lifecycle that requires management. Have a plan for data that is saved in history files or offsite after it is no longer needed by the current business. Audit and/or government regulations may dictate data retention. Include schema files/scripts or baselines to re-create the DB2 structures for this data. As DB2 and the business applications mature, you may lose the ability to restore them; the schema documentation can then provide the information needed to read the data from its physical file state.


Chapter 3: The complexity of DB2 structure changes

Many program changes, like business logic changes, do not require anything to be changed in the database. But more complex changes, like new application functionality, sometimes require the database model to be changed and extended. Implementing such changes is often more difficult than expected.

No matter how small the change to the DB2 structures is, a certain amount of risk is involved. Even when the change can be easily undone and has no impact on program sources or other objects in DB2, there is always the performance risk. Some changes look easy, but might have additional actions that can have a negative impact if forgotten (e.g., some changes have a positive impact only after a rebind). In this chapter we will further analyze the types of changes, their complexity, how fast they can be implemented, how they can be audited, and the risk associated with the change.

The three types of changes

When we drill down only on the data structure changes and ignore the changes to the program source, we have three distinct types of changes with an increasing level of complexity. We can call these types 1, 2, and 3. It is a good idea to identify the type of change in your change management process. If the change has multiple steps with multiple levels, you could summarize this (for example, this change requires five type 1 changes and two type 2 changes). Be aware that implementing a structure change can be simple while the reverse action (the backout scenario) is much more complicated. For example, adding a nullable column to a table is a simple ALTER statement, but undoing this action requires a DROP and re-create of the table.

Type 1: Simple change - minimal to medium impact

This type of change has a very low complexity. DB2 supports this type of change by means of the SQL statements CREATE and ALTER. The change is effective immediately, but can require some additional steps to load data into the structure. The structure itself is in a healthy status. If the change impacts other objects, they may need additional attention.

Examples:

- Add a new column to a table; it is acceptable that the column is nullable.
  - Use ALTER TABLE ADD COLUMN.
  - Assess indexes: maybe the new column needs indexing, or existing indexes need revision.
  - Immediate impact: none.
- A table column needs a domain and the domain should be enforced.
  - Use CREATE TABLE for a new table. The table needs a primary key; the column in the existing table becomes a foreign key using ALTER TABLE ADD FOREIGN KEY.
  - Immediate impact: the existing table gets CHECK-pending status and is unavailable.
  - Follow-up: populate the new table and resolve the CHECK-pending condition.
- Partitioning key boundaries for an existing table have been a poor choice. New boundaries need to be set.
  - Use ALTER TABLE ALTER PARTITION to change the boundary.
  - Immediate impact: the partition and the following partition become REORG-pending and are unavailable.
  - Follow-up: run a REORG on the partitions in REORG-pending condition.

Risk: Minimal. The changes and their impact are well known and require simple SQL statements and standard utilities. Try to avoid combining this change with high-impact changes (if possible).

Time needed: Minimal. No time-consuming steps, just standard utilities (if any).

Audit: Save the SQL DDL statements and utility job output. For the program changes, you can do a source compare to verify that changes are related only to this change. Make sure the programs exploit the new data structures in the proper way.

Undo: In many cases much more complex. In the first case (ADD COLUMN), there is no ALTER DROP COLUMN statement in DB2; therefore, the undo of this change becomes a type 3 change.
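To make the second example concrete, here is a minimal sketch of the statements involved. All object names (HRM.COUNTRY, HRM.EMPLOYEE, the HRMDB.EMPTS tablespace) are hypothetical, and the utility step is shown as a comment because it runs in a separate utility job:

    -- create the domain (code) table with a primary key
    CREATE TABLE HRM.COUNTRY
           (COUNTRY_CD CHAR(2) NOT NULL,
            PRIMARY KEY (COUNTRY_CD));
    CREATE UNIQUE INDEX HRM.XCOUNTRY
           ON HRM.COUNTRY (COUNTRY_CD);

    -- populate it from the existing data
    INSERT INTO HRM.COUNTRY
           SELECT DISTINCT COUNTRY_CD FROM HRM.EMPLOYEE;

    -- enforce the domain; this sets CHECK-pending on the child tablespace
    ALTER TABLE HRM.EMPLOYEE
          ADD FOREIGN KEY (COUNTRY_CD) REFERENCES HRM.COUNTRY;

    -- follow-up utility step to resolve the CHECK-pending condition:
    --   CHECK DATA TABLESPACE HRMDB.EMPTS

Because the parent table was populated from the foreign key column itself, the data is known to be clean, which can shorten the follow-up step (see the discussion of REPAIR NOCHECKPEND later in this chapter).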

Type 2: Online change - initiates DB2 object versioning

While this type of change is not complex, it has its own category because initially it appears that the change is done, is successful, and no forced follow-up action is required. IBM calls this type of change online change. But there is a serious price to pay. The change is made only in the metadata (the DB2 catalog) and is not made to the physical table itself. DB2 will do conversions at execution time to match the new attributes as recorded in the DB2 catalog. Because the catalog has a newer version than the physical data, we say that versioning (the process of converting the data and hiding the mismatch) is going on. To draw attention to this fact, DB2 puts the tablespace in an AREO (advisory reorg pending) status. This status allows any SQL action on the table, but alerts the user that versioning will continue until a REORG is run. The REORG process converts the data; then the catalog and physical data are in sync. Be aware that versioning can increase the CPU used by applications by up to 50%.

The online change never truncates data or returns lower precision, so a change like converting a CHAR(50) column to a CHAR(20) is actually a type 3 change. DB2 allows type conversions as long as the target data type falls into the same group of similar data types and can have more precision or be bigger. This means that character data cannot be converted to numeric, and a float data type cannot be converted to integer. Many data type conversions are still, and will be for a long time to come, type 3 changes. Be aware that for a type 2 change the DB2 structure change is simple, but programs need a matching data type in their communication with DB2. A mismatch between DB2 and the program can cause SQL conditions or increased CPU for conversions.

Examples:

- An application needs to store larger numbers than the maximum value of an INTEGER column; conversion from INTEGER to BIGINT is needed.
  - Use ALTER TABLE ALTER COLUMN to do the change.
  - Immediate impact: initially none - the AREO status is set and a REORG is needed. CPU consumption by programs is increased until the REORG has fully completed.
- The column for surname is CHAR(60) and needs to be stretched to CHAR(120).
  - The implementation and impact are the same as in the above example.
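A minimal sketch of the first example (table and tablespace names are hypothetical; the REORG is a separate utility job, shown as a comment):

    -- online change: widen the data type (no data loss is possible)
    ALTER TABLE HRM.ORDERS
          ALTER COLUMN ORDER_ID SET DATA TYPE BIGINT;

    -- the tablespace is now in AREO status and DB2 converts rows on the fly;
    -- run the REORG as soon as possible to materialize the change:
    --   REORG TABLESPACE HRMDB.ORDERTS SHRLEVEL CHANGE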

Risk: Minimal. Run the (online) REORG as soon as possible.

Time needed: Usually minimal; only the time needed for standard utilities (if any).

Audit: Keep the SQL DDL statements. For the program changes, you can do a source compare to verify that changes are related only to this change. Make sure the program data types still match the DB2 types.

Undo: This is always more complex and will be a type 3 change (to reverse the change, you need to truncate or accept reduced precision; type 2 does not allow this).

Type 3: Complex change - Unload, Drop, Create and Load (UDCL)

A UDCL change is the most complex of all. To implement the change, you must drop and then re-create the object. But, of course, before you drop an object, you must save the contents by unloading the data, and then reload the contents into the new table after the object is re-created. This process is called UDCL, denoting the steps you have to take: Unload, Drop, Create and Load.

This might sound straightforward, but there can be massive implications. First, the original data type and the target data type might be a mismatch (otherwise it is a simple conversion and a type 2 change). Because we want to use the LOAD utility for the load step, we need the data conversion to occur during the UNLOAD process. In many cases the UNLOAD and LOAD utilities will not be of help, because they provide only limited conversions, so we need to write a customized unload conversion program. The DROP has a cascading effect - all objects related to the dropped object are also dropped. (Note: a Referential Integrity constraint is also regarded as a related object; a child table will lose its foreign key definition.) When the object is re-created, the related objects like indexes, triggers and constraints must be re-created (using DDL saved in the unload step) - that is, if it is possible at all (think of dropping a column and the effect it could have on indexes referencing that column). Finally, all security related to the dropped objects is removed from the DB2 catalog, so security privileges must be re-established after all objects are re-created (using DCL saved in the unload step). A simple change like removing a column from a table can easily result in a script of hundreds of lines (SQL and utilities). Some changes can be so complex that they are almost impossible, even for an experienced DBA, and even a small scripting mistake can have a massive impact.
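A skeleton of a UDCL script for a single table, with hypothetical object names and only the main steps shown; a real script also re-creates views, triggers, synonyms, and RI relationships, and handles restart logic:

    -- 1. Unload: save the data (converting it where needed) and save
    --    the DDL and DCL of the object and all its dependents
    --      UNLOAD TABLESPACE HRMDB.EMPTS
    -- 2. Drop: cascades to indexes, views, triggers, and authorizations
    DROP TABLE HRM.EMPLOYEE;
    COMMIT;
    -- 3. Create: the new structure plus all dependent objects
    CREATE TABLE HRM.EMPLOYEE (...);                 -- new column layout
    CREATE INDEX HRM.XEMPLOYEE ON HRM.EMPLOYEE (...);
    -- 4. Load: reload the saved data
    --      LOAD DATA INTO TABLE HRM.EMPLOYEE ...
    -- 5. Re-establish security, then statistics and a recovery point
    GRANT SELECT ON HRM.EMPLOYEE TO PUBLIC;          -- example DCL only
    --    RUNSTATS, COPY, and REBIND of invalidated packages follow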

Examples:

- An application used an integer column to record a date in DDMMYYYY format. The decision is made to convert the column to a DATE column to allow DB2 functions related to dates.
- Because of changes in the law, an application has obsolete data that is no longer needed. The columns in use for this must be removed.
- A new release of an application was scheduled for the weekend, but final tests revealed a performance problem so severe that the decision is made to return to the previous version.

All of these changes require an in-depth analysis and an implementation script.

Risk: High to very high. A small mistake can result in bad performance, loss of data, or data corruption.

Time needed: High to very high. The many drops and re-creates plus their follow-up actions can cause an implementation script to run a long time. In DB2 10, the time needed for these scripts will increase because of the changes made in the catalog. Creating the best possible script is imperative. If the UNLOAD utility cannot unload the data, you need to unload the data with an SQL application. SQL is much slower than a utility.

Audit: Keep the script and the execution output of the script. Make sure that you know what the situation was before the change so you can undo the change (see below).

Undo: In the last example, you need to return the objects to their previous state. Under normal circumstances you compare the new (desired) situation to the existing catalog state and then create your script to move forward. In the last example, you must compare the existing catalog state to the old situation to go backwards. This means that you must store all of the old situation information somewhere. After a change implementation, it is wise to store your complete current situation (often referred to as a baseline), so that these reverse comparisons can be made and you can return to the previous baseline.

How BMC can help

For all three types of changes, you will need to prepare, deploy, audit and (hopefully never) undo the change. Depending on the number of changes and objects, and the amount of data to handle with utilities, this can take quite a long time. Manual preparation for any type of change increases the risk of making mistakes and running into performance or, even worse, availability problems. The more data you need to handle (type 3, but also types 1 and 2), the more time you will need for the deployment of changes. All this affects the lifecycle of the application and negatively affects the go-to-market time of new business services. BMC CHANGE MANAGER for DB2 completely manages preparation, deployment, auditing and, if needed, undoing of any type (1, 2, or 3) of change.

BMC CHANGE MANAGER for DB2 simplifies and automates change management and ensures the integrity of structures and data. With BMC CHANGE MANAGER for DB2, you can extract changes to a data structure on one DB2 subsystem and then apply them to the corresponding data structures on other DB2 subsystems, preserve data and local modifications, and fall back to a previous version of the database if needed. You can even synchronize database schema versions across multiple subsystems. BMC CHANGE MANAGER for DB2 automates change management in three phases:

- In the specification phase, BMC CHANGE MANAGER for DB2 lets you define all of the information needed to make changes to affected tables, indexes and other objects, or lets the built-in Compare feature determine which DB2 objects need changes.
- After you create change or migration requests in a work ID, the Analysis component checks the requests for validity against the DB2 catalog, develops an optimal implementation strategy, and generates a worklist: the script. The worklist contains the DDL, DB2 utility commands, AMS commands, and security commands that are necessary for implementing the requests. In addition, the Analysis component propagates changes to dependent structures.
- The Execution component (deployment) executes the commands that are contained in the worklist that the Analysis component generates for change or migrate requests. The tasks that are required to implement a worklist vary depending on the type of change. Execution can take a long time when many objects with a lot of data must be handled, especially for type 3 changes.

BMC CHANGE MANAGER for DB2 also supports mass changes with CM/PILOT, which acts like a wizard for defining and executing mass changes. It enables you to specify changes once and apply them multiple times, to multiple objects and to multiple databases. You set up tasks that can be performed immediately or at a later time, by someone else or through job scheduling. With CM/PILOT you can ensure that the change management task is performed the same way every time it is executed.

Unavailability during the deployment

In almost all cases, there is a period of unavailability during the deployment of a change script. Even for type 1 changes, DB2 forces a restricted status upon the object in many cases. If we are very aware of what we are doing, we might lift that status very rapidly.

In the example type 1 change where a table column needed a domain and the domain should be enforced, we established a Referential Integrity constraint. If the parent table is populated using a SELECT DISTINCT of the foreign key column (in the child table), then the CHECK condition and the execution of the CHECK utility are not needed. In this case, we can quickly remove that restriction with REPAIR NOCHECKPEND. But if we look at a complex type 3 change, the object is dropped and then re-created, causing a long period of unavailability; related objects can be placed in a restricted status for a period of time during this process.

When creating the online change (type 2), IBM strove toward 100% availability, hence the name online. At the price of performance degradation, this is a success. But for most online changes, the application itself must be changed and a new version of the program or programs must be deployed. It is very difficult (if even possible) to create a scenario that results in 100% availability when application code is involved. Often we assume it can be done by using data sharing, but this is not true. In data sharing, all members share the same catalog and suffer the same consequences of a change deployment.

How BMC can help

BMC Software can reduce the window of unavailability during the deployment of a change dramatically. For type 3 changes, you can unload and load the data with BMC UNLOAD PLUS for DB2 and BMC LOADPLUS for DB2, which are high-speed alternatives to the native IBM utilities. If available, zIIP engines are used to move workload away from the busy general-purpose processors. In addition to the speed of BMC UNLOAD PLUS for DB2 and BMC LOADPLUS for DB2, you can use worklist parallelism. With worklist parallelism you can deploy multiple changes in parallel (using multiple initiators on the same LPAR or on multiple LPARs within the same sysplex) instead of sequentially. This saves a lot of time when you need to unload and load a lot of data for many objects. Worklist parallelism is a component of BMC Database Administration for DB2.

Keep your data healthy

Every change to the data structures has an impact on the health of the data. To ensure that you can always assess your data's condition, you must include two important items in your change scripts: statistics and a new recovery point. During the change deployment process, the existing statistics can be partially wiped out (for example, during a type 2 change, statistics like HIGH2KEY, LOW2KEY and frequency are removed from the catalog), and (re)created structures have no statistics at all. This means that in many scripts you need to run RUNSTATS after the loads or reorgs. New statistics could trigger a rebind. Of course, rebinds are advisable anyway because the packages were invalidated by the script.

The second important piece regarding health is to establish a new recovery point. Depending on the nature of your change, a recovery with an older image copy might be impossible, if one even exists. Sometimes you can combine utilities in a smart way (for example, do a LOAD and COPY together); other times you need to include a separate COPY in the script to keep the data healthy from a recovery point of view. Be aware that if you deploy special scenarios, like hardware-related backup and recovery, you must be extra careful. You must establish a new recovery point before you can do a recovery using such a special scenario (an example would be IBM's recovery expert system backup). A sketch of these follow-up steps is shown below.
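The sketch uses hypothetical object and collection names, and the copy data set allocations and JCL are omitted:

    -- refresh statistics after the load or reorg
    RUNSTATS TABLESPACE HRMDB.EMPTS TABLE(ALL) INDEX(ALL)

    -- establish a new recovery point
    COPY TABLESPACE HRMDB.EMPTS FULL YES SHRLEVEL REFERENCE

    -- afterwards, rebind the invalidated packages
    -- (a DSN subcommand, not a utility statement):
    --   REBIND PACKAGE(HRMCOLL.*)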

How BMC can help

Together with the BMCSTATS feature of BMC DASD MANAGER for DB2, BMC LOADPLUS for DB2 can gather up-to-date object statistics during the load. The BMCSTATS utility is executed as part of the load process, and it performs much faster than IBM RUNSTATS while using less CPU to gather statistics. If zIIP engines are available, the BMCSTATS code is eligible to run on those specialty processors. All of this helps you to speed up the deployment of changes.

Risk of altering scripts

It is very tempting to create a script in a non-production environment and then deploy that script with a small change (a manual edit). Even though you have used your script in the non-production environment, there is no guarantee that the two environments are 100% identical. If they are not, you may need to compensate for the differences and edit the script even further. Before you know it, you are looking at an untested script again, with an increased risk when the script is executed.

How BMC can help

BMC CHANGE MANAGER for DB2 can create a valid script for any environment based on a single change request. No editing is needed.

Deployment and impact of changes

All change types (1, 2 and 3) involve DDL statements. To keep the definitions of a database stable, DB2 locks the database descriptor (DBD) in an exclusive mode during DDL execution. This has a negative effect on running applications. Before DB2 10, the catalog was not suited for running multiple DDL streams concurrently, so changes often needed to be implemented serially.

Almost every DDL change has an impact on the programs using the data structures. Even for changes of type 1 and 2, DB2 will regenerate certain views and invalidate the packages and dynamic statements referencing the changed structures. To deploy a successful change, you need to do an in-depth analysis and create a script that not only includes the change itself, but also all steps to fix the side effects, like RUNSTATS and REBIND. It is wise to review the health of your system after each change. Determine if any tablespaces are in a restricted status (issue a -DIS DB(*) SP(*) LIMIT(*) RESTRICT and a -DIS DB(*) SP(*) LIMIT(*) ADVISORY) and query the catalog for plans and/or packages that need rebinding (see Chapter 5, Managing the DB2 catalog).

A special undo scenario

If you are able to hold all production programs until the changes are implemented and tested, you might have a unique opportunity to undo your changes in a 100% guaranteed and fast way: bring the whole system (including all data) back in time. There are several scenarios: DB2 has supported system backup/restore since Version 8, but shutting down DB2 and using a disk snapshot or decoupling mirror disks can also create scenarios that guarantee that the whole system can be restored as it was before the change implementation.

Is an audit needed?

Auditing is not only the domain of the company's auditor, although it would be wise for them to be included in the change process. While implementing changes, the system is vulnerable. For example, when you are unloading and loading data, mistakes are easily made, especially when you have to restart your script after an error. This is the perfect moment for someone to insert malicious code or alter data. An extra file concatenated to the load file would allow for inserting data into a normally restricted table. Therefore, it is wise to build in audit checks to detect mistakes and attempts to alter or damage the system.

Here are a few tricks: count the number of rows in the tables using SELECT COUNT(*) - are the counts the same before and after the change, and if not, can the difference be explained? (See the sketch below.) You can also compare the program sources: compare the changed lines in the programs that are part of the change and carefully look at the SQL and logic that changed. As always, it is wise to have someone review these changes (the "another pair of eyes" principle).
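A trivial but effective audit check, run before and after the change (the table name is hypothetical):

    SELECT COUNT(*) FROM HRM.EMPLOYEE WITH UR;
    -- record the result before the change and compare it
    -- with the result after the change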

What's new in DB2 10

In the releases prior to DB2 10, IBM made it possible to ALTER most table attributes. Through the online change feature, you could change most attributes, as long as the change did not result in data loss. The versioning feature of DB2 would take care of the necessary conversions, and the next REORG would implement the physical changes. What could not be changed were the more physical attributes of the tablespace in which a table resides. Until DB2 10, the only way to change tablespace attributes like SEGSIZE or PAGESIZE, or the tablespace type (such as segmented or universal), was the UDCL method (a type 3 change).

DB2 10 allows you to change these physical attributes of a tablespace, but the approach for this is not versioning, because data conversions are not needed for this change (making it a type 1 change). Instead, DB2 will remember what change was requested and make that change during the next REORG. So there is a major difference from versioning: after the ALTER TABLESPACE statement, the change is not immediately reflected in the catalog tables describing the object. Instead, your request becomes a pending change and is stored in the new catalog table SYSIBM.SYSPENDINGDDL. A simple SELECT on this table reveals the changes that are pending (see the sketch below). If you want to undo the ALTER statement before the REORG, you can use the new ALTER TABLESPACE DROP PENDING CHANGES statement. Be aware that some changes are one-way changes and cannot be undone after the REORG (e.g., you can convert to a universal tablespace, but you cannot go back to the original type).
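A sketch of such a query; SELECT * is used because the exact column layout varies by release, and the DBNAME filter is an assumption to be checked against the catalog documentation:

    SELECT *
      FROM SYSIBM.SYSPENDINGDDL
     WHERE DBNAME = 'HRMDB'
     WITH UR;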

The following changes can be implemented this way:

- Altering the tablespace PAGESIZE (use a different size buffer pool)
- Altering the tablespace DSSIZE
- Altering the tablespace SEGSIZE
- Altering the tablespace type by:
  - Changing a simple or segmented tablespace (with one table!) to a partition-by-growth universal tablespace
  - Changing a partitioned tablespace to a partition-by-growth universal tablespace (an un-partition)
  - Changing a partitioned tablespace to a range-partitioned universal tablespace
  - Changing a partition-by-growth universal tablespace to use hashing access (a new DB2 10 feature)
- Altering the MEMBER CLUSTER structure for a (universal) tablespace
- Altering a tablespace with LOBs to have small LOBs inline (a new DB2 10 feature)
- Altering the index PAGESIZE
- Altering the index to have include columns (a new DB2 10 feature)

Despite the many new ALTER features, many change scenarios still require a UDCL script. It is very unlikely that DB2 will ever support ALTER for extensive conversions (e.g., CHAR to DATE) or data truncation.

How BMC can help

If you are running DB2 V8 or 9, BMC can help you with the physical changes described above. By exploiting page-oriented, hardware-exploitive processes, the BMC Recovery Management for DB2 High Speed Structure Change (HSSC) feature can dramatically reduce the downtime and CPU consumption needed to effect physical structure changes to DB2 table spaces. You can use it with BMC CHANGE MANAGER for DB2 to further automate the process. A change that may take hours using the standard unload/drop/create/load process can be completed in minutes, with minimal resource waste. HSSC is available with BMC Recovery Management for DB2 version 9.2 and later. The HSSC process allows you to perform the following transformations:

- Convert a simple or segmented table space to UTS PBG (for example, when a table space is nearing the 64 GB limit).
- Convert a non-large partitioned table space to a large partitioned table space (for example, convert a 4-byte RID to a 5-byte RID).
- Convert a partitioned-by-range table space to a UTS PBR (providing the benefits of a segmented table space to a partitioned table).
- Convert a partitioned-by-range table space to a UTS PBG (dropping the partitioning index requirement).
- Change the table space SEGSIZE and/or DSSIZE.
- Add or drop compression for indexes.
- Change the index PIECESIZE.

You can use the HSSC process in two ways: SHRLEVEL REFERENCE and SHRLEVEL CHANGE. Each method provides more availability and uses fewer CPU resources than traditional unload/drop/create/load techniques. In both methods, the transformation is done using a shadow (new) object; the source object being transformed is the original object. At the end of the process, the new table is renamed to the original table, so you do not need to change application SQL.

Chapter 4: Managing DB2 security

You can manage security in DB2 internally or externally:

- Internally - use the authorization tables in the DB2 catalog
- Externally - use the System Authorization Facility (SAF), an interface to security managers like RACF or ACF2

When you move to external security, do not use the DB2 GRANT and REVOKE statements, because it is up to the external security tool to grant or disallow access to a resource. External security provides several advantages. You can set up rules before you create objects, and security is retained when you drop an object. This makes UDCL scripts less complex because you can omit the DCL. Another great feature of external security is the ability to use wildcards for objects when creating the access rules. For example, a generic rule for all tables under a certain schema name sets the access for existing tables but also for new tables under that schema name. A great feature!

External security also has its downsides, like the decoupling of security from objects. Rules can exist that do not apply to any existing object. Sometimes the fact that DB2 cannot function without the external security tool (a single point of failure) is mentioned. But you can also approach this from another view: z/OS is pretty useless without a security tool, so when security fails, the whole machine fails.

Object qualifier

When an object is created, it is created within a certain schema. Because DB2 for z/OS has never implemented proper schema handling like DB2 for Linux/UNIX/Windows (LUW) has, there are no CREATE SCHEMA or GRANT CREATEIN statements. Instead DB2 uses the term object qualifier, which you can regard as a schema name. You cannot use just any schema name (unless you are the system administrator); you are limited to the values provided by the security exits of DB2. The default IBM security exits supply, next to your user ID, the group names of the groups you are connected to in your security manager (such as RACF). It is, of course, possible that your installation has modified the exit and provides you with more or fewer names. To see which groups you belong to, type LU under TSO.

The security exits provide you with a primary authorization ID (your user ID) and a number of secondary authorization IDs (the groups you belong to). You can create an object using a qualified name (such as CREATE TABLE GROUPDBA.TABLE1) or an unqualified name (such as CREATE TABLE TABLE1). When using a qualified name, the qualifier must be either your primary authorization ID or one of your secondary authorization IDs. The system administrator is the exception to this rule; SYSADM can use any qualifier.

When you use an unqualified name, the special register CURRENT SCHEMA is used to qualify the object. After you connect to DB2, CURRENT SCHEMA is set to the primary authorization ID (your user ID). You can change CURRENT SCHEMA with SET CURRENT SCHEMA = <value>. The value must be either your primary authorization ID or one of your secondary authorization IDs. Again, the system administrator is the exception; SYSADM can use any value.

If you have always used the special register CURRENT SQLID to qualify your objects, don't worry: that still works. Like CURRENT SCHEMA, CURRENT SQLID is set to the primary authorization ID during connection, and as soon as you change CURRENT SQLID to another value (same restrictions as CURRENT SCHEMA), it drags CURRENT SCHEMA along. But as soon as you set CURRENT SCHEMA to another value, it disconnects itself from CURRENT SQLID. CURRENT SQLID is used to hold the owner, and CURRENT SCHEMA is used for qualification. Here is an example where the TSO user ID KBRANT has HRM and DBA as secondary authorization IDs (is connected to these groups in RACF):

    SET CURRENT SCHEMA = 'HRM';
    SET CURRENT SQLID = 'DBA';
    CREATE TABLE TABLEX (COL1 INTEGER);
    COMMIT;

Here is the result in the DB2 catalog for TABLEX:

    SELECT CREATOR AS SCHEMA, NAME, CREATEDBY, OWNER
      FROM SYSIBM.SYSTABLES
     WHERE CREATOR = 'HRM' AND NAME = 'TABLEX';

    SCHEMA  NAME    CREATEDBY  OWNER
    ------  ------  ---------  -----
    HRM     TABLEX  KBRANT     DBA

CATMAINT for updating schemas

When you create an object, the creator and owner are carved in stone - they cannot be altered. Prior to DB2 9, you had to drop and re-create (UDCL) when you wanted to change this. In DB2 9, a new option in the CATMAINT utility made this easier. In previous releases CATMAINT was used purely for system upgrades, but since DB2 9 it can be used to change a number of object attributes like owner and creator, transfer ownership to roles (more about this later), and change the VCAT of index spaces and tablespaces. There are some restrictions, and plans and packages get invalidated with some options. But beware of the biggest restriction: the CATMAINT utility can only be used by the INSTALL SYSADM, which might be completely locked down and not available to DBAs at your installation. Nevertheless, it can be a useful tool. All options are documented in the DB2 utility manual; a sketch is shown below.
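A sketch of CATMAINT control statements for these updates (the names are placeholders; verify the exact syntax against the utility reference for your DB2 version, and remember that INSTALL SYSADM authority is required):

    CATMAINT UPDATE SCHEMA SWITCH(OLDSCHEMA TO NEWSCHEMA)
    CATMAINT UPDATE OWNER FROM(OLDOWNER) TO ROLE
    CATMAINT UPDATE VCAT SWITCH(OLDVCAT TO NEWVCAT)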

System administrator

A topic for discussion has always been the system administrator (SYSADM) privilege. Users who hold this special privilege have ultimate power within DB2. They can CREATE/DROP and GRANT/REVOKE anything and have unlimited access to all data. You can imagine that security officers and auditors do not like this privilege. This global authority changes in DB2 10; it is now possible to remove the unlimited access to data and disallow the unlimited GRANT/REVOKE. These privileges will be under the auspices of a new security administrator. This makes the SYSADM role what it was originally intended for: administering the system.

But even then, SYSADM remains a user ID with extreme power, and it is easy to make a mistake (such as dropping the wrong object or dropping it in the wrong system). You can compare this to the root user ID in UNIX - ultimate power. By default in UNIX there is only one root user ID (compared to DB2, where you can have multiple SYSADM user IDs), and most installations know very well how to handle this. I know this is a painful discussion for many system administrators, but you are better off not having the SYSADM privilege all the time. It is better to obtain it when you need it, and to make sure that this procedure is not too complex and can be done at any time. The problem is that most installations want user IDs to be personal. They want the user to be accountable, so a single SYSADM is often not an option. Having many SYSADM user IDs is asking for trouble (see the next topic); you need a more permanent solution.

Here is a scenario that works really well: use a RACF group ID as a granted SYSADM. There are two types of SYSADM: the installed SYSADM (from the DSNZPARM) and the granted one. The installed SYSADM has a big disadvantage: it cannot be audited! The granted SYSADM has a big disadvantage in that it is difficult to revoke; the effects of the cascading within the DB2 catalog would be horrible. DB2 10 has special keywords on the REVOKE statement to prevent a cascading revoke. You can finally purge old system administrators from your system without risk.

Here is a nice scenario to handle proper SYSADM usage: do not connect the people who need SYSADM permanently into the SYSADM RACF group. Create a procedure that can be triggered when a person needs the SYSADM privilege. The procedure (in the job scheduler) connects the user into the SYSADM group, notifies the auditor that the SYSADM is in use, sends a message on behalf of the auditor to the requestor with a request to reply why the privilege is needed, and turns on a DB2 audit to follow the requestor. When the user is done with their tasks, a SYSADM disconnect procedure is triggered. You can also set up a procedure that runs nightly to disconnect all users connected to the SYSADM group, just in case the disconnect procedure was forgotten. The auditor expects the report about why the SYSADM was needed and can review the audit trace if needed. This is a simple and effective way to have a single and auditable SYSADM.

Trusted context and roles

When you connect to DB2, one or more authorization IDs are assigned to your session. In DB2 9, this can be extended with one or more roles. A role is a new DB2 entity that groups privileges together. A role is created by the system administrator (or the security administrator in DB2 10). You grant privileges to this role. For example, you create a role called DBA and grant all privileges needed to do DBA tasks to it using simple GRANT statements. Once you are done, you must ensure that this role gets assigned to a DBA user at the next DB2 session. That is exactly where the power of trusted context lies. You can specify conditions that must be met before the role gets assigned. Here is an example (abbreviated; the ellipses stand for the remaining clauses of the definition):

    CREATE ROLE DBA;
    GRANT BINDADD TO ROLE DBA;
    CREATE TRUSTED CONTEXT ...
           ATTRIBUTES(SERVAUTH 'LCLNET01') ...
           DEFAULT ROLE DBA;

In this example, you expect RACF to set the SERVAUTH attribute to LCLNET01 if the user is identified with the network addresses assigned to the system support group. Only then does the user get assigned the extra privileges identified by the role DBA. If the user were able to connect to the DB2 system from outside the trusted network group identified by RACF as LCLNET01, then the user would not get the extra privileges. The roles concept complements the already existing security of DB2.

Another great feature of trusted context is the ability for a user to act as another user. If the trusted context definition allows it, a user can connect to DB2 but request to immediately become another user. No password is required. This can be very helpful for doing work on behalf of another user (for example, when a DBA's colleague goes on vacation). Here is an example:

    CREATE TRUSTED CONTEXT BACKUP_PROD_DBA
           BASED UPON CONNECTION USING SYSTEM AUTHID DBA0123
           ATTRIBUTES (JOBNAME 'DBA0123')
           WITH USE FOR DBA0999
           ENABLE;

In this example, user ID DBA0123 is allowed to act as user ID DBA0999, but only when connected from a TSO session (the JOBNAME attribute). Under TSO, user ID DBA0123 fills in the value DBA0999 on the DB2I defaults panel in the AS USER field. When the user goes to SPUFI and issues SQL statements, all privileges of DBA0999 are used for the SQL. When objects are created during this session, the DB2 catalog shows they were created by DBA0999 and not by DBA0123. You can, of course, enforce more attributes in the trusted context (like SERVAUTH in the previous example). When the trusted context is no longer needed, issue ALTER TRUSTED CONTEXT BACKUP_PROD_DBA DISABLE. You no longer need to give a password from one user to another.

How BMC can help

BMC CATALOG MANAGER for DB2 provides an intuitive interface to the content of the DB2 catalog. With BMC CATALOG MANAGER for DB2, you don't need extensive knowledge of the DB2 catalog table structure or the SQL that is required to query the DB2 catalog. BMC CATALOG MANAGER for DB2 provides easy-to-understand online and batch reports for any type of DB2 security definition. BMC CATALOG MANAGER for DB2 provides several tools to help you manage authorizations:

- The COPYAUTHS command enables you to copy privileges from one user ID to another user ID and from one object to another object easily, saving you the time and effort of issuing multiple GRANT commands.
- The Cascade Report shows you possible effects of a REVOKE action.
- The Reassign Grants option prevents you from losing authorizations when you execute a REVOKE by enabling you to assign those authorizations to another user ID.

Chapter 5: Managing the DB2 catalog

This chapter discusses best practices for managing the DB2 catalog.

Catalog REORG

The DB2 catalog needs relatively limited maintenance. The DB2 catalog uses strict tablespace clustering: multiple tables are put into a single simple tablespace, but DB2 maintains a special catalog clustering in which all rows belonging to the same object are stored on the same page. When DB2 needs to analyze an object, the getpages result in minimal I/O. DB2 tries to avoid direct access to the DB2 catalog by using the in-storage DBD in the EDM pool to work with objects. When this special clustering fails, you need to reorganize the catalog. The normal rules that determine if reorganization is needed apply less to the DB2 catalog, making it hard to decide if reorganization is really needed. Remember that the catalog is crucial to DB2 and therefore needs to be in top condition. As a rule of thumb, you can anticipate one or two reorgs per year for the DB2 catalog.

All catalog tablespaces can be reorganized except the special directory tablespace DSNDB01.SYSUTILX. Because this tablespace contains only one table, which contains only a few rows during utility executions, it will probably never require reorganization, so this is not a problem. You cannot reorganize the catalog with an online reorg. You can use the SHRLEVEL REFERENCE parameter, which allows read access to the catalog during the reorg, but you cannot execute DDL because DDL updates the catalog. You cannot specify certain reorg options, like UNLOAD ONLY, UNLOAD EXTERNAL or LOG YES, for a catalog reorg. When you reorganize certain tablespaces in DSNDB06 (the DB2 catalog database), you should also reorganize their counterparts in the directory (DSNDB01):

- DSNDB06.SYSDBASE and DSNDB01.DBD01
- DSNDB06.SYSPLAN and DSNDB01.SCT02
- DSNDB06.SYSPKAGE and DSNDB01.SPT01

Because the DB2 catalog changes with every DB2 release, you should update and test your DB2 catalog reorg JCL and statements whenever you upgrade DB2. A sketch of the utility statements is shown below.
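A minimal sketch of catalog reorg utility statements for one such pair (the JCL and work data sets are omitted, and your shop's standards apply):

    REORG TABLESPACE DSNDB06.SYSDBASE SHRLEVEL REFERENCE
    REORG TABLESPACE DSNDB01.DBD01 SHRLEVEL REFERENCE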

Verify catalog consistency

The following DB2 catalog and directory tablespaces contain links or hashes:

- DSNDB06.SYSDBASE
- DSNDB06.SYSDBAUT
- DSNDB06.SYSGROUP
- DSNDB06.SYSPLAN
- DSNDB06.SYSVIEWS
- DSNDB01.DBD01

The catalog reorg does not verify any DB2 internal links or the completeness of the DB2 catalog. However, there are a number of options to verify the integrity of the DB2 catalog. DSN1CHKR is a utility that scans these tablespaces for broken links, hash chains, and orphans (records that are not part of any link or chain). This utility works directly on the VSAM data sets and does not use the DB2 buffer pools. DSN1CHKR is run when DB2 and the DB2 catalog are stopped, or it is run against a copy made with DSN1COPY or against a SHRLEVEL REFERENCE copy of the DB2 catalog. Because DSN1CHKR works directly on the VSAM data sets, be sure to arrange file security permissions for the user ID that will run the utility. A JCL sketch is shown at the end of this section.

Because DSN1CHKR is not the friendliest utility, there are a couple of other online options you can use to verify the DB2 catalog, but they do not offer the same rigid check that DSN1CHKR does. First, when you copy the catalog with the COPY utility, specify that you want the pages to be checked. This verifies the integrity of each single page, but does not verify hashes or links, nor does it follow off-page pointers. A second option is to verify the integrity of the indexes using the CHECK INDEX utility. To detect any orphans, run the SQL integrity checking queries that are used during a DB2 upgrade (members DSNTIJP9 and DSNTIJPM).

Remember that the DB2 catalog/directory is a single point of failure in a DB2 subsystem and a DB2 data sharing complex. Make sure that you do some integrity checking at least once a month. Use the check option of the COPY utility with every run of the COPY utility on the catalog.
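A hedged JCL sketch for running DSN1CHKR against a catalog tablespace; the load library, VCAT, and data set names are placeholders, and the parameters should be checked against the utility guide for your release:

    //CHKR     EXEC PGM=DSN1CHKR,PARM='FORMAT'
    //STEPLIB  DD DISP=SHR,DSN=DSN.SDSNLOAD
    //SYSPRINT DD SYSOUT=*
    //SYSUT1   DD DISP=SHR,
    //            DSN=VCAT.DSNDBC.DSNDB06.SYSDBASE.I0001.A001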

DBDs and SYSCOPY

A database descriptor (DBD) is a special control block maintained by DB2 and stored in DSNDB01.DBD01. The DBD is initially built from the DB2 catalog and contains the structure of all the objects in the database. There is one control block per database, and it is loaded into storage once a database is in use. DB2 can unload a DBD from storage only when all activity on the database is gone. You can determine the size of the DBD by issuing a -DIS DB command. DB2 prefers to use the DBD over catalog queries because of speed, but the DB2 catalog is regarded as the truth, or the single point of integrity. When a DBD gets corrupted, you can always rebuild it from the DB2 catalog using the REPAIR utility (DBD REBUILD).

There is a relationship between the DBD and the SYSIBM.SYSCOPY table. Information from all image copies, for instance the object ID, is recorded in the DBD. The DBD keeps growing until you clean up SYSIBM.SYSCOPY using the MODIFY RECOVERY utility. Run MODIFY at least once a month to keep SYSIBM.SYSCOPY clean and the size of the DBD reasonable (a sketch is shown at the end of this section). Remember, old object IDs of dropped objects can be reused only if no image copies refer to the old object ID.

Whenever a DBD is updated, it is completely logged. Because a DBD can be large, this can potentially use a lot of log space. Prior to DB2 V8, DB2 logged the DBD with every update. DB2 V8 optimized this to log the DBD only once per committed unit of work. Therefore, when you run large DDL streams, do not commit every CREATE, DROP or ALTER. However, you must commit changes before a dependent object can be created, so optimizing the commits in a DDL stream can be difficult.
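A minimal sketch of the monthly cleanup (database and tablespace names are placeholders; choose a retention period that matches your recovery policy):

    MODIFY RECOVERY TABLESPACE HRMDB.EMPTS DELETE AGE(90)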

SYSIBM.SYSUTILX

The DB2 directory contains SYSIBM.SYSUTILX, which keeps track of utility progress. It is hidden in the directory, making it impossible to query with SQL, because it is unstructured. Each row is a complete page, and each utility uses the row in a different way. All utilities store restart information in the row. When you issue a -DIS UTIL command, you in fact query this table, and the display processor formats the content in a human-readable format. You can use the DIAGNOSE utility (DIAGNOSE SYSUTIL) to dump SYSUTILX, which gives you even more insight into what DB2 stores in this special table.

In general, it is not a good idea to have old utilities hanging around. You can get unexpected restart results from utilities, and when you upgrade, you are prohibited from having pending utilities. Therefore, it is wise to issue a -DIS UTIL(*) command to see if there are any stopped utilities when you don't expect any utilities to be active (your job scheduler might know for sure). If you terminate a utility, DB2 triggers a clean-up process that can set the status of a tablespace to a very unwanted status. Carefully analyze what the result of terminating a utility would be, and never issue TERM UTIL(*).

Invalid or inoperative packages

When you execute DDL statements, packages can become invalid. If you attempt to execute an invalid plan, then DB2 attempts an auto-rebind. This is an unwanted situation for two reasons: you must wait until the auto-rebind is done, and you will, just as during a normal bind or rebind, lock many resources, including the catalog and the DBDs used by the packages. If the auto-rebind fails, the packages are marked inoperative to prevent DB2 from attempting another auto-rebind. An auto-rebind can fail for many reasons; the most common ones are that objects needed by the packages do not exist or that locking problems occur during the auto-rebind. When you have finished executing DDL statements, you should detect invalid and inoperative packages. You can use this query:

    SELECT COLLID AS COLLECTION, NAME AS PACKAGE,
           HEX(CONTOKEN) AS CONTOKEN, OWNER, CREATOR,
           VALID, OPERATIVE, EXPLAIN, VERSION, PDSNAME
      FROM SYSIBM.SYSPACKAGE
     WHERE (VALID = 'N' OR OPERATIVE = 'N')
     ORDER BY COLLECTION, PACKAGE, VERSION
     WITH UR;

If the query produces a list of packages, you can manually rebind them. If this fails again, you need an in-depth analysis to determine the cause. One of the most obvious reasons why a package does not rebind is that it refers to an object that no longer exists. If the programs are no longer needed, then you should free the packages (and the plans) to save space in the directory.

Package versions

Package versioning is a blessing, because it allows you to easily switch programs. DB2 does this by using a consistency token which is stored in the load module and compared to the tokens stored in the packages. Using the consistency token, the program selects the correct version of the package. This allows you to bind your packages before the programs are live and enables you to easily fall back to the previous version of a program. If you do not use versioning, a new bind overwrites the existing package and you always have the current package in use.

With versioning, you keep adding new versions, and it becomes your responsibility to remove packages that are no longer needed. Use this query to identify packages that are older than 3 months and have more than 3 versions:

    SELECT P.COLLID AS COLLECTION, P.NAME AS PACKAGE,
           P.VERSION, P.BINDTIME
      FROM SYSIBM.SYSPACKAGE P
     WHERE EXISTS
           (SELECT 1
              FROM SYSIBM.SYSPACKAGE SYSPACK1
             WHERE P.LOCATION = SYSPACK1.LOCATION
               AND P.COLLID = SYSPACK1.COLLID
               AND P.NAME = SYSPACK1.NAME
               AND SYSPACK1.BINDTIME < CURRENT TIMESTAMP - 3 MONTHS)
       AND EXISTS
           (SELECT SYSPACK2.COLLID, SYSPACK2.NAME, COUNT(*)
              FROM SYSIBM.SYSPACKAGE SYSPACK2
             WHERE P.LOCATION = SYSPACK2.LOCATION
               AND P.COLLID = SYSPACK2.COLLID
               AND P.NAME = SYSPACK2.NAME
             GROUP BY SYSPACK2.COLLID, SYSPACK2.NAME
            HAVING COUNT(*) > 3)
     ORDER BY COLLECTION, PACKAGE, BINDTIME
     WITH UR;

Carefully review the output. It is likely that some of these older versions are no longer needed and can be freed using the FREE PACKAGE DB2 command.

DB2 plan stability

Plan stability, which was added in DB2 9, allows you to REBIND and keep the older copies of a package; it is not to be confused with package versioning (see Package versions above). During REBIND, DB2 can choose a new - and hopefully better - access path. If DB2 selects a less efficient access path (often the result of statistics or an external reason), you still have the previous copy of the package with the old access path. Plan stability allows you to switch back to the older copy of the access path. You can think of the feature as a backup feature for packages in the DB2 directory. Although this is a great feature, there is a minor problem: the size of the SPT01 space in the directory (DSNDB01) starts to grow. This space is limited to 64 GB in DB2 9, so be careful with how many backup versions you keep.

The following query can help you to determine which packages have one or more backup copies:

    SELECT SP.COLLID, SP.NAME, SP.VERSION,
           COUNT(DISTINCT SPD.DTYPE) AS COPY_COUNT
      FROM SYSIBM.SYSPACKAGE SP,
           SYSIBM.SYSPACKDEP SPD
     WHERE SP.COLLID = SPD.DCOLLID
       AND SP.NAME = SPD.DNAME
     GROUP BY SP.COLLID, SP.NAME, SP.VERSION
    HAVING COUNT(DISTINCT SPD.DTYPE) > 1
     WITH UR;

This query counts the number of copies in use: for plan stability BASIC it will find 2, and for plan stability EXTENDED it will find 3. If you don't need to fall back to an older copy of the package, you can free the inactive copies with the PLANMGMTSCOPE(INACTIVE) option of the FREE PACKAGE command to keep the SPT01 space reasonable.

View regeneration errors

Some types of ALTERs (online changes) have an impact on the internal representation of a view. These views are automatically regenerated by DB2. In rare cases, this regeneration can fail and the view gets flagged as regeneration failed (STATUS = 'R' in SYSIBM.SYSTABLES). You can detect successful regeneration by specifying VERSION=800 (a fake version number used by DB2) in the WHERE clause. If you have made DDL changes to your tables, use the following query as a final step:

    SELECT CREATOR, NAME
      FROM SYSIBM.SYSTABLES
     WHERE TYPE = 'V'
       AND STATUS = 'R'
       AND TABLESTATUS = 'V'
     WITH UR;

If rows are returned by this query, regenerate the views using the following SQL:

    ALTER VIEW <viewname> REGENERATE;

If this fails, you most likely have internal errors in DB2.

How BMC can help

BMC CHANGE MANAGER for DB2 automatically regenerates views after any type of online change is made to the base table of a view. In addition, the AREO pending status is resolved by running a reorganization, and a REBIND for all dependent packages is executed. This level of automation ensures that all resources are available and performing well after any type of change.

Catalog queries

Last but not least, some final words about the DB2 catalog. The catalog (DSNDB06) and the directory (DSNDB01) are the central core of your DB2 system. Always keep them in good shape and take image copies regularly. When you query the catalog, add WITH UR, which tells DB2 to use isolation level uncommitted read (a bad practice for normal SQL statements), to ensure you don't lock the catalog data. Because changes within the DB2 catalog are infrequent, it is unlikely that UR will return incorrect results.
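As a pattern, every ad hoc catalog query should end the same way. A minimal sketch, with a hypothetical database name:

SELECT CREATOR, NAME, DBNAME, TSNAME
FROM   SYSIBM.SYSTABLES
WHERE  TYPE   = 'T'
AND    DBNAME = 'MYDB'
WITH UR;

Making WITH UR a habit keeps casual catalog browsing from blocking, or being blocked by, DDL and BIND activity.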


Chapter 6: Schema management

Making changes to DB2 on z/OS has never been easy. Whether we are talking about a system upgrade or an application enhancement, the challenge remains the same: how do you keep downtime to a minimum and reduce the risk of errors? Some DB2 changes are relatively simple and others quite complicated, but all require careful planning. This chapter discusses change management for DB2 9 and how to reduce the risks involved. It looks at the impact of DB2 change management in general and how it affects your business application availability.

What can we change and how does it affect us?

Many organizations have invested a lot of money in minimizing the impact of DB2 application and system changes. A frequent discussion topic is whether it is really possible to make DB2 changes without any impact to the system or business users. Most changes in DB2 require some amount of downtime. While many organizations would like to have 100% availability, it is not always possible. There are two kinds of downtime, planned and unplanned:

Planned downtime includes any time that we plan for applications to be offline, including application and data-related changes and system upgrades.

Unplanned downtime occurs when we have hardware or software failures or data corruption, which is often a by-product of an application or data-related change.

When things go wrong, a planned outage can extend to become a longer unplanned outage.

Data sharing

When you need to minimize downtime, consider using data sharing; it was created to enable DB2 for z/OS to provide 24x7 operations. Although data sharing attempts to achieve zero downtime, every change carries a risk of downtime and data corruption. In practice, even a small change can have a huge impact if the change is done incorrectly, possibly causing long outages or even a system malfunction.

It has been said that the z/OS mainframe platform is capable of delivering 99.999% uptime. This magic number means that you can have only about 5.3 minutes of downtime per year (365 days x 24 hours x 60 minutes x 0.001% is roughly 5.3 minutes). Keep in mind that a system IPL or a recycle of DB2 usually takes more than 5 minutes. Because an upgrade of z/OS or DB2 often requires an IPL or a DB2 recycle, the impact of such an event can be truly minimized only by using data sharing. IBM has enabled old and new versions of DB2 to tolerate each other within a data sharing environment, but the price for this scenario is quite high. Not only does it require an expensive data sharing set-up, but it also creates a complex change scenario with multiple processes and many steps. When you move to a new version of DB2 without data sharing, the impact is similar to making application and data changes, but with a much broader scope - more planning, recovery points, a fallback strategy, and a considerable planned outage.

Application and data changes rarely benefit from data sharing, because part or all of an application must be offline for the change. Even with data sharing, the application must be stopped on all members.

Anatomy of a change

A change to an application or its data schema usually includes the following processes:

Stopping the application
Making the change to the data structures and the programs
Testing the application
Resuming your business operations

Surprisingly, many users omit the testing step, or if they do test, they have no back-out scenario in place to undo the recent changes. One effective way to undo an application change is to stop DB2 and snap all disks containing DB2 data (including system files) and the program files. If the test is unsuccessful, you can snap the disks back and it looks as if the change was never made (a go-back-in-time scenario). With the proper snapshot technology in place, this provides a quick, low-impact scenario and reduces the overall risk.

Application changes: a closer look

You can make several types of application changes, and the impact of each type on the business environment (downtime and the potential for data corruption or loss) varies. These risks increase with complex changes. As the implementation scripts become more complex, they also become more error prone. If the scripts are written manually, it is easy to make a mistake, and the impact of an error can be major. When you need to implement complex changes, you need a higher level of knowledge and skill to prepare and execute them. You can make the following types of changes:

Program-only changes with no data structure changes required (for example, SQL statements and business logic)
Data structure changes with minimal impact (for example, adding a new index)
Data structure changes using online change (for example, altering columns)
High-impact changes using unload, drop, create, load, and so on (for example, deleting columns)

Let's review these one by one.

Program-only changes, no data structure changes

In this category the program logic needs to be altered or extended, but the data structures (tables, indexes, views, and so on) do not require any changes. This is often the case when the program contains only dynamic SQL and DB2 is unaware of the change. A good example of this is SAP hot packages, where the SAP program business logic changes and new SQL statements are generated. Even if the statements were previously cached, DB2 will eventually delete them from the cache with no impact on the application. These types of changes are usually done quite quickly with no real impact to the business user.

Another good example of a no-impact change is a Windows application that updates itself at the next reboot or within 24 hours (the next update moment). In this case, the roll-out of an application is gradual and might take a while. While this approach appears attractive, be aware that it is not easy to implement the undo scenario. How do you force hundreds or thousands of PCs to go back to an old version of the program at the same time? By definition, any undo scenario is time driven, which contradicts a gradual roll-out (when you have time, or at the next update moment). If you require a companywide business change to be done at a certain date and time, then you probably need a different solution.

For application programs that use static SQL, the situation is more complex. DB2 is now involved because you must BIND the programs. By using package versioning, you can preserve the package of the old program. When you carefully prepare the change, and if you can switch the programs dynamically, then this change can have zero impact on the application itself. However, the preparation process includes binding the program DBRM into a new version. BIND always has an impact because it places exclusive locks on a number of DB2 resources. Make sure that you perform binds at quiet points, when application and utility activity is relatively low. The undo scenario is the reverse of the implementation: switching the programs back to the old version is all that is required, because the old package is still there. This simple undo is a good scenario unless your data integrity has been compromised by the application change. We will discuss what could go wrong in the next section.

DB2 data structure changes with minimal impact

DB2 has a golden rule: if it is missing, then it must be NULL. This rule is what enables DB2 to have an ALTER TABLE ADD COLUMN function. The columns added using this statement have, by default, the attribute NULLABLE, because the change is reflected only in the DB2 catalog. There are no dependencies on the newly created columns, and DB2 will not invalidate any packages.
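For illustration, a minimal sketch of this kind of change; the table and column names are hypothetical:

-- Nullable by default: only the catalog is updated, and
-- existing rows simply return NULL for the new column
ALTER TABLE MYSCHEMA.CUSTOMER
  ADD COLUMN LOYALTY_LEVEL SMALLINT;

-- A NOT NULL column must carry a default, so DB2 can treat the
-- physically missing value in existing rows as a known value
ALTER TABLE MYSCHEMA.CUSTOMER
  ADD COLUMN REGION CHAR(3) NOT NULL WITH DEFAULT 'EU';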

DB2 9 has a special column type called ROW CHANGE TIMESTAMP to support the optimistic locking feature. Does this mean that adding a column to a table has no impact? The classic IT answer is: it depends. If your program does a SELECT * FROM, the newly added columns will show up, but the program may not expect them. Depending on the programming language and interface used, there could be a problem. DB2 9 has new syntax (IMPLICITLY HIDDEN) that can be added to a column definition. Columns with this attribute will not show up in a SELECT * FROM, but they can be retrieved when named explicitly. Of course, you do need program changes to use the new column in your business logic, and the program must be rolled out too. Remember the no-impact implications we just discussed? Not all no-impact changes are executed immediately.

Also, some changes affect resources that programs do not use directly. For example, if you RENAME an index, the application is not affected, because DB2 uses identifiers (not names) when it generates access paths. Most other structure changes require an unload-drop-create-load (UDCL) scenario or can be implemented with online change, which is less online than you might think.

DB2 data structure changes with impact: online change

DB2 for z/OS has always been criticized for its inflexibility when it comes to changing data structures. Even what would seem to be a simple change, like dropping a column, is not possible. IBM has made it a priority to provide more flexibility in data structure changes. DB2 V8 and DB2 9 introduced additional availability improvements in this area, and DB2 10 will add even more flexibility. But take note: IBM is very much opposed to changes that would cause data loss or compromise integrity, like truncating or dropping columns. It is very likely that scenarios like these will always need the UDCL approach on z/OS, even though DB2 for Linux/UNIX/Windows has features for dropping columns.

DB2 V8 delivered an online change function. It sounds good, but it comes with a price. The structure change is applied to the DB2 catalog but NOT to the table or index data itself. The data is always put into a pending status. Depending on the type of change, this status can be: REORP (immediately needs a REORG), RBDP (needs an immediate REBUILD), ARBDP (needs an index REBUILD as soon as possible), or AREO* (needs a REORG as soon as possible). The utility does the actual data conversion and rebuilds the data in the new format. In the worst case, the REORP status, you need a DB2 REORG with SHRLEVEL NONE (referred to as a classic or offline REORG).

The REORP status takes the data offline, and a REORG executed with this concurrency level keeps the data offline while DB2 performs a data unload, sort, and reload. The scenario looks a bit like the UDCL scenario, but because there are no real DROP-CREATE actions with a cascading effect inside the catalog, the unload-load can be done quickly by the REORG utility. All the work, like converting the data and creating the object with its new attributes, is done by the REORG utility.

For years, customers have asked IBM to supply a DROP command with an option to keep dependent objects in a pending status, thereby signaling DB2 that the dropped object will come back. This would simplify user scenarios significantly. But the impact of such a feature on the DB2 core code is so big that it is unlikely we will see it soon (if ever).

A pending status that starts with A is called an advisory status. In this case, DB2 uses what is called versioning (this is different from the versioning of a package). The data is NOT offline, but it is also not yet converted. DB2 performs a data conversion when the data is read; however, there is a price to pay in conversion time (CPU overhead), and there is a caveat. If you make changes to the data types in a table, then your program needs changes as well. Otherwise, there is a data type mismatch and DB2 will issue a negative SQL return code. DB2 will allow a data type mismatch in some cases, but keep in mind that DB2 data conversion is very CPU intensive. In many cases, DB2 will simply reject the SQL statement, and you do not want this to happen in a production environment. So in addition to the online change, if program changes are required, you need to distribute the programs again, and programs with static SQL need a REBIND. Depending on the complexity and number of these changes, data is vulnerable to integrity loss.

The high-impact change

Some changes will always require downtime and involve many resources and programs, such as a new release of a complex application or a commercial software product. For these changes, the UDCL scenario is needed. Any DROP and re-create of a DB2 object is a tricky business. The DROP has a cascading effect on all dependent objects and invalidates plans. This scenario becomes more difficult because every DB2 release adds new objects, and the rules for existing objects change. Think of the impact if you use triggers and identity columns. In addition to the dependent objects being dropped by DB2, security privileges are also revoked (unless you have external security through RACF or another security tool). Changes to the security arena in each new DB2 release make things more complex, for example the addition of roles and trusted context in DB2 9. Perhaps you say, We don't use these fancy features. Don't forget to include the word yet. In a 2002 survey, most DB2 users said they didn't use stored procedures and probably never would.

In 2010, the situation is just the reverse: most DB2 users have stored procedures, and those who don't are planning for them. So, to be realistic, your UDCL scripts will become more complicated in the years to come.

There will always be scenarios where UDCL scripts are needed; think of trying to convert a character column to a date column. This is a change DB2 will never support online. Creating and testing a UDCL script takes a lot of time and is not without risk (more about this later). You need to know what you are doing. Many organizations complain that the level of DB2 knowledge is decreasing, because many knowledgeable DBAs are retiring or reductions in workforce have cut too many people. Time and budget to train junior DBAs are often at a premium, so doing more with less also means more risk, and probably more errors and more downtime. From the earliest days of DB2, these issues have been addressed by using software tools for change management to minimize risk and improve availability. A tool like BMC CHANGE MANAGER for DB2 simplifies and automates not only changes, but also application version control, migration of applications between DB2 subsystems, and more. Because UDCL will always be required and will only become more complex, DB2 change management tools are a prudent investment for the future.
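To make the shape of such a script concrete, here is a heavily simplified sketch for a single table. All object names are hypothetical, and a real script must also re-create every dependent index, view, trigger, synonym, and authorization in the correct order:

-- 1. Save the old DDL and unload the data (UNLOAD utility or DSNTIAUL)
-- 2. Drop the tablespace; the DROP cascades to tables, indexes,
--    views, and grants
DROP TABLESPACE MYDB.TS1;
COMMIT;
-- 3. Re-create the structures with the new layout
CREATE TABLESPACE TS1 IN MYDB;
CREATE TABLE MYSCHEMA.CUSTOMER
      (CUSTNO   INTEGER     NOT NULL,
       NAME     VARCHAR(40) NOT NULL,
       BIRTHDT  DATE)                    -- was CHAR(10), now DATE
       IN MYDB.TS1;
CREATE UNIQUE INDEX MYSCHEMA.XCUST1 ON MYSCHEMA.CUSTOMER (CUSTNO);
GRANT SELECT ON MYSCHEMA.CUSTOMER TO PUBLIC;  -- re-issue revoked privileges
-- 4. LOAD the unloaded data, converting it to the new layout
-- 5. RUNSTATS, image COPY, and REBIND all invalidated packages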

The impact of change

Two important things influence the impact of a change: the time needed to implement the change and the use of DB2 versioning.

The time needed to implement most changes is much more than the time to create a simple DDL command or to create and test a UDCL script. It includes the time needed to analyze which objects will participate in the change and which follow-up actions you must take (for example, REBIND or running RUNSTATS again). Sometimes a change affects only a single object and a few programs. But when a new version of a complete business application is involved, a change may affect hundreds of objects, because they are all related to each other. It is not a fun task to create a large UDCL script; it quickly becomes extremely complex because of the need to identify and include existing dependencies. When re-creating the objects, you must be aware of the correct re-creation order; otherwise, the DDL (or worse, the application logic) could be changed, for example when triggers are not built in the correct sequence.

DB2 versioning is the backbone of online change. When you make an online change, only the catalog gets updated, not the data itself. DB2 implements versioning only to describe the physical layout. This makes structure changes less intrusive. However, it gets more complicated than you might think. Consider the need for image copies. These data sets are used for disaster and point-in-time data recovery. To provide point-in-time recovery, the versions are tracked by DB2 in the special SYSOBDS catalog table and in special system pages within the table itself. The concept of these system pages is that the data in the tablespace becomes self-describing for offline utilities, which don't have access to the DB2 catalog. This, combined with a few extra columns in various catalog tables, constitutes versioning. A tablespace can have 256 active versions, and indexes can have up to 16.

Now for the bad news: while the catalog and the physical data are not at the same version, DB2 must do a conversion when you retrieve the data. Inserts and updates are always done in the catalog's latest version format. The conversions during an SQL SELECT result in more CPU cycles being used. This extra CPU, which is pure overhead, can be 30% or more. Eliminate this overhead as soon as possible, or your end users will notice! This is why DB2 puts the tablespace into advisory REORG-pending status (AREO*) and indexes into advisory rebuild-pending status (ARBDP) after an online change. An offline or online REORG is required to rebuild your data according to the current version in the catalog. Indexes must be rebuilt, taking them offline as well. Depending on the size of the data, these utilities can take a long time to execute, making the online change less real-time than you might anticipate. And if you need to do a point-in-time recovery, it is possible you will re-introduce the overhead of resolving a catalog and data version mismatch.
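A quick way to check for these states is the DISPLAY DATABASE command; a sketch, with a hypothetical database name:

-DISPLAY DATABASE(MYDB) SPACENAM(*) RESTRICT
-DISPLAY DATABASE(MYDB) SPACENAM(*) ADVISORY

The first form reports restrictive states such as REORP and RBDP; the second reports advisory states such as AREO* and ARBDP. Running both after every online change, and scheduling the corresponding REORG or REBUILD promptly, keeps the conversion-overhead window short.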

The risk of implementing a change

We briefly discussed some risks when implementing DB2 changes, and we talked about the knowledge needed to create a UDCL script. It is easy for mistakes to go unnoticed, but the impact of a mistake can become a disaster. Consider this scenario, which seems simple to fix: you forget an index; it is dropped but never re-created. You REBIND the packages, and some transactions now have bad response times, which results in grumpy end users. You need to debug and fix the situation, but at least no data is lost. But what if you forget a trigger? Because triggers are part of the business logic, that (essential) logic is now missing. Before the error is detected and fixed, many transactions could have been processed and data might have been corrupted. Correcting this issue could take days and affect your customers. In scenarios like these, no SQL error is given and no return code is set to tell you there is a problem. Simple mistakes can turn into big disasters. Change management tools can prevent mistakes like these and eliminate a lot of risk. But even with tools, other serious risks are involved:

REBIND can result in another access path. When you REBIND an application, DB2 can opt for a different access path. Many things influence access path selection. Even if you have made no mistakes in re-creating the objects and have run RUNSTATS correctly, it is possible that DB2 takes a different access path. Because in 99% or more of all cases the new access path performs as well as or better than the original one, most DBAs consider this an acceptable risk. Software such as BMC SQL Performance for DB2 provides access path comparisons to mitigate the risk of running into a worse access path after a REBIND.

Changes from a rejected implementation (including any updates done to the data during the test) need to be backed out completely and correctly. If you implement a change and the implementation is rejected during testing, or after the application is used in production, then you must back out the change completely. Many DBAs don't create a back-out scenario, and they try to fix the error on the fly. When this proves impossible, they must manually back out the change without any guarantee that it will be successful.

This is a risky business, because basically it is never practiced. Even if you undo the structure changes, in many cases no one looks after the data. The DBA may have no knowledge of the business processes that affect the data in the application. Has it been corrupted during the test, or been updated by users? The only way of undoing everything is to go back to a time when the subsystem and the application were in sync, and the only way to do this quickly is to use snapshot technology for all disks involved.

DB2 10 improvements

IBM has made many online change enhancements in DB2 version 10. One of them is a pending DDL feature, which allows changes to physical parameters like page size, SEGSIZE, and DSSIZE. As in previous releases of DB2, the DB2 REORG utility implements these parameter changes during the next REORG. Pending DDL is different from versioning in that there is no resulting CPU overhead. With pending DDL, some changes move from the high-impact category to the online change category, but there will still be many changes that require a UDCL scenario.
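A minimal sketch of pending DDL under DB2 10, with hypothetical object names and assuming a universal tablespace; the ALTER itself completes immediately, and the next online REORG materializes the change:

ALTER TABLESPACE MYDB.TS1 SEGSIZE 64;
ALTER TABLESPACE MYDB.TS1 DSSIZE 8 G;
-- both remain pending until, for example:
-- REORG TABLESPACE MYDB.TS1 SHRLEVEL CHANGE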

Conclusion

Making changes in DB2 isn't always easy. You must be knowledgeable about DB2, and even then every change carries potential risk. If your application is complex and you exploit many DB2 features, then creating a UDCL script manually can be a nightmare. Whether you use tools or not, you must properly plan, test, and execute every change. And no matter how experienced you are or what tool you use, there will be a day when you have to undo your change. A good change strategy also means you have a good undo strategy.

Chapter 7: Simplify DB2 for z/OS schema management

This chapter discusses how BMC simplifies and automates schema management.

Change management - a necessary evil

As DB2 applications become more structurally complex, the need to add and modify data structures increases significantly. What's more, the growing complexity of the DB2 environment has made the change process itself more difficult and costly. In large, complex DB2 environments, changes imposed by an application usually result in changes to the supporting DB2 structures. Changes are varied and complex, including changing buffer pools, customizing data set sizes, adding a column, adding views, and adding or dropping an index. Without an effective process in place, change management is complex, tedious, resource intensive, time consuming, and error prone.

Most organizations use at least three environments for each of their applications: development, test, and production. For example, you could use your development system to maintain application code and perform unit tests. You could use your test system to perform system and stress tests, as well as to simulate production. Finally, your production system could be located in a single location or in multiple locations. Sometimes, multiple environments are located on the same DB2 subsystem; the same subsystem can serve multiple environments if you use different database names and different owners. Where are your environments located? Are they on the same subsystem? Do they share DASD? Are they located in different cities or in different parts of the world?

In some organizations, multiple teams are responsible for developing or maintaining an application in a development environment. If changes to the data structures are needed by each team, you might need to synchronize the data structures in the development environment before you migrate the entire structure to the test environment. How will you propagate these changes? Do you manage your change requests based on release cycles or by date? For example, you could create your development environment from your production environment by migrating the production environment, or you could synchronize your development environment with production by comparing the two.

Regardless of how your environments are structured, BMC Database Administration for DB2 simplifies and automates change management by combining a comprehensive change management tool with high-speed utilities. The change management process usually requires image copies, unload and load utilities, and SQL execution for the DB2 object itself and its dependencies. BMC Database Administration for DB2 provides the following technologies:

BMC CATALOG MANAGER for DB2
BMC CHANGE MANAGER for DB2
BMC COPY PLUS for DB2
BMC LOADPLUS for DB2
BMC UNLOAD PLUS for DB2
BMC SNAPSHOT UPGRADE FEATURE

BMC CHANGE MANAGER for DB2

BMC CHANGE MANAGER for DB2 simplifies and automates the change process so that you can effectively manage a dynamic DB2 environment. It analyzes the change request, determines the required changes, changes the structures, moves the data, and tracks the changes across all of the DB2 environments - all while properly managing ERP and CRM applications. BMC CHANGE MANAGER for DB2 ensures that a planned change is implemented correctly, the first time, without costly mistakes. BMC CHANGE MANAGER for DB2:

Migrates data structure changes across multiple databases and subsystems
Determines changes to data structures and migrates those changes to one or more copies of the data structures
Captures and records structure definitions and data within a DB2 subsystem at a point in time (to establish a full-recovery baseline)
Recovers structures and data to a point in time defined by a full-recovery baseline within a DB2 subsystem

Compares two versions of structure definitions to:
   determine the changes necessary to upgrade one version to another
   selectively apply changes to copies of the data structures while preserving the uniqueness of each copy
Uses data modeling tool outputs to determine the changes to existing DB2 application structures
Reduces the volume of information needed to communicate changes by using a proprietary BMC Software language called Change Definition Language (CDL) to transmit the change information
Feeds changes that are made on a remote system back to the development system
Defines and stores reusable rules for making changes
Uses SQL-like statements to update, delete, and migrate data structures

BMC CHANGE MANAGER for DB2 provides full support for databases, table spaces, tables, indexes, and much more. When you specify changes for any of these data structures, BMC CHANGE MANAGER for DB2 automatically propagates the changes to any dependent objects. For example, if you change the name of a table, BMC CHANGE MANAGER for DB2 creates a corresponding change in the indexes, synonyms, and other dependent objects that reference the table under its former name.

BMC CHANGE MANAGER for DB2 uses BMC Software or IBM utilities in worklists when required. However, by using installed BMC Software utilities instead of IBM utilities, you can significantly enhance the performance of executing the worklists. BMC Software utilities run faster, provide additional features, and may reduce the number of steps in a worklist. Using the BMC CHANGE MANAGER for DB2 compare and baseline features, you can synchronize or version databases and implement changes without disrupting local modifications. When coupled with the high-speed utilities in BMC Database Administration for DB2, BMC CHANGE MANAGER for DB2 performs faster changes and migrations. For instance, when you implement development changes into the production system, you copy an application's data structures from one DB2 subsystem to another. BMC Database Administration for DB2 automates all of the necessary tasks:

Analyzes the impact of the changes

Issues warnings and error messages for conflicts
Identifies dependent objects
Determines the least expensive implementation strategy
Creates a worklist
Unloads the data if data movement is involved
Executes the changes to the DB2 structures
Loads the data

BMC CHANGE MANAGER for DB2 enables you to create data structures on a DB2 subsystem from structures that already exist on the same subsystem or on a different subsystem. This process of creating data structures is called migration. In the migration process, you create a new set of data structures, using the existing data structures as a template. The migration process enables you to:

Copy an application's data structures from a development subsystem to a test subsystem
Copy an initial version (or perhaps a major update) of an application's data structures to one or more production subsystems
Migrate an entirely new set of structures, because the new version of an application's data structures is substantially different from prior versions
Migrate data along with the data structures
Use migration as the preferred installation method when you install the initial version of an application

Manage changes

With BMC CHANGE MANAGER for DB2, you can migrate entire data structures or just the changes.

Migrate data structures

The process of migrating data structures and data within the same subsystem or to a different subsystem is virtually the same. Both processes involve the following steps:

Create a migrate-type work ID, an outbound migrate profile, or both, to define the scope of the migration, the migration options, and the change rules for the migration
Specify the data structures and their dependents
Analyze the requests in the work ID
Generate a worklist (a BMC internal language used to drive the generation of utilities and commands)
Generate the execution JCL
Execute the worklist

When the worklist is executed, BMC CHANGE MANAGER for DB2 unloads the data, creates the new data structures, loads the data, and runs any other utilities. When you migrate data and data structures to a different subsystem, the worklist is executed in two phases:

Phase 1 executes on the sending subsystem and unloads the data.
Phase 2 executes on the receiving subsystem, creates the new data structures, loads any migrated data, and runs any utilities.

Migrate data structure changes only

Change migration - the process of migrating only data structure changes (instead of entire structures) to another copy of the structure on the same or a different DB2 subsystem - enables you to update structures that have already been migrated to another copy. However, you must be able to retain any structure modifications that were made locally after the structures were migrated. You may need to make local modifications, or variations from the control definition of the application, to meet the following needs:

Performance and usage tuning. Local tuning is a complex process that is based on the specific system involved and on the performance and use of the application in that environment. Because separate sites often have different amounts of data and different transaction loads, as well as different CPU and DB2 environments, local tuning of application structures is common.

Security. Authorization requirements are associated with object ownership and, in some cases, with object names. As a result, the authorizations for one DB2 subsystem can vary

significantly from the authorizations of another subsystem running the same application. In this situation, each subsystem requires local modifications to the application.

Additional uses of data. Often, production systems need to use data beyond the fixed scope of an application. For example, the data in a payroll application might be needed for reports that the finance department develops. DB2 enables users to locally generate additional data structures (such as indexes, synonyms, views, aliases, and authorizations) to meet these needs.

Because local modification of applications is so common in DB2, you need an application management strategy that enables you to manage the basic elements of the application globally, without compromising the elements that vary locally. BMC CHANGE MANAGER for DB2 migrates global changes while retaining local modifications: it determines which changes were made to the control version of the application and applies only those changes to other versions, preserving all local modifications. BMC CHANGE MANAGER for DB2 verifies the accuracy of the proposed changes before applying them. Because changes are allowed at the lowest level of an object's definition, BMC CHANGE MANAGER for DB2 preserves and converts data as necessary to accommodate the requested changes. It also propagates changes to all dependent structures. For example, when you change a column name, BMC CHANGE MANAGER for DB2 propagates the change to any index and foreign key definitions that use the column, even though those definitions might be unknown to the sending DB2 subsystem.

Recover data structures

Changes to data structures may not produce the effect that you intended. For example, a change may significantly increase an application's response time, in which case you need to restore the previous data structures. While DB2 has features for logging, backup, and recovery of data, it has no similar features for data structures. If you make an unwanted structure change, or if your changes fail or are unusable, you must fall back to the previous definition. To do so, you must have saved the structure definitions and data of the previous version. BMC CHANGE MANAGER for DB2 enables you to capture a set of data structures from the DB2 catalog or a DDL file and store the set in a baseline. A baseline can contain only data structures, or data structures with their associated data.

Data structure recovery entails establishing baselines and recovering to them. You can reload the data that is stored in a full-recovery baseline after the structures have been recovered, or you can reload the data that exists in those structures at the time of their recovery. BMC CHANGE MANAGER for DB2 can convert current data to match the restored data structures.

Record and control changes

Managing change requires that you know what changes have been made to an application's data structures during different periods of time. The BMC CHANGE MANAGER for DB2 compare feature generates a file that shows the differences between two sets of data structures.

Feed back changes

Data structure changes might not flow in order from the development system, through testing, and into production. Because changes to the basic application definition can occur at any point in the cycle, you must be able to transmit changes back to the development system or to the control node. Transmitting changes from a remote system to the development system (or to the control node) is called change feedback. To feed back changes, BMC CHANGE MANAGER for DB2 enables you to identify the changes made and apply them at the development or control system.

BMC CHANGE MANAGER for DB2 components

Most BMC CHANGE MANAGER for DB2 components can run in the foreground or as batch jobs.

Baseline

The baseline component captures a set of DB2 structure definitions from the DB2 catalog, a DDL file, or a migrated worklist at a specific point in time. The captured set of data structure definitions is called a baseline. If the structures are defined in the DB2 catalog, a baseline can also capture the data and authorizations associated with those structures. A baseline of both data structures and data is called a full-recovery baseline.

Baselines act as control points during data structure management. They establish a static set of data structures for an application version. If you make a change with unwanted results, you can restore the data structures; if you established a full-recovery baseline, you can also restore the data. You can restore the data structures back to a prior baseline and convert the current data to those structures. You can also use baselines for version comparison. For example, when you initially install an application, create a baseline. At a later time, compare the baseline and the DB2 catalog to create a CDL file that shows any modifications made to the application's data structures since the initial installation.

Compare

The compare component determines the differences between two sets of data structures and generates a CDL file. The generated CDL file contains all of the changes that the comparison found between the two sets of data structures. You can compare data structures that are stored in a DDL file, a work ID, a baseline, a worklist, or the DB2 catalog. The CDL file contains the changes that, if applied to the primary input source, would produce the data structures of the secondary input source. CDL has advantages over DDL: it allows more types of modifications to data structures and, unlike DDL, can retain local modifications to those structures. You can use the generated CDL file as follows:

To process the file as a set of change requests for the current subsystem
To save the file as a record of the changes made
To import the file to a different subsystem in order to update a separate version of the data structures

CM/PILOT

CM/PILOT automates DB2 change management tasks and makes it easy to implement mass or repetitive changes. With CM/PILOT, you do not need to decide which BMC CHANGE MANAGER for DB2 processes are required for a task or the sequence in which you need to complete them; CM/PILOT provides scripts to guide you through the process. You can copy scripts and modify them to meet your needs.

CM/PILOT enables you to create tasks that can be done later by someone else or through job scheduling. By reusing these tasks, you can ensure that a change management task is done the same way every time. You can use two CM/PILOT scripts to create SQL-like Data Manipulation Language (DML) statements to update, delete, and migrate data structures.

CDL and DDL files

CDL is a BMC Software proprietary language that you use to specify changes to DB2 data structures. You can use CDL files to transmit data structure changes between subsystems or to provide a record of changes to data structures. BMC CHANGE MANAGER for DB2 can also generate executable SQL DDL statements (in SPUFI format) when generating a baseline report.

The power of parallel performance

Implementing parallel technology in hardware and software:

Reduces maintenance windows
Increases application availability
Improves CPU resource utilization
Balances dynamic workloads
Improves performance
Adds flexibility and scalability

BMC Database Administration for DB2 provides a high-performance answer to the shrinking maintenance window by taking advantage of both hardware and software parallelism in your DB2 for z/OS environment. It provides parallel execution of change management tasks: BMC CHANGE MANAGER for DB2 supports parallelism when executing the commands and statements in the worklist. Worklist parallelism works within a single LPAR or across a sysplex environment, and it automates processes within the job, requiring no intervention or additional manipulation.

BMC CHANGE MANAGER for DB2 identifies which work can be performed in parallel. A unit of work (UOW) defines the commands, which can be grouped for similar types of tasks, such as unload, load, check, copy, and statistics. Each UOW has a begin and end sequence, enabling a restart if any part of the process is interrupted - without manual tracking or intervention. A single point of control monitors the status of each UOW within BMC CHANGE MANAGER for DB2. The high-speed utilities (BMC LOADPLUS for DB2, BMC UNLOAD PLUS for DB2, and BMC COPY PLUS for DB2) are invoked in the parallel execution of the worklist, further enhancing performance.

Exploiting hardware and software capabilities

For typical long-running tasks, worklist parallelism divides the work into multiple parallel tasks that run across multiple machines within the sysplex, on more than one IBM z/OS image. By dividing the work into multiple address spaces on multiple z/OS systems, BMC CHANGE MANAGER for DB2 avoids memory constraints in any single address space and allocates the work to images that have adequate CPU capability. As a result, work is dynamically distributed to underutilized processors, workloads are balanced, elapsed time for processing improves, and CPU usage is minimally affected. ERP/CRM environments, where thousands of tables exist, benefit especially: worklist parallelism can be used to unload, create, load, and copy structures and data to create new environments quickly and efficiently.

Summary

To manage complex DB2 environments, it is imperative to have a comprehensive change management solution. BMC Database Administration for DB2 simplifies and automates change management processes, implementing changes with integrity and minimal outages. It saves significant time and effort by automating the process of migrating database changes to other environments while providing fallback capability.


BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. DB2 and z/OS are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. All other trademarks or registered trademarks are the property of their respective owners. © 2011 BMC Software, Inc. All rights reserved.

BUSINESS RUNS ON IT. IT RUNS ON BMC SOFTWARE. Business thrives when IT runs smarter, faster and stronger. That's why the most demanding IT organizations in the world rely on BMC Software across distributed, mainframe, virtual and cloud environments. Recognized as the leader in Business Service Management, BMC offers a comprehensive approach and unified platform that helps IT organizations cut cost, reduce risk and drive business profit. For the four fiscal quarters ended March 31, 2011, BMC revenue was approximately $2.1 billion. Visit for more information.


More information

Tech Notes. Corporate Headquarters EMEA Headquarters Asia-Pacific Headquarters 100 California Street, 12th Floor San Francisco, California 94111

Tech Notes. Corporate Headquarters EMEA Headquarters Asia-Pacific Headquarters 100 California Street, 12th Floor San Francisco, California 94111 Tech Notes Faster Application Development via Improved Database Change Management Integrating Database Change Management with Software Development to Reduce Errors, Re-Work, and Testing Efforts Embarcadero

More information

Data Quality Assessment. Approach

Data Quality Assessment. Approach Approach Prepared By: Sanjay Seth Data Quality Assessment Approach-Review.doc Page 1 of 15 Introduction Data quality is crucial to the success of Business Intelligence initiatives. Unless data in source

More information

How To Use Adobe Software For A Business

How To Use Adobe Software For A Business EXHIBIT FOR MANAGED SERVICES (2013V3) This Exhibit for Managed Services, in addition to the General Terms, the OnDemand Exhibit, and any applicable PDM, applies to any Managed Services offering licensed

More information

Enhance visibility into and control over software projects IBM Rational change and release management software

Enhance visibility into and control over software projects IBM Rational change and release management software Enhance visibility into and control over software projects IBM Rational change and release management software Accelerating the software delivery lifecycle Faster delivery of high-quality software Software

More information

Introduction to Enterprise Data Recovery. Rick Weaver Product Manager Recovery & Storage Management BMC Software

Introduction to Enterprise Data Recovery. Rick Weaver Product Manager Recovery & Storage Management BMC Software Introduction to Enterprise Data Recovery Rick Weaver Product Manager Recovery & Storage Management BMC Software Contents Introduction...1 The Value of Data...2 Risks to Data Assets...2 Physical Loss...2

More information

Many DBA s are being required to support multiple DBMS s on multiple platforms. Many IT shops today are running a combination of Oracle and DB2 which

Many DBA s are being required to support multiple DBMS s on multiple platforms. Many IT shops today are running a combination of Oracle and DB2 which Many DBA s are being required to support multiple DBMS s on multiple platforms. Many IT shops today are running a combination of Oracle and DB2 which is resulting in either having to cross train DBA s

More information

Database High Availability. Solutions 2010

Database High Availability. Solutions 2010 Database High Availability DB2 9 DBA certification Solutions 2010 exam 731 P.O. Box 200, 5520 AE Eersel, The Netherlands Tel.:(+31) 497-530190, Fax: (+31) 497-530191 E-mail: [email protected] Disclaimer The

More information

Oracle Database 12c: Performance Management and Tuning NEW

Oracle Database 12c: Performance Management and Tuning NEW Oracle University Contact Us: 1.800.529.0165 Oracle Database 12c: Performance Management and Tuning NEW Duration: 5 Days What you will learn In the Oracle Database 12c: Performance Management and Tuning

More information

Mary E. Shacklett President Transworld Data

Mary E. Shacklett President Transworld Data Transworld Data Mary E. Shacklett President Transworld Data For twenty-five years, Transworld Data has performed technology analytics, market research and IT consulting on every world continent, including

More information

SERVER VIRTUALIZATION IN MANUFACTURING

SERVER VIRTUALIZATION IN MANUFACTURING SERVER VIRTUALIZATION IN MANUFACTURING White Paper 2 Do s and Don ts for Your Most Critical Manufacturing Systems Abstract While the benefits of server virtualization at the corporate data center are receiving

More information

Backups and Maintenance

Backups and Maintenance Backups and Maintenance Backups and Maintenance Objectives Learn how to create a backup strategy to suit your needs. Learn how to back up a database. Learn how to restore from a backup. Use the Database

More information

HP Quality Center. Upgrade Preparation Guide

HP Quality Center. Upgrade Preparation Guide HP Quality Center Upgrade Preparation Guide Document Release Date: November 2008 Software Release Date: November 2008 Legal Notices Warranty The only warranties for HP products and services are set forth

More information

Would-be system and database administrators. PREREQUISITES: At least 6 months experience with a Windows operating system.

Would-be system and database administrators. PREREQUISITES: At least 6 months experience with a Windows operating system. DBA Fundamentals COURSE CODE: COURSE TITLE: AUDIENCE: SQSDBA SQL Server 2008/2008 R2 DBA Fundamentals Would-be system and database administrators. PREREQUISITES: At least 6 months experience with a Windows

More information

Maximum Availability Architecture. Oracle Best Practices For High Availability

Maximum Availability Architecture. Oracle Best Practices For High Availability Preventing, Detecting, and Repairing Block Corruption: Oracle Database 11g Oracle Maximum Availability Architecture White Paper May 2012 Maximum Availability Architecture Oracle Best Practices For High

More information

Software Testing. Knowledge Base. Rajat Kumar Bal. Introduction

Software Testing. Knowledge Base. Rajat Kumar Bal. Introduction Software Testing Rajat Kumar Bal Introduction In India itself, Software industry growth has been phenomenal. IT field has enormously grown in the past 50 years. IT industry in India is expected to touch

More information

Ingres Backup and Recovery. Bruno Bompar Senior Manager Customer Support

Ingres Backup and Recovery. Bruno Bompar Senior Manager Customer Support Ingres Backup and Recovery Bruno Bompar Senior Manager Customer Support 1 Abstract Proper backup is crucial in any production DBMS installation, and Ingres is no exception. And backups are useless unless

More information

Perform-Tools. Powering your performance

Perform-Tools. Powering your performance Perform-Tools Powering your performance Perform-Tools With Perform-Tools, optimizing Microsoft Dynamics products on a SQL Server platform never was this easy. They are a fully tested and supported set

More information

Best Practices Report

Best Practices Report Overview As an IT leader within your organization, you face new challenges every day from managing user requirements and operational needs to the burden of IT Compliance. Developing a strong IT general

More information

Tips and Best Practices for Managing a Private Cloud

Tips and Best Practices for Managing a Private Cloud Deploying and Managing Private Clouds The Essentials Series Tips and Best Practices for Managing a Private Cloud sponsored by Tip s and Best Practices for Managing a Private Cloud... 1 Es tablishing Policies

More information

Oracle 11g New Features - OCP Upgrade Exam

Oracle 11g New Features - OCP Upgrade Exam Oracle 11g New Features - OCP Upgrade Exam This course gives you the opportunity to learn about and practice with the new change management features and other key enhancements in Oracle Database 11g Release

More information

Draft Copy. Change Management. Release Date: March 18, 2012. Prepared by: Thomas Bronack

Draft Copy. Change Management. Release Date: March 18, 2012. Prepared by: Thomas Bronack Draft Copy Change Management Release Date: March 18, 2012 Prepared by: Thomas Bronack Section Table of Contents 10. CHANGE MANAGEMENT... 5 10.1. INTRODUCTION TO CHANGE MANAGEMENT... 5 10.1.1. PURPOSE OF

More information

Optimizing Your Database Performance the Easy Way

Optimizing Your Database Performance the Easy Way Optimizing Your Database Performance the Easy Way by Diane Beeler, Consulting Product Marketing Manager, BMC Software and Igy Rodriguez, Technical Product Manager, BMC Software Customers and managers of

More information

MICHIGAN AUDIT REPORT OFFICE OF THE AUDITOR GENERAL THOMAS H. MCTAVISH, C.P.A. AUDITOR GENERAL

MICHIGAN AUDIT REPORT OFFICE OF THE AUDITOR GENERAL THOMAS H. MCTAVISH, C.P.A. AUDITOR GENERAL MICHIGAN OFFICE OF THE AUDITOR GENERAL AUDIT REPORT THOMAS H. MCTAVISH, C.P.A. AUDITOR GENERAL ...The auditor general shall conduct post audits of financial transactions and accounts of the state and of

More information

Restore and Recovery Tasks. Copyright 2009, Oracle. All rights reserved.

Restore and Recovery Tasks. Copyright 2009, Oracle. All rights reserved. Restore and Recovery Tasks Objectives After completing this lesson, you should be able to: Describe the causes of file loss and determine the appropriate action Describe major recovery operations Back

More information

www.dotnetsparkles.wordpress.com

www.dotnetsparkles.wordpress.com Database Design Considerations Designing a database requires an understanding of both the business functions you want to model and the database concepts and features used to represent those business functions.

More information

ORACLE DATABASE 11G: COMPLETE

ORACLE DATABASE 11G: COMPLETE ORACLE DATABASE 11G: COMPLETE 1. ORACLE DATABASE 11G: SQL FUNDAMENTALS I - SELF-STUDY COURSE a) Using SQL to Query Your Database Using SQL in Oracle Database 11g Retrieving, Restricting and Sorting Data

More information

Backup and Recovery in Laserfiche 8. White Paper

Backup and Recovery in Laserfiche 8. White Paper Backup and Recovery in Laserfiche 8 White Paper July 2008 The information contained in this document represents the current view of Compulink Management Center, Inc on the issues discussed as of the date

More information

IBM Software Information Management. Scaling strategies for mission-critical discovery and navigation applications

IBM Software Information Management. Scaling strategies for mission-critical discovery and navigation applications IBM Software Information Management Scaling strategies for mission-critical discovery and navigation applications Scaling strategies for mission-critical discovery and navigation applications Contents

More information

DBAs having to manage DB2 on multiple platforms will find this information essential.

DBAs having to manage DB2 on multiple platforms will find this information essential. DB2 running on Linux, Unix, and Windows (LUW) continues to grow at a rapid pace. This rapid growth has resulted in a shortage of experienced non-mainframe DB2 DBAs. IT departments today have to deal with

More information

Microsoft SQL Database Administrator Certification

Microsoft SQL Database Administrator Certification Microsoft SQL Database Administrator Certification Training for Exam 70-432 Course Modules and Objectives www.sqlsteps.com 2009 ViSteps Pty Ltd, SQLSteps Division 2 Table of Contents Module #1 Prerequisites

More information

Guideline for stresstest Page 1 of 6. Stress test

Guideline for stresstest Page 1 of 6. Stress test Guideline for stresstest Page 1 of 6 Stress test Objective: Show unacceptable problems with high parallel load. Crash, wrong processing, slow processing. Test Procedure: Run test cases with maximum number

More information

Running a Workflow on a PowerCenter Grid

Running a Workflow on a PowerCenter Grid Running a Workflow on a PowerCenter Grid 2010-2014 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise)

More information

COURCE TITLE DURATION. Oracle Database 11g: Administration Workshop I

COURCE TITLE DURATION. Oracle Database 11g: Administration Workshop I COURCE TITLE DURATION DBA 11g Oracle Database 11g: Administration Workshop I 40 H. What you will learn: This course is designed to give students a firm foundation in basic administration of Oracle Database

More information

ITIL by Test-king. Exam code: ITIL-F. Exam name: ITIL Foundation. Version 15.0

ITIL by Test-king. Exam code: ITIL-F. Exam name: ITIL Foundation. Version 15.0 ITIL by Test-king Number: ITIL-F Passing Score: 800 Time Limit: 120 min File Version: 15.0 Sections 1. Service Management as a practice 2. The Service Lifecycle 3. Generic concepts and definitions 4. Key

More information

Handling Hyper-V. In this series of articles, learn how to manage Hyper-V, from ensuring high availability to upgrading to Windows Server 2012 R2

Handling Hyper-V. In this series of articles, learn how to manage Hyper-V, from ensuring high availability to upgrading to Windows Server 2012 R2 White Paper Handling Hyper-V In this series of articles, learn how to manage Hyper-V, from ensuring high availability to upgrading to Windows Server 2012 R2 White Paper How to Make Hyper-V Virtual Machines

More information

Best practices for data migration.

Best practices for data migration. IBM Global Technology Services June 2007 Best practices for data migration. Methodologies for planning, designing, migrating and validating data migration Page 2 Contents 2 Executive summary 4 Introduction

More information

FIREWALL CLEANUP WHITE PAPER

FIREWALL CLEANUP WHITE PAPER FIREWALL CLEANUP WHITE PAPER Firewall Cleanup Recommendations Considerations for Improved Firewall Efficiency, Better Security, and Reduced Policy Complexity Table of Contents Executive Summary... 3 The

More information

Designing, Optimizing and Maintaining a Database Administrative Solution for Microsoft SQL Server 2008

Designing, Optimizing and Maintaining a Database Administrative Solution for Microsoft SQL Server 2008 Course 50400A: Designing, Optimizing and Maintaining a Database Administrative Solution for Microsoft SQL Server 2008 Length: 5 Days Language(s): English Audience(s): IT Professionals Level: 300 Technology:

More information

Introduction to Database as a Service

Introduction to Database as a Service Introduction to Database as a Service Exadata Platform Revised 8/1/13 Database as a Service (DBaaS) Starts With The Business Needs Establish an IT delivery model that reduces costs, meets demand, and fulfills

More information

ORACLE DATABASE 10G ENTERPRISE EDITION

ORACLE DATABASE 10G ENTERPRISE EDITION ORACLE DATABASE 10G ENTERPRISE EDITION OVERVIEW Oracle Database 10g Enterprise Edition is ideal for enterprises that ENTERPRISE EDITION For enterprises of any size For databases up to 8 Exabytes in size.

More information

The Future of PostgreSQL High Availability Robert Hodges - Continuent, Inc. Simon Riggs - 2ndQuadrant

The Future of PostgreSQL High Availability Robert Hodges - Continuent, Inc. Simon Riggs - 2ndQuadrant The Future of PostgreSQL High Availability Robert Hodges - Continuent, Inc. Simon Riggs - 2ndQuadrant Agenda / Introductions / Framing the High Availability (HA) Problem / Hot Standby + Log Streaming /

More information

IBM DB2: LUW Performance Tuning and Monitoring for Single and Multiple Partition DBs

IBM DB2: LUW Performance Tuning and Monitoring for Single and Multiple Partition DBs coursemonster.com/au IBM DB2: LUW Performance Tuning and Monitoring for Single and Multiple Partition DBs View training dates» Overview Learn how to tune for optimum performance the IBM DB2 9 for Linux,

More information

Test Data Management Best Practice

Test Data Management Best Practice Test Data Management Best Practice, Inc. 5210 Belfort Parkway, Suite 400 Author: Stephanie Chace Quality Practice Lead [email protected], Inc. 2011 www.meridiantechnologies.net Table of

More information

Selecting the Right Change Management Solution Key Factors to Consider When Evaluating Change Management Tools for Your Databases and Teams

Selecting the Right Change Management Solution Key Factors to Consider When Evaluating Change Management Tools for Your Databases and Teams Tech Notes Selecting the Right Change Management Solution Key Factors to Consider When Evaluating Change Management Tools for Your Databases and Teams Embarcadero Technologies July 2007 Corporate Headquarters

More information

SQL Server Database Administrator s Guide

SQL Server Database Administrator s Guide SQL Server Database Administrator s Guide Copyright 2011 Sophos Limited. All rights reserved. No part of this publication may be reproduced, stored in retrieval system, or transmitted, in any form or by

More information

11. Oracle Recovery Manager Overview and Configuration.

11. Oracle Recovery Manager Overview and Configuration. 11. Oracle Recovery Manager Overview and Configuration. Abstract: This lesson provides an overview of RMAN, including the capabilities and components of the RMAN tool. The RMAN utility attempts to move

More information

How To Secure A Database From A Leaky, Unsecured, And Unpatched Server

How To Secure A Database From A Leaky, Unsecured, And Unpatched Server InfoSphere Guardium Ingmārs Briedis ([email protected]) IBM SW solutions Agenda Any questions unresolved? The Guardium Architecture Integration with Existing Infrastructure Summary Any questions

More information

SQL Server 2008 Designing, Optimizing, and Maintaining a Database Session 1

SQL Server 2008 Designing, Optimizing, and Maintaining a Database Session 1 SQL Server 2008 Designing, Optimizing, and Maintaining a Database Course The SQL Server 2008 Designing, Optimizing, and Maintaining a Database course will help you prepare for 70-450 exam from Microsoft.

More information