DB2 Advanced Recovery Solutions for LUW
Bill Minor, IBM (bminor@ca.ibm.com)
Information Management
Acknowledgements and Disclaimers: Availability. References in this presentation to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates. The workshops, sessions and materials have been prepared by IBM or the session speakers and reflect their own views. They are provided for informational purposes only, and are neither intended to, nor shall have the effect of being, legal or other guidance or advice to any participant. While efforts were made to verify the completeness and accuracy of the information contained in this presentation, it is provided AS-IS without warranty of any kind, express or implied. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this presentation or any other materials. Nothing contained in this presentation is intended to, nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use of IBM software. All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer. Nothing contained in these materials is intended to, nor shall have the effect of, stating or implying that any activities undertaken by you will result in any specific sales, revenue growth or other results. Copyright IBM Corporation 2011. All rights reserved. U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. IBM, the IBM logo, ibm.com, DB2 and DB2 Merge Backup for LUW, DB2 Recovery Expert for LUW and High Performance Unload for DB2 are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. 
If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at Copyright and trademark information at www.ibm.com/legal/copytrade.shtml. Other company, product, or service names may be trademarks or service marks of others.
Agenda Advanced Recovery Solutions for DB2 LUW DB2 Recovery Expert for LUW DB2 Merge Backup for LUW Optim High Performance Unload
DB2 LUW Recovery: Methodologies
- DB2 Database or Table Space BACKUP and RESTORE (database partition granularity)
- HADR: database failover from Primary to Standby
- Replication-based solutions: capture changes in the database and replicate them to another system
- Others...
IBM DB2 Advanced Recovery Solutions for LUW
- DB2 Merge Backup V1.1 (Backup): improve the speed and efficiency of your backup process; minimize application impact
- DB2 Recovery Expert V3.1 (Recover): reduce disaster recovery time and maximize database availability; eliminate data errors before they compound into costly business mistakes; track and report data changes in response to auditing requests
- Optim High Performance Unload V4.2 (Unload): extract large amounts of data quickly and with minimal impact on system productivity; perform full data and system migrations from one DB2 instance to another
InfoSphere Warehouse Advanced Enterprise Edition
- Customers are looking for more complete capabilities in their warehouse packaging, and for the next step in portfolio value to grow the business at above-market rates
- Solution: Advanced Editions for Warehousing
- InfoSphere Warehouse Model Packs for a fast out-of-the-box experience
  - Ready-to-go Customer, Campaign, and Inventory Analytics Packs
  - Adds additional value and business content to InfoSphere Warehouse
- Optim tools to complete the warehouse packaging: DB2 Recovery Expert, DB2 Merge Backup, High Performance Unload
  - Addresses the cost of backup and recovery management
  - Increases the speed of backing up or exporting large warehouse objects
  - Enhances the ability to recover from user or application error
- InfoSphere Data Architect: enhances the enterprise design capabilities of the warehouse
- Optim Query Tuner: lowers the cost of management and ownership with new tuning capabilities
- Simplifies the selling and purchasing process: a complete portfolio solution for warehousing, with everything contained in one product, maximizing the competitive value of the IBM warehouse package
DB2 Recovery Expert for LUW
Recovery Expert V3.1 (eGA June 17, 2011)
- Recovery Expert (RE) is a data analysis and recovery tool for managing DB2 systems
- Data analysis is facilitated through processing of DB2 log files
- Recovery is facilitated through DB2 log files and the available recovery assets (i.e., backups)
- There is minimal disruption to DB2
- In addition to the full license, there is the option of installing a 30-day trial license version
- Goals: reduce disaster recovery (DR) time; eliminate user data errors; track and report data changes for auditing purposes
Recovery Expert provides a simple and intuitive interface to manage and report on database changes through log analysis, and to facilitate recovery from either logical or physical data errors.
What Can Recovery Expert for DB2 LUW Do?
Recovery Expert can complement, simplify, and extend your recovery capability:
- Evaluates the available recovery assets to determine which recovery paths can recover the object to the requested point-in-time
- Dropped table recovery without a table space level restore
- Data change reporting through log mining (Who? What? When? / Quiet Time)
- Generates DML for recreating, rolling back, or going forward (SQL Undo/Redo)
- Easy-to-use environment for less experienced DBAs
Recovery Expert for LUW: Possible Usage Scenarios
Recovery Expert: Supported Configurations
- Two types of client interfaces: web browser console or Java command line
- Supports DB2 LUW V9.1, V9.5, V9.7
- Supported OS versions: AIX (5.3, 6.1, and 7.1) 64-bit; Linux on xSeries (RHEL 5, SLES 10 and 11) 32- and 64-bit; Solaris 10 (SPARC) 64-bit; Windows (7 Professional/Enterprise, XP, Vista, Server 2003/2008 all editions) 32- and 64-bit; HP-UX 11 (added in FP1)
- Install and configure with ease
  - Lightweight, small-footprint embedded web container automatically installed
  - Database server components distributed with a remote installation technique
- Recovery assets
  - Backups can be online or offline, database or table space, compressed or noncompressed
  - Backups and logs can reside on disk, TSM (5.5 or higher), or vendor storage
  - Merge Backup images (added in FP1)
DB2 Recovery Expert for LUW: Client-Server Architecture
[Architecture diagram: a web console client (Mozilla Firefox 3.6, Internet Explorer 8.0) and an optional JDBC command-line client connect to the DB2 Recovery Expert server, which runs an embedded web container and holds the license key and a datastore repository; DB server components are deployed to each managed DB2 instance and its databases.]
IBM DB2 Recovery Expert: GUI Startup View Starting point for all RE actions
Recovery Expert: Task Manager
- Object History: manage history information about the objects in your database
- Log Analysis: mine database logs and generate reports, as well as Redo or Undo SQL
- Recovery: perform a recovery plan for a set of database objects
- Sessions: track currently running and completed Object History, Log Analysis, and Recovery operations
- Specifications: save a set of user-specified values for Object History, Log Analysis, or Recovery and run the same task at a later time
DB2 Recovery Expert: Log Analysis
- Generate reports for a database that provide details about transaction activity and the changes that have occurred in particular tables or table spaces within a specific time frame
- Ad-hoc auditing capability to determine who modified data and from what client
- Ability to determine which application is performing which data changes
- Produce SQL to Undo or Redo statements that have already been run
  - Undo SQL: produces SQL statements to reverse the changes made by the original logged database operations
  - Redo SQL: produces SQL statements to re-apply the changes made by the original logged database operations
- Can optionally include DDL statements
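As a hedged illustration of the Undo/Redo concept (the table name and values below are hypothetical, not taken from a real RE session), each logged operation maps to an inverse statement. The sketch writes out what generated Undo and Redo SQL for a single logged INSERT typically looks like:

```shell
# Illustration only: Undo/Redo SQL for one logged INSERT.
# BILLM.ORG2 and the values are hypothetical; Recovery Expert generates the
# real SQL from the log records themselves.
cat > undo_redo_example.sql <<'EOF'
-- Logged operation: INSERT INTO BILLM.ORG2 (DEPTNUMB, DEPTNAME) VALUES (10, 'Head Office')

-- Undo SQL reverses the change:
DELETE FROM BILLM.ORG2 WHERE DEPTNUMB = 10 AND DEPTNAME = 'Head Office';

-- Redo SQL re-applies the change:
INSERT INTO BILLM.ORG2 (DEPTNUMB, DEPTNAME) VALUES (10, 'Head Office');
EOF
```

An Undo for an UPDATE would restore the before-image column values, which is why full before-image logging (a later slide's logging considerations) matters to RE.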
DB2 Recovery Expert: Log Analysis Reports
- Report filtering by: application ID, application name, table name and ID, table space name and ID, Log Sequence Number (LSN), transaction ID, partition group, authorization ID, schema
  - Include or exclude filtering, and wildcards
  - Post-filtering to apply different filters to an existing Log Analysis report
- Operations filtering by: Insert, Update, Delete
- Transaction status: Committed, Partial, Uncommitted, Rolled Back
- Report types: Summary or Summary + Detail
- Report formats: CSV, XML, text
Recovery Expert for LUW: Log Analysis GUI Illustration
Log analysis report with Undo generated to back out transactions:
1. Tables: shows which tables any transactions have affected
2. Transaction: select the transaction you wish to review
3. Single DML operation: see each DML operation for that transaction
4. Undo: review the Undo SQL for each DML operation, and run it if desired
Sample Log Analysis Report

STATISTICS
----------
NGNAME            UPDATES  INSERTS  DELETES
IBMCATGROUP          1966      160        0
IBMDEFAULTGROUP         0       24        0

TSNAME            UPDATES  INSERTS  DELETES
IBMDB2SAMPLEREL         0       24        0
SYSCATSPACE           100      159        0
SYSTOOLSPACE         1866        1        0

LOCAL TRANSACTION ID  COMMIT LSN        DATE        TIME             STATE      AUTHID  GLOBAL TRANSACTION ID
...
00000001FB2C          00000000059D8536  2011-10-03  15:25:49.000001  COMMITTED

NGNAME           TSNAME           TABLE OWNER  TABLE NAME  ACTION  PARTITION  LSN
IBMDEFAULTGROUP  IBMDB2SAMPLEREL  BILLM        ORG2        I       0          00000000055F23A7
IBMDEFAULTGROUP  IBMDB2SAMPLEREL  BILLM        ORG2        I       0          00000000055F240B
IBMDEFAULTGROUP  IBMDB2SAMPLEREL  BILLM        ORG2        I       0          00000000055F246B
...

TABLE              UPDATES  INSERTS  DELETES
BILLM.ORG2               0       24        0
SYSIBM.SYSCOLDIST        0      150        0
...

TOTAL STATISTICS
----------------
TOTAL INSERTS :   184
TOTAL UPDATES : 1,966
TOTAL DELETES :     0
DB2 Recovery Expert: Recovery
- Automatically determines the most efficient method for any given recovery situation
- Recover database objects to a user-specified point-in-time
- Avoid resource-intensive disaster recovery (i.e., database or table space restores)
- Save recovery scenarios for reuse
- Recover dropped objects (RE maintains a database object history)
- Extract consistent data from an online backup image via the CLP
- Roll back unwanted data changes throughout a database (Undo/Redo SQL), keeping the system online during this time
- Automatic recovery of related objects: referential-integrity referenced tables, indexes, views, triggers, functions, procedures
Recovery: Recovery Plan Perspective
Recovery Expert: GUI Recovery Illustration
Recovering a table called NEW_TABLE along with its dependent objects:
1. Dropped object: find one or more dropped objects to restore (in this example, a table)
2. Dependencies: all dependencies automatically determined and ready to be restored
3. Scenarios: multiple scenarios can be determined; review them for the one that works best for you
4. Steps: each scenario is made up of steps, and you can review each step before running
Recovery Expert: Command Line Interface
- In addition to the RE GUI, one can install the Java-based client
- Interactive shell mode, or accepts an input file of commands to perform in batch mode
- Allows many of the web console functions, but not all
- Principal functions: (1) batch Log Analysis, (2) data extraction
DB2 Recovery Expert for LUW: Command Line Processor Client (Batch Mode)

Example #1: Log Analysis report generation
  aryclp.bat -f RE1.cfg
  RE1.cfg:
    set current connection
    run la for database gsdb
    export report from session

Example #2: update the SLR
  aryclp.bat -f RE2.cfg
  RE2.cfg:
    set current connection
    run slr for database gsdb perform update

Sample report output:
  ************************* TOP OF DATA ***************
  DB2 LOG ANALYSIS - GENERAL REPORT
  BUILT ON 2011/07/07 AT 11:55:41
  *********************************
  FILTERS
  -------
  DATABASE : GSDB
  ACTION   : U I D
  OPTIONS  : IGNORE CATALOG TABLES
  [...]
  TABLE              UPDATES  INSERTS  DELETES
  GOSALES.TAB_00001       55
  GOSALES.TAB_00002      147        4
  GOSALES.TAB_00003       33        8
  GOSALES.TAB_00004       88       50
  [...]
  TOTAL SUMMARY
  -------------
  TOTAL UPDATES :   5,065
  TOTAL INSERTS : 300,034
  TOTAL DELETES :      99
Extract of Consistent Data From an Online Backup Image
Example: extract table data for BILLM.STAFF from an online database backup of SAMPLE

Command input file dbextract.re:
  set current connection clp_connect
  run ox for database sample from 20111003152048 format del tables billm.staff serverdir d:\wrk\backups
  reset connection

D:\> aryclp.bat -f dbextract.re
  Create session 8 at timestamp (server time) Oct 5, 2011 2:58:00 PM
  Prepare file for export: "d:\wrk\backups\[2].[15].del".
  Processing log file: S0000014.log
  Data export from backup complete for specified table [2/15].
  -----------------------------------------------------------
  Aryox command completed successfully.

[2].[15].del:
  10,"Sanders",20,"Mgr ",7,98357.50,
  20,"Pernal",20,"Sales",8,78171.25,612.45
  30,"Marenghi",38,"Mgr ",5,77506.75,
  40,"O'Brien",38,"Sales",6,78006.00,846.55
  50,"Hanes",15,"Mgr ",10,80659.80,
  60,"Quigley",38,"Sales",,66808.30,650.25
  70,"Rothman",15,"Sales",7,76502.83,1152.00
Data Extract to Any Point in Time
After extracting a data object from an online backup image up to the end-of-backup marker, use Log Analysis to generate REDO SQL statements to bring the data to any point in time.

CLP control file example:
  set current connection clp_connect
  run la for database sample report full from 20110930000000 to 20111004000000 sql redo tables "BILLM.ORG2"
  export SQL from session @ into d:\wrk\backups\
  reset connection
Recovery Expert Logging Considerations
- Not-logged database activity is invisible to log analysis:
  - ALTER TABLE ... NOT LOGGED INITIALLY
  - LOAD data is not logged
  - LOB columns with the NOT LOGGED attribute
- Time to run Log Analysis depends on the number of database objects to be analyzed and the number of logs to be processed
- Registry variable DB2_LOGGING_DETAIL=APPLINFO: DB2 will write an additional informational log record for each transaction
- UPDATE logging: DB2 LUW can write various formats for UPDATE log records, and an UPDATE log record does not always contain the full before and after image of a changed row
- Table attribute DATA CAPTURE CHANGES: the full before image of the record is logged, which is good for RE (and also relevant to Currently Committed processing)
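A minimal sketch of applying the two logging settings above. The database name SAMPLE and table BILLM.ORG2 are hypothetical placeholders; the commands are staged into a script for DBA review rather than executed directly, and should be tried on a test instance first.

```shell
# Sketch: stage the logging settings discussed above for review
# (hypothetical database SAMPLE and table BILLM.ORG2).
cat > enable_re_logging.sh <<'EOF'
#!/bin/sh
# Write an additional informational log record per transaction,
# giving Log Analysis extra application detail:
db2set DB2_LOGGING_DETAIL=APPLINFO

# Log the full before image of updated rows for this table, so RE can
# always build Undo/Redo SQL for UPDATE operations:
db2 connect to SAMPLE
db2 "ALTER TABLE BILLM.ORG2 DATA CAPTURE CHANGES"
db2 connect reset
EOF
chmod +x enable_re_logging.sh
```

Note that a registry variable set with db2set takes effect at the next instance restart.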
DB2 Merge Backup for LUW
IBM DB2 Merge Backup V1.1 (eGA May 20, 2011)
- A command line utility that merges full DB2 backups with DB2 incremental/delta backups to build a new full DB2 backup image
- Eliminates the need to take further full DB2 backups
- Online and offline backup images are supported
- Merge Backup (MBK) can be run on the database server or on a standalone (remote) machine
- MBK uses the recovery history file to determine which backups are available for a merge
- Value proposition:
  - Reduce backup-intensive resource usage on the database server, i.e., eliminate full DB2 backups by backing up only what needs to be backed up
  - Reduce the number of logs required during recovery, thereby shrinking recovery time
  - Simplify the recovery process through restores of up-to-date full backup images
DB2 Incremental and Delta Backups
- Full backup: a complete copy of the database at a given point in time
- Incremental: backs up all changes since the last full backup
- Incremental delta: backs up all changes since the last backup of any type
Example weekly schedules:
  Sunday: full; Mon.-Sat.: incremental backups; Sunday: full
  Sunday: full; Mon.-Sat.: incremental delta backups; Sunday: full
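The weekly patterns above can be sketched as DB2 BACKUP commands. The database name SAMPLE and target path /backups are hypothetical, and TRACKMOD must be ON before incremental backups are possible; the commands are staged into a script for review:

```shell
# Sketch of the weekly schedules above for a hypothetical database SAMPLE.
cat > weekly_backup.sh <<'EOF'
#!/bin/sh
# One-time prerequisite: track page modifications between backups.
db2 update db cfg for SAMPLE using TRACKMOD ON

# Sunday: full online backup.
db2 backup db SAMPLE online to /backups

# Mon.-Sat., incremental strategy (changes since the last full backup):
db2 backup db SAMPLE online incremental to /backups

# ...or Mon.-Sat., delta strategy (changes since the last backup of any type):
db2 backup db SAMPLE online incremental delta to /backups
EOF
chmod +x weekly_backup.sh
```

Either schedule leaves a chain of images in the history file that Merge Backup can later combine into a new full image.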
DB2 Merge Backup for LUW
[Diagram (graphics by Burt Vialpando): three merge patterns, each driven by the recovery history file:
1. Full + incremental (use the first time to get started)
2. Full + delta(s) (use the first time to get started)
3. Merged + incremental or delta(s) (use on an ongoing basis after that)]
Merge Backup: Configuration
- Supported as of DB2 LUW V9.7 FP4 and DB2 V9.5 FP8
- Support for all DB2-supported OS platforms
- MBK is installed on the database server
  - For DPF, once per physical machine
  - Cannot be installed on distributed or shared filesystems such as NFS
- To run MBK in standalone mode, install MBK on any machine; it doesn't have to be the database server, and the machine doesn't even require DB2
- Install is very quick and clean, with only two prompts:
  - Choose the install directory or accept the default (/opt/ibm/db2tools/)
  - Accept the license agreement
Merge Backup Command Line Syntax
Merge Backup: Control Files
- Example: db2mbk -f <control_file_name>
- Control files are ASCII text files that contain a list of MBK options
- Provide additional flexibility to perform complex operations; good for repetitive tasks
- Command line options can override control file options, so a control file can be reused for different configurations without modifying it
  - Example: use the same control file for different databases: db2mbk -d <other_db_name> -f <std_control_file_name>
- There is a simple structure to every control file (example on the following slide)
Merge Backup: Use of a Control File

ctl_file2.cfg:
  MERGE DATABASE SAMPLE PART (1)
  OUTPUT USE TSM OPEN 2 SESSIONS COMPRESS NO

db2mbk -f ctl_file2.cfg

What happens:
- DB2 Merge Backup merges all the DB2 incremental backups created since the last full DB2 backup on database partition 1
- The backup is not compressed
- DB2 Merge Backup stores the new full DB2 backup image with Tivoli Storage Manager, using two sessions
See the Merge Backup User's Guide for further examples and details on the control file language.
Merge Backup: Standalone Mode
[Diagram: Merge Backup on a separate server reads and writes backups (full, incremental/delta) on shared storage (NFS/storage managers); it determines the list of backups, extracts incrementals/deltas to the remote server, and inventories the images.]
- Merge Backup is installed on a different server from where the backups are taken
- Minimizes CPU and I/O usage on the DB2 server by offloading the merge to another machine
- All of the backups involved in the merge must be accessible from the current machine, as copies or through shared file systems
- Execute db2mbk with the -s option for standalone mode
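A sketch of a standalone-mode run, staged as a script for review. The hostname, mount point, and control file name are hypothetical; the two key points from the slide are that the images must be reachable from the standalone machine and that db2mbk is invoked with -s:

```shell
# Sketch: stage a standalone merge on a non-database machine
# (hypothetical host "dbserver", paths, and control file name).
cat > run_standalone_merge.sh <<'EOF'
#!/bin/sh
# Make the DB2 server's backup images visible here (shared filesystem):
mount -t nfs dbserver:/backups /mnt/backups

# Run the merge locally, off the DB2 server's CPU and I/O path:
db2mbk -f merge_sample.cfg -s
EOF
chmod +x run_standalone_merge.sh
```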
Optim High Performance Unload for LUW
Optim High Performance Unload
- High Performance Unload (HPU) is a standalone command line utility for DB2 LUW databases
- HPU performs high-speed, bulk data unloads
  - Unloads can be from backup images (full DB2 or Merge Backup images, or incrementals)
  - Unloads can be directly from the database container files (bypassing the database engine)
- HPU key use-cases:
  - Recoverability
  - Bulk data conversion
  - Test data management (selective unload with sampling and predicate-based queries)
  - Repartitioning/redistribution for warehouse databases
  - Migrating a DB2 database from one DB2 instance to another
Additional HPU Features
- HPU can unload data to flat files, tape devices, or named pipes
  - With pipes, there is no need for additional storage to land the data
- Unload file format types: ASC, DEL and DELIMITED, DSNTIAUL, IXF, XML
- Each file can be directed to different servers and directories, each with a different file format
- As part of unload processing, HPU can automatically generate the LOAD command required to populate the data back into DB2
- HPU is typically 10-12 times faster than EXPORT (some tests have benchmarked over 20 times faster)
- Can be configured and used as a DB2 stored procedure
- Supports DB2 LUW V9.8
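The named-pipe case above can be sketched as follows (database, table, and path names are hypothetical): HPU writes DEL-format rows into a FIFO while DB2 LOAD reads from it concurrently, so the data never lands on disk. Staged as a script for review:

```shell
# Sketch: unload through a named pipe so no intermediate file is needed
# (hypothetical SAMPLE/TARGETDB and BILLM.STAFF; db2hpu options as on the
# execution slide).
cat > pipe_unload.sh <<'EOF'
#!/bin/sh
mkfifo /tmp/staff.pipe

# Writer: HPU unloads the table into the pipe in the background.
db2hpu -d SAMPLE -t BILLM.STAFF -o /tmp/staff.pipe &

# Reader: LOAD consumes the pipe as HPU produces rows.
db2 connect to TARGETDB
db2 "LOAD FROM /tmp/staff.pipe OF DEL INSERT INTO BILLM.STAFF"
db2 connect reset

rm /tmp/staff.pipe
EOF
chmod +x pipe_unload.sh
```

The pipe trades restartability for space: if either side fails, the transfer starts over, but no staging storage is ever consumed.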
Optim High Performance Unload Execution
- db2hpu command line syntax is very similar in look and feel to Merge Backup
- Unload of a table from a database in three possible ways:
  - Command line: db2hpu -d <dbname> -t <tname> -o <unload_file>
  - Command line with control file: db2hpu -d <dbname> -f <control_filename>
  - Stored procedure: db2 "call db2hpu('/opt/ibm/db2tools/highperformanceunload42', '-d <dbname> -o <unload_file>', 'unload tablespace select * from <tname>; format del;', ?, ?)"
- Control files have a general structure of blocks that contain keywords, options, and values (several examples on the following slides; see also the User's Guide)
Optim High Performance Unload Problem 1: Unload Data From a DB2 Table Space
- Unload table BURT_V.MYTABLE from database DB_PROD1 (table space TS1)
- HPU processes directly from the table space data files (fastest); the DB2 engine processes the SQL if it is too complex for HPU
- Output file to be fixed-length ASCII
- Convert American dates to European dates
- Data is a sample of every 10th record (10% sampling)
- Create a DB2 LOAD command file to load this data later
[Diagram (graphics by Burt Vialpando): HPU reads the TS1 data files of DB_PROD1 and writes mytable.out (ASC format, European dates, 10% sampling) plus mytable.load (DB2 LOAD commands).]
Optim High Performance Unload Solution 1: Unload Data From a DB2 Table Space

HPU command line (invoke HPU with a control file, no overrides):
  db2hpu -f C:\ControlFiles\UnloadMyTable.ctr

UnloadMyTable.ctr (control file):
  GLOBAL CONNECT TO DB_PROD1 DB2 YES QUIESCE YES LOCK YES;
  UNLOAD TABLESPACE
    SELECT * FROM BURT_V.MYTABLE;
    LOADFILE ("unload_dir\mytable.load")
    OUTFILE ("unload_dir\mytable.out" REPLACE)
    INTERVAL 10
    FORMAT ASC
    OPTIONS DATE DATE_E;

What each option does:
- GLOBAL CONNECT TO DB_PROD1: connect to the DB_PROD1 database
- DB2 YES: allow the DB2 engine to process the statement if the SELECT is too complex for HPU
- QUIESCE YES LOCK YES: ensure a consistent unload by flushing the buffer pools and disallowing edits to the table
- UNLOAD TABLESPACE SELECT ...: unload the specific table
- LOADFILE: generate a DB2 LOAD command for this data
- OUTFILE: pathname and filename of the output
- INTERVAL 10: unload every 10th record
- FORMAT ASC: fixed-length ASCII format
- OPTIONS DATE DATE_E: convert American dates to European dates
Optim High Performance Unload Problem 2: Unload Data From a DB2 Backup
- Unload table BURT_V.MYTABLE (the database is not available)
- Have HPU process from a database backup file, using a specific backup (20090126190035), not just the latest one
- Create an XML-formatted output file, forcing the XML row tag to be "mydat"
- Create a message file with the unload details, overriding any control block message file
[Diagram: database DB_PROD1 (table space TS1, table BURT_V.MYTABLE) is unavailable; HPU reads backup 20090126190035 and writes mytable.out (XML format, row tag=mydat) and mytable.msg (unload message file).]
Optim High Performance Unload Solution 2: Unload Data From a DB2 Backup

HPU command line (the outfile and message file specified here override any set in a control block):
  db2hpu -f C:\ControlFiles\UnloadMyBackup.ctr -o unload_dir\mytable.out -m messages\mytable.msg

UnloadMyBackup.ctr (control file):
  GLOBAL CONNECT TO DB_PROD1 DB2 NO
    USING BACKUP CATALOG DB_PROD1 FROM "backup_dir" TAKEN AT 20090126190035;
  UNLOAD TABLESPACE
    SELECT * FROM BURT_V.MYTABLE;
    FORMAT XML ROWTAGSTRING "mydat";

What each option does:
- DB2 NO: force HPU to process, not the DB2 engine
- USING BACKUP CATALOG ... FROM ...: use a specific catalog rather than the active one
- TAKEN AT 20090126190035: use a specific backup rather than the latest one
- FORMAT XML ROWTAGSTRING "mydat": format the output in XML with a specific row tag
Optim High Performance Unload Problem 3: Migrate All the Data in a DB2 Database
- Migrate all the data from database DB_OLD to DB_NEW, from Host1 to Host2, and from instance InstOld to InstNew
- Specify a work file directory (NewSystemWorkDir); intermediate files are in DEL format
- HPU can also perform the migration without intermediate files, if desired
[Diagram: HPU reads table spaces TS1 (Table1-Table3) and TS2 (Table4-Table6) of DB_OLD on Host1/InstOld, lands DEL-format files in NewSystemWorkDir, and populates the same table spaces and tables in DB_NEW on Host2/InstNew.]
Optim High Performance Unload Solution 3: Migrate All the Data in a DB2 Database

Create the new instance and database with the appropriate objects:
  db2icrt InstNew
  db2look -d DB_OLD -e > db2look.sql
  db2 create database DB_NEW
  db2 connect to DB_NEW
  db2 -tvf db2look.sql

HPU command line (perform on instance InstNew):
  db2hpu -i InstNew -f C:\ControlFiles\MigrateAll.ctr

MigrateAll.ctr (control file):
  GLOBAL CONNECT TO DB_OLD UMASK 022;
  MIGRATE DATABASE
    TARGET ENVIRONMENT (INSTANCE "InstNew" ON host2 IN DB_NEW)
    WORKING IN ("NewSystemWorkDir")
    FORMAT MIGRATION;

What each option does:
- GLOBAL CONNECT TO DB_OLD: the source database
- UMASK 022: force operating system (OS) permissions on all files
- TARGET ENVIRONMENT: target instance, host, and database
- WORKING IN: working directory for the output files
- FORMAT MIGRATION: use the default migration format (DEL)
Optim High Performance Unload Problem 4: Repartition a Data Warehouse Table
- Unload a 4-partition table (OldTable in database DW1, nodes 0-3) from the source database
- Split it into 2 files using the target database's partition map (pmap)
- Include a header in the first file (the header contains the partition map, partition number, and partition key columns)
- Use /tmp as the working directory
- Load the output files (NewTable.000, NewTable.001) into the target partitioned table (NewTable in database DW2, nodes 0-1)
Optim High Performance Unload Solution 4: Repartition a Data Warehouse Table

HPU command line:
  db2hpu -f C:\ControlFiles\RepartTable.ctr

RepartTable.ctr (control file):
  GLOBAL CONNECT TO DW1 DB2 NO;
  UNLOAD DATABASE
    SELECT * FROM OLDTABLE;
    TARGET TABLE (NEWTABLE IN DW2) HEADER FIRST
    OUTPUT ("/tmp/newtable.%{target_node}")
    TARGET KEYS (CURRENT PARTS (ALL))
    FORMAT DEL;

What each option does:
- GLOBAL CONNECT TO DW1 DB2 NO: connect to the source database; HPU processes directly
- UNLOAD DATABASE SELECT ...: select from the source table
- TARGET TABLE (NEWTABLE IN DW2): point to the new table in the target database
- HEADER FIRST: put the partitioning info in the first output file
- OUTPUT ("/tmp/newtable.%{target_node}"): intermediate file directory, with a template keyword to indicate the node
- TARGET KEYS (CURRENT PARTS (ALL)): load all partitions
Optim High Performance Unload: Plug-in to Data Studio
Example: HPU commands built and invoked from Data Studio
developerWorks Best Practice Paper
Customer Experiences with DB2 Advanced Recovery Solutions
What Do Customers Use DB2 Merge Backup For?

Large Wall Street customer
- Pain points: builds data warehouses in sizes of 2 TB to 200 TB; database backup times must be extremely fast and must not affect application processing; can take delta/incremental backups, but merging these with the full backup takes too much time
- How DB2 Merge Backup helped: take the fastest possible type of backup while keeping the strategy in line with SLAs; combine incremental and delta backups with an older full backup to create a new full backup; improved response time for applications

Large US insurance company
- Pain points: full backups are too large and do not complete on time because of TSM errors
- Wants to look at DB2 Merge Backup as an alternative strategy for maintaining a more consistent, up-to-date backup and for dealing with large backup files

"From my testing, DB2 Merge Backup provides a faster way to capture delta backups compared to traditional techniques and helps ensure a fast recovery. As data volume grows and the time required for backup/recovery increases, the more important DB2 Merge Backup becomes. It reduces the time required for backup and recovery operations." (Jean-Marc Blaise, Technical Architect)
Optim High Performance Unload Testing at a US Bank
"I had a chance to test DB2 HPU this weekend with this production database expansion, and the results are pretty astounding."

Key customer requirements:
- Move large amounts of data from one system to another
- Speed and accuracy with minimal disruption to the DB2 for LUW system
- Execute concurrently with production applications while maintaining system availability
- Support a data warehouse environment with 100s of connections and over 13,000 tables

IBM solution: Optim High Performance Unload provides
- Fast unload capability, 10-12 times faster than regular EXPORT
- Parallel processing for higher speeds
- Unmatched data extraction capabilities, including recovery and migration

"That's about 16 times faster without using any special features, just out-of-the-box performance."
What Do Customers Use DB2 Recovery Expert For?
Customer pain points addressed by DB2 Recovery Expert:
- Processing logs and reporting activity in a database partitioned environment
- Dropped table recovery
- Recovering a set of existing tables to a prior point-in-time

"DB2 Recovery Expert allows me to reduce my recovery time to seconds or minutes rather than hours. The web user interface is simple and easy to use. I can monitor data changes and isolate accidental or undesired changes made to the database tables, and quickly undo those changes with minimal disruption to the business." (Jean-Marc Blaise, Technical Architect)
Recovery Expert Example: Australian Federal Government
- A number of DBMSs across multiple platforms: DB2 z/OS, DB2 LUW, Oracle, SQL Server, Model 204, Sybase, IMS, Teradata, ObjectStar
- The primary DB2 LUW platform is Solaris 10 running DB2 Version 9.7
- Have run DB2 Recovery Expert since V2.1 and participated in the Data Recovery Tools beta program over the last year
- The main requirement was to have some of the facilities available on the mainframe provided on the midrange
Recovery Expert Usage (continued)
Investigate issues with log activity:
- Databases experiencing large, unexplained spikes in log activity
- Transactions not committing and holding logs
- Recovery Expert allows us to determine which programs/users/tables were involved
- This can be done after the event, without requiring monitoring to be on at the time
Recovery Expert Usage (continued)
Undo individual program updates:
- Rogue program or user updates can be reversed without the need to back out all work
Recover individual objects:
- Recover a dropped object such as a stored procedure without impacting other DB2 objects
IBM Delivers Advanced Recovery for DB2 Linux, UNIX and Windows
- Backup (IBM DB2 Merge Backup for LUW): back up efficiently, maintain recent full backups, accelerate recovery
- Unload (IBM Optim High Performance Unload for DB2 LUW): pure unload performance, minimal application impact
- Recover (IBM DB2 Recovery Expert for LUW): targeted recovery
Advanced Recovery Technology
References
- Article: http://www.ibm.com/developerworks/data/library/techarticle/dm-1106db2advancedrecovery/index.html
- Video demo: http://www.ibm.com/developerworks/offers/lp/demos/summary/im-optimbackuprecovery.html?ca=drs-
- DB2 Advanced Recovery Solutions brochure: http://www-01.ibm.com/software/sw-library/en_us/detail/w269469i30015y81.html
- DB2 Recovery Expert: http://www-01.ibm.com/software/data/db2/linux-unix-windows/tools/db2-recovery-expert/
- DB2 Recovery Expert trial page: http://www.ibm.com/developerworks/downloads/im/db2recovery/?s_cmp=rnav
- Optim High Performance Unload: http://www-01.ibm.com/software/data/optim/high-performance-unload-db2-luw/
- Optim High Performance Unload trial page: http://www.ibm.com/developerworks/downloads/im/optimhighperf/index.html
- DB2 Merge Backup: http://www-01.ibm.com/software/data/db2/linux-unix-windows/tools/db2-merge-backup-luw/
- DB2 Merge Backup trial page: https://www14.software.ibm.com/webapp/iwm/web/pick.do?source=swg-ibmdb2mbtb&s_cmp=rnav
Where Can I Get the Code? http://www-01.ibm.com/software/data/db2/linux-unix-windows/tools/db2-recovery-expert/
developerworks http://www.ibm.com/developerworks/downloads/im/db2recovery/
Online Product Documentation http://publib.boulder.ibm.com/infocenter/db2ars/v1r1/
Thank You!