Recommendations from DB2 Health Check Studies: Speed of Recovery
1 Recommendations from DB2 Health Check Studies: Speed of Recovery
John Campbell & Florence Dubois, IBM DB2 for z/OS Development, IBM Corporation
2 Disclaimer/Trademarks
Copyright IBM Corporation. All rights reserved. U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
THE INFORMATION CONTAINED IN THIS DOCUMENT HAS NOT BEEN SUBMITTED TO ANY FORMAL IBM TEST AND IS DISTRIBUTED AS IS. THE USE OF THIS INFORMATION OR THE IMPLEMENTATION OF ANY OF THESE TECHNIQUES IS A CUSTOMER RESPONSIBILITY AND DEPENDS ON THE CUSTOMER'S ABILITY TO EVALUATE AND INTEGRATE THEM INTO THE CUSTOMER'S OPERATIONAL ENVIRONMENT. WHILE IBM MAY HAVE REVIEWED EACH ITEM FOR ACCURACY IN A SPECIFIC SITUATION, THERE IS NO GUARANTEE THAT THE SAME OR SIMILAR RESULTS WILL BE OBTAINED ELSEWHERE. ANYONE ATTEMPTING TO ADAPT THESE TECHNIQUES TO THEIR OWN ENVIRONMENT DOES SO AT THEIR OWN RISK. ANY PERFORMANCE DATA CONTAINED IN THIS DOCUMENT WAS DETERMINED IN VARIOUS CONTROLLED LABORATORY ENVIRONMENTS AND IS FOR REFERENCE PURPOSES ONLY. CUSTOMERS SHOULD NOT ADAPT THESE PERFORMANCE NUMBERS TO THEIR OWN ENVIRONMENTS AS SYSTEM PERFORMANCE STANDARDS. THE RESULTS THAT MAY BE OBTAINED IN OTHER OPERATING ENVIRONMENTS MAY VARY SIGNIFICANTLY. USERS OF THIS DOCUMENT SHOULD VERIFY THE APPLICABLE DATA FOR THEIR SPECIFIC ENVIRONMENT.
Trademarks: IBM, the IBM logo, ibm.com, and DB2 are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web under "Copyright and trademark information".
2012 IBM Corporation
3 Objectives
- Introduce and discuss the issues commonly found
- Share the experience from customer health check studies
- Share the experience from live customer production incidents
- Provide proven recommended best practice
- Encourage proactive behaviour
4 Agenda
Part I - Continuous availability
- Continuous availability vs. fast failover
- Coupling Facility structure duplexing
- Automated restart
- Multiple DB2 data sharing groups
- Application considerations
Part II - Special considerations for WLM, CICS and DDF
- WLM policy settings
- Balanced design across CICS and DB2
- Distributed workload and DDF
5 Agenda
Part III - Speed of recovery
- Disaster recovery
- Mass data recovery
Part IV - Operations
- Preventative software maintenance
- DB2 virtual and real storage
- Performance and exception-based monitoring
6 Disaster Recovery Context
Disaster recovery = complete site failure
Common problems:
- Disconnect between IT and business
  - Lack of transparency about what IT can actually deliver
  - Lack of IT understanding of the real business needs and expectations
  - Lack of consistency between continuous availability and disaster recovery objectives
- Compromised recoverability
  - Consistency of the remote copy with disk-based replication is not guaranteed
  - Forgetting to delete all CF structures owned by the data sharing group
- Lack of automation and optimisation to recover in a timely manner
- Limited or no testing
- No Plan B
7 Disaster Recovery
Need to understand the real business requirements and expectations
- These should drive the infrastructure, not the other way round
- Quality of Service requirements for applications
  - Availability: restart quickly? Mask failures? High availability? Continuous operations? Continuous availability?
  - Performance
In case of a disaster:
- Recovery Time Objective (RTO): how long can your business afford to wait for IT services to be resumed following a disaster?
- Recovery Point Objective (RPO): what is the acceptable time difference between the data in your production system and the data at the recovery site (consistent copy)? In other words, how much data is your company willing to lose/recreate following a disaster?
8 Disaster Recovery
Critical concept for disk-based replication solutions: dependent writes
- The start of a write operation is dependent upon the completion of a previous write to a disk in the same storage subsystem or a different storage subsystem
- For example, a typical sequence of write operations for a database update transaction:
  1. An application makes an update and the data page is updated in the buffer pool
  2. The application commits and the log record is written to the log on storage subsystem 1
  3. At a later time, the update to the table space is externalized to storage subsystem 2
  4. A diagnostics log record is written to mark that the page has been externalized successfully
- Data consistency for the secondary DASD copy = I/O consistency
  - Order of the dependent writes is preserved
- DB2 data consistency = application consistency
  - Re-established through normal DB2 warm restart recovery mechanisms
  - Requires an I/O consistent base
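The dependent-write idea above can be made concrete with a small sketch. This is an illustrative model only, not a DB2 interface: it replays a hypothetical trace of writes as they reached disk and checks that no data page was externalized before the commit log record of its transaction, which is exactly the property a frozen secondary copy needs in order to be restartable.

```python
# Hypothetical sketch: verify that a replayed I/O trace preserves
# dependent-write ordering (commit log record before page externalization).
# The trace format and names are illustrative, not a real DB2 interface.

def consistent(trace):
    """trace: list of (op, txn) tuples in the order the writes hit disk."""
    logged = set()
    for op, txn in trace:
        if op == "log_commit":
            logged.add(txn)
        elif op == "page_write" and txn not in logged:
            # A data page reached disk before its commit log record:
            # a mirror frozen at this instant would not restart cleanly.
            return False
    return True

good = [("log_commit", "T1"), ("page_write", "T1")]
bad = [("page_write", "T2"), ("log_commit", "T2")]
print(consistent(good), consistent(bad))  # True False
```

A replication solution that preserves write order across all devices keeps every prefix of the trace consistent in this sense; one that does not can leave the secondary in the `bad` state.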
9 Metro Mirror
a.k.a. PPRC (Peer-to-Peer Remote Copy)
Disk-subsystem-based synchronous replication:
1. Write to primary volume (disk subsystem cache and NVS)
2. Metro Mirror sends the write I/O to the secondary disk subsystem
3. Secondary disk subsystem signals write complete when the updated data is in its cache and NVS
4. Primary disk subsystem returns Device End (DE) status to the application
Limited distance: an over-extended distance can impact the performance of production running on the primary site
10 Metro Mirror
Misconception #1: synchronous replication always guarantees data consistency of the remote copy
Answer: not by itself
- Metro Mirror operates at the device level (like any other DASD replication function)
- Volume pairs are always consistent
- But in a rolling disaster, cross-device (or cross-box) consistency is not guaranteed
- An external management method is required to maintain consistency
11 Metro Mirror
Traditional example of multiple disk subsystems
In a real disaster (fire, explosion, earthquake), you cannot expect your complex to fail at the same moment. Failures will be intermittent and gradual, and the disaster will occur over seconds or even minutes. This is known as the rolling disaster.
- A network failure causes inability to mirror data to the secondary site
- Devices suspend on one disk subsystem, but writes are allowed to continue (*)
- Updates continue to be sent from the other disk subsystems
- Result: devices on the secondary disk subsystems are not consistent
(*) Volume pairs are defined with CRIT(NO) - recommended
12 Metro Mirror
Example of a single disk subsystem with an intermittent problem with the communication links, or a problem with the secondary DASD configuration:
1. A temporary communications problem causes the P1-S1 pair to suspend, e.g. a network or SAN event
2. During this time no I/O occurs to P2, so it remains duplex
3. Subsequent I/O to P2 will be mirrored to S2
4. S1 and S2 are now not consistent, and if the problem was the first indication of a primary disaster we are not recoverable
13 Metro Mirror
Consistency Group function combined with external automation:
1. A network failure causes inability to mirror data to the secondary site
2. The volume pair defined with CGROUP(Y) that first detects the error goes into an extended long busy state and IEA494I is issued
3. Automation is used to detect the alert and issue the CGROUP FREEZE command (*) to all LSS pairs
4. CGROUP FREEZE (*) deletes the Metro Mirror paths, puts all primary devices in long busy and suspends the primary devices
5. Automation issues the CGROUP RUN command (*) to all LSS pairs, releasing the long busy. Secondary devices are still suspended at a point in time
(*) or equivalent
Without automation, it can take a very long time to suspend all devices, with intermittent impact on applications over this whole time
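The essential property of the automation is ordering: every LSS pair must be frozen before any pair is released with RUN, otherwise the secondaries are not mutually consistent. A minimal sketch of that control flow, with `issue_cmd` as a stand-in for whatever really drives GDPS or equivalent automation (the LSS pair names are made up):

```python
# Illustrative FREEZE/RUN automation sketch: on the first suspend alert
# (IEA494I), freeze every LSS pair before issuing any RUN.
# issue_cmd is a stub; real automation would drive GDPS or equivalent.

issued = []

def issue_cmd(cmd, lss_pair):
    issued.append((cmd, lss_pair))

def on_suspend_alert(lss_pairs):
    for pair in lss_pairs:      # freeze the whole consistency group first
        issue_cmd("CGROUP FREEZE", pair)
    for pair in lss_pairs:      # only then release the long busy
        issue_cmd("CGROUP RUN", pair)

on_suspend_alert(["LSS00-LSS80", "LSS01-LSS81"])
print(len(issued))  # 4
```

Any interleaving that issues a RUN before the last FREEZE would reopen the window in which one subsystem mirrors while another does not.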
14 Metro Mirror
Misconception #1: synchronous replication always guarantees data consistency of the remote copy
Answer: in case of a rolling disaster, Metro Mirror alone does not guarantee data consistency at the remote site
- Need to exploit the Consistency Group function AND external automation to guarantee data consistency at the remote site
- Ensure that if there is a suspension of a Metro Mirror device pair, the whole environment is suspended and all the secondary devices are consistent with each other
- Supported for both planned and unplanned situations
15 Metro Mirror
Misconception #2: synchronous replication guarantees zero data loss in a disaster (RPO=0)
Answer: not by itself
- The only way to ensure zero data loss is to immediately stop all I/O to the primary disks when a suspend happens, e.g. if you lose connectivity between the primary and secondary devices
  - FREEZE and STOP policy in GDPS/PPRC: GDPS will reset the production systems while I/O is suspended
- Choosing to have zero data loss really means that:
  - You have automation in place that will stop all I/O activity in the appropriate circumstances
  - You accept a possible impact on continuous availability at the primary site: systems could be stopped for a reason other than a real disaster (e.g. a broken remote copy link rather than a fire in the computer room)
16 Metro Mirror
Misconception #3: Metro Mirror eliminates the DASD subsystem as a single point of failure (SPOF)
Answer: not by itself...
Needs to be complemented by a non-disruptive failover (HyperSwap) capability, e.g. GDPS/PPRC HyperSwap Manager, Basic HyperSwap in TPC-R:
1. Failure event detected
2. Mirroring suspended and I/O quiesced to ensure data consistency
3. Secondary devices made available using the failover command
4. UCBs swapped on all systems in the sysplex and I/O resumed
17 z/OS Global Mirror
a.k.a. XRC (eXtended Remote Copy)
Combination of software and hardware functions for asynchronous replication WITH consistency
Involves a System Data Mover (SDM) on z/OS in conjunction with disk subsystem microcode:
1. Write to primary volume
2. Primary disk subsystem posts I/O complete
Every so often (several times a second):
3. Offload data from the primary disk subsystem to the SDM
4. A CG is formed and written from the SDM's buffers to the SDM's journal data sets
5. Immediately after the CG has been hardened on the journal data sets, the records are written to their corresponding secondary volumes
18 z/OS Global Mirror
Use of time-stamped writes and Consistency Groups to ensure data consistency
- All records being written to z/OS Global Mirror primary volumes are time stamped
- Consistency Groups are created by the SDM
  - A Consistency Group contains records that the SDM has determined can be safely written to the secondary site without risk of out-of-sequence updates
  - Order of update is preserved across multiple disk subsystems in the same XRC session
Recovery Point Objective
- The amount of time that secondary volumes lag behind the primary depends mainly on:
  - Performance of the SDM (MIPS, storage, I/O configuration)
  - Amount of bandwidth
  - Use of device blocking or write pacing
- Device blocking/write pacing: pause (blocking) or slow down (pacing) I/O write activity for devices with very high update rates
  - Objective: maintain a guaranteed maximum RPO
  - Should not be used on the DB2 active logs (increased risk of system slowdowns)
19 Global Copy
a.k.a. PPRC-XD (Peer-to-Peer Remote Copy Extended Distance)
Disk-subsystem-based asynchronous replication WITHOUT consistency:
1. Write to primary volume
2. Primary disk subsystem posts I/O complete
At some later time:
3. The primary disk subsystem initiates an I/O to the secondary disk subsystem to transfer the data (only changed sectors are sent if the data is still in cache)
4. Secondary indicates to the primary that the write is complete - primary resets the indication of a modified track
20 Global Copy
Misconception #4: Global Copy provides a remote copy that would be usable in a disaster
Answer: not by itself
- Global Copy does NOT guarantee that the writes arriving at the local site are applied to the remote site in the same sequence
- The secondary copy is a fuzzy copy that is simply not consistent
- Global Copy is primarily intended for migrating data between sites or between disk subsystems
To create a consistent point-in-time copy, you need to pause all updates to the primaries and allow the updates to drain to the secondaries
- e.g. use the -SET LOG SUSPEND command for DB2 data
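The order of operations for producing that consistent point-in-time copy matters more than the individual commands. The sketch below records the sequence with stub steps; the step strings paraphrase the real actions (-SET LOG SUSPEND/-SET LOG RESUME are real DB2 commands, the drain and FlashCopy steps stand in for storage-side procedures that vary by installation):

```python
# Hedged sketch of the order of operations for a consistent point-in-time
# copy over Global Copy. Each run() call is a stub standing in for the
# real command or storage-side procedure.

steps = []

def run(step):
    steps.append(step)

def consistent_pit_copy():
    run("-SET LOG SUSPEND")   # quiesce DB2 update activity on the primaries
    run("drain out-of-sync tracks to secondaries")
    run("FlashCopy secondaries to tertiary copy")
    run("-SET LOG RESUME")    # release DB2 update activity

consistent_pit_copy()
print(steps[0], "...", steps[-1])
```

Resuming the log before the drain completes would reintroduce the fuzziness that the suspend was meant to remove, which is why the resume is strictly last.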
21 Global Mirror
Combines Global Copy and FlashCopy Consistency Groups
Disk-subsystem-based asynchronous replication WITH consistency:
1. Write to primary volume
2. Primary disk subsystem posts I/O complete
At some later time:
3. Global Copy is used to transmit data asynchronously between primary (A-disks) and secondary (B-disks)
At a predefined time interval:
4. Create a point-in-time copy CG on the A-disks; write I/Os are queued for a short period of time (usually < 1 ms)
5. Drain the remaining CG data to the B-disks
6. FlashCopy is used to save the CG to the C-disks
7. The primary disk subsystem is notified to continue with the Global Copy process
22 Global Mirror
Recovery Point Objective
- The amount of time that the FlashCopy target volumes lag behind the primary depends mainly on:
  - Bandwidth and links between primary and secondary disk subsystems
  - Distance/latency between primary and secondary
  - Hotspots on the secondary in write-intensive environments
- No pacing mechanism in this solution
  - Designed to protect production performance at the expense of mirror currency
  - RPO can increase significantly if production write rates exceed the available resources
23 Remote Copy Services Summary

Metro Mirror
- Type: synchronous; hardware
- Used for: HA and DR
- Distance: metro distances, with impact to the response time of the primary devices (10 microsec/km)
- RPO: 0 if FREEZE and STOP (no data loss, but no more production running); > 0 if FREEZE and RUN (data loss if real disaster)
- Supported devices: CKD (System z) and FB (Open)
- Supported disk subsystems: any enterprise disk subsystem (ESS, DS6000, DS8000, HDS/EMC)

z/OS Global Mirror
- Type: asynchronous with consistency; hardware and software (SDM on z/OS only)
- Used for: DR
- Distance: virtually unlimited distances, with minimal impact to the response time of the primary devices
- RPO: 1-5 seconds or better with sufficient bandwidth and resources; max RPO can be guaranteed with pacing
- Supported devices: CKD only (z/OS, z/Linux, z/VM)
- Supported disk subsystems: any enterprise disk subsystem (ESS, DS8000, HDS/EMC)

Global Copy
- Type: asynchronous without consistency; hardware
- Used for: data migration
- Distance: virtually unlimited distances, with minimal impact to the response time of the primary devices
- RPO: depends on user-managed procedures and the consistency creation interval
- Supported devices: CKD (System z) and FB (Open)
- Supported disk subsystems: IBM enterprise disk subsystem (ESS, DS6000, DS8000)

Global Mirror
- Type: asynchronous with consistency; hardware
- Used for: DR
- Distance: virtually unlimited distances, with minimal impact to the response time of the primary devices
- RPO: 3-5 seconds or better with sufficient bandwidth and resources; no pacing
- Supported devices: CKD (System z) and FB (Open)
- Supported disk subsystems: IBM enterprise disk subsystem (ESS, DS6000, DS8000)
24 CA and DR Topologies and the GDPS Family
- Continuous availability of data within a data center (single data center)
  - Applications remain active; continuous access to data in the event of a storage subsystem outage
  - GDPS/HyperSwap Manager
- Continuous availability / disaster recovery within a metropolitan region (two data centers)
  - Systems remain active; multi-site workloads can withstand site and/or storage failures
  - GDPS/HyperSwap Manager, GDPS/PPRC
- Disaster recovery at extended distance (two data centers)
  - Rapid systems disaster recovery with seconds of data loss; disaster recovery for out-of-region interruptions
  - GDPS/GM, GDPS/XRC
- Continuous availability regionally and disaster recovery at extended distance (three data centers: A, B, C)
  - High availability for site disasters; disaster recovery for regional disasters
  - GDPS/MGM, GDPS/MzGM
25 DB2 Restart Recovery
DB2 restart is the key ingredient for disk-based DR solutions
- Normal DB2 warm restart re-establishes DB2 data consistency through restart recovery mechanisms
- Directly impacts the ability to meet the RTO: see tuning recommendations for fast DB2 restart in Part I
DB2 restart MUST NOT be attempted on a mirrored copy that is not consistent
- Guaranteed inconsistent data that will have to be fixed up
- No way to estimate the damage; after the restart it is too late if the damage is extensive
- Damage may be detected weeks or months after the event
DB2 cold start or any form of conditional restart WILL lead to data corruption and loss of data
26 DB2 Restart Recovery
All DB2 CF structures MUST be purged before attempting a disaster restart
- Otherwise, guaranteed logical data corruption
- Required even if using Metro Mirror with a FREEZE/GO policy: no consistency between the frozen secondary copy and the content of the CF structures
- Necessary actions and checks should be automated
New DB2 ZPARM in V10: DEL_CFSTRUCTS_ON_RESTART=YES/NO
- Auto-delete of DB2 CF structures on restart if no active connections exist
GRECP/LPL recovery will be required as a result of deleting the GBPs
- Likely to be the largest contributor to DB2 restart time
27 DB2 Restart Recovery - Optimise GRECP/LPL Recovery
Frequent castout
- Low CLASST (0-5)
- Low GBPOOLT (5-25)
- Low GBPCHKPT (2-4)
Switch all objects to CLOSE YES to limit the number of objects to recover
- Also has the potential to reduce the data sharing overhead
Additional recommendations
- Tune DSMAX to avoid hitting it too often
- Adjust PCLOSEN/PCLOSET to avoid too-frequent pseudo-closes
  - ROT: #DSETS CONVERTED R/W -> R/O < per minute
- Apply APAR PM5672
  - Fixed issue: the candidate list for physical close becomes stale and may result in incorrectly closing data sets that are currently being frequently accessed, rather than others which are not currently being used
28 DB2 Restart Recovery - Optimise GRECP/LPL Recovery
V9: set ZPARM LOGAPSTG (Fast Log Apply) to the maximum buffer size of 100MB
V10: LOGAPSTG is removed and the implicit size is 510MB
Build an automated GRECP/LPL recovery procedure that is optimised
- Identify the list of objects in GRECP/LPL status
- Start with DB2 Catalog and Directory objects first
- For the rest, generate an optimal set of jobs to drive GRECP/LPL recovery
- Spread the -START DATABASE commands across all available members
  - V9: 10 commands maximum per member (based on 100MB of FLA storage)
  - V10: 51 commands maximum per member
  - objects maximum per -START DATABASE command
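A procedure like the one described can be generated mechanically. The sketch below is an illustration, not a production tool: it chunks a list of spaces into -START DATABASE commands and round-robins them across members. The database name, the objects-per-command cap and the member names are assumptions, since the real limits depend on the DB2 version and FLA storage as noted above.

```python
# Illustrative generator spreading -START DATABASE commands across data
# sharing members for GRECP/LPL recovery. APPDB, the member names and
# max_objs_per_cmd are hypothetical; tune them to your environment.

def build_start_commands(objects, members, max_objs_per_cmd=20):
    cmds = {m: [] for m in members}
    for i in range(0, len(objects), max_objs_per_cmd):
        chunk = objects[i:i + max_objs_per_cmd]
        member = members[(i // max_objs_per_cmd) % len(members)]  # round robin
        spacenam = ", ".join(chunk)
        cmds[member].append(f"-START DATABASE(APPDB) SPACENAM({spacenam})")
    return cmds

objs = [f"TS{n:03d}" for n in range(100)]
plan = build_start_commands(objs, ["DB2A", "DB2B"])
print(len(plan["DB2A"]), len(plan["DB2B"]))  # 3 2
```

The per-member command counts can then be checked against the version-dependent maximums before the jobs are submitted.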
29 DB2 Restart Recovery - Optimise GRECP/LPL Recovery
DB2 9 can automatically initiate GRECP/LPL recovery at the end of both normal restart and disaster restart when a GBP failure condition is detected
Why would I still need a procedure for GRECP/LPL recovery?
- If for any reason the GRECP recovery fails, objects that could not be recovered will remain in GRECP status and will need to be recovered by issuing -START DATABASE commands
- If the GBP failure occurs after DB2 restart is complete, GRECP/LPL recovery will have to be initiated by the user
- Highly optimised user-written procedures can still outperform and be more robust than the automated GRECP/LPL recovery
May want to turn off automated GRECP/LPL recovery if there is an aggressive RTO: -ALTER GROUPBUFFERPOOL(...) AUTOREC(NO)
30 Disaster Recovery Testing
Need to periodically rehearse DR
Objectives
- Validate recovery procedures and maintain readiness
- Ensure that RTO/RPO can be achieved
- Flush out issues and get them fixed
  - Any sign of inconsistency found in your testing should be driven to root cause: broken pages, data vs. index mismatches, etc.
Recommendations
- Allocate resources to test DR without impacting production, e.g. splitting/suspending the DASD replication and using FlashCopy
- Testing should be end-to-end, not based on individual application silos
- Test in anger and not simulated: need a representative application workload running concurrently
31 Disaster Recovery Testing
Recommendations: verify data consistency
- DBD integrity: REPAIR DBD DIAGNOSE DATABASE dbname for each user database
- DB2 RI and table check constraint violations, plus consistency between base tables and the corresponding LOB/XML table spaces: CHECK DATA TABLESPACE dbname.tsname part SCOPE ALL
- Consistency between table spaces and indexes: CHECK INDEX ALL TABLESPACE, or CHECK INDEX LIST (with an appropriate LISTDEF)
- Invalid LOB values or structural problems with the pages in a LOB table space: CHECK LOB TABLESPACE dbname.tsname for each LOB table space
- Integrity of the DB2 catalog: IBM-provided DSNTESQ queries (should always return zero rows)
- Options to check each page of a table space for consistency: DSN1COPY with the CHECK option, SELECT * FROM tbname (throw away the output), or a basic RUNSTATS to touch each row
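For a rehearsal covering many table spaces, the utility control statements above are easy to generate from an inventory list. A minimal sketch, assuming hypothetical database/table space names and emitting only the two most common checks (CHECK DATA and CHECK INDEX) from the list above:

```python
# Sketch generating consistency-check utility statements for a set of
# table spaces. Database and table space names are illustrative.

def check_statements(tablespaces):
    stmts = []
    for db, ts in tablespaces:
        stmts.append(f"CHECK DATA TABLESPACE {db}.{ts} SCOPE ALL")
        stmts.append(f"CHECK INDEX ALL TABLESPACE {db}.{ts}")
    return stmts

for stmt in check_statements([("APPDB", "TS001"), ("APPDB", "TS002")]):
    print(stmt)
```

The same pattern extends to the CHECK LOB and DSNTESQ steps; the point is that the rehearsal should never depend on hand-typed statements that drift out of date.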
32 Mass Data Recovery Context
Widespread logical data corruption, e.g. introduced by operational error, CF microcode, DASD microcode, a DB2 bug, etc.
Plan B for disaster recovery if
- The mirror is damaged/inconsistent
- Bad disaster restart, e.g. using stale CF structures
Mass data recovery would be based on traditional DB2 log-based forward recovery (image copies + logs)
Very rare event, but if not prepared, practiced and optimised, it will lead to extended application service downtimes (possibly many hours to several days)
In the real world, mass data recovery is more frequent than a true DR event
33 Mass Data Recovery...
Common problems
- Lack of planning, design, optimisation, practice and maintenance
- No prioritised list of application objects and inter-dependencies
- Limited use of DB2 referential integrity
  - Logical model, data dependencies and integrity management are buried in the applications
  - Heavily dependent on application knowledge and support
- Procedures for taking backups and executing recovery compromised by lack of investment in the technical configuration
- Use of tape, including VTS
  - Cannot share tape volumes across multiple jobs
  - Relatively small number of read devices
  - Concurrent recall can be a serious bottleneck
  - Even though VTS has a disk cache, it is known to z/OS as a tape device: same serialization characteristics as all tape devices, and a single virtual volume cannot be shared by different jobs or systems at the same time
34 Mass Data Recovery...
Results: any or all of the following
- No estimate of elapsed time to complete
- Elongated elapsed time to complete recovery
- Performance bottlenecks, so that recovery performance does not scale
- Breakage in procedures
- Surprises caused by a changing technical configuration
- Unrecoverable objects
35 Logging Environment
Design for performance
- Keep a minimum of 6h of recovery log data in the active log pairs at any time
  - Objective: provide some reaction time in case of an archiving problem
  - Adjust the number/size of the DB2 active log pairs (limit is 93 log pairs)
  - Active logs can be added dynamically in DB2 10 with the new -SET LOG NEWLOG option: the new active log must be IDCAMS defined and preformatted by DSNJLOGF; only a single log data set at a time, so issue the command twice for dual logging; no dynamic delete of active logs
- Keep 48-72h of recovery log data on DASD
  - Keep archive log COPY1 on DASD for 48-72h before migrating it to tape
  - Archive log COPY2 can be migrated away to tape at any point
- Be ready to extend the amount of recovery log beyond what is available on DASD
  - Set BLKSIZE=24576 to optimise reads on DASD
  - Prepare a procedure to copy archive logs from tape or VTS to DASD
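The 6-hour rule turns into simple arithmetic once you know your average log write rate. A worked sketch, where the 4 MB/s rate and 4 GB active log data set size are assumptions for illustration, not recommendations:

```python
# Worked example of the 6-hour active log rule: how many active log
# pairs of a given size cover the target number of hours at a given
# average log write rate? Rate and size here are illustrative.

def pairs_needed(hours, log_rate_mb_per_sec, log_size_gb):
    total_mb = hours * 3600 * log_rate_mb_per_sec
    per_pair_mb = log_size_gb * 1024
    return -(-total_mb // per_pair_mb)   # ceiling division

# 6h of log at 4 MB/s with 4 GB active log data sets
print(pairs_needed(6, 4, 4))  # 22
```

The same calculation, run against peak rather than average rates, shows how quickly the 93-pair limit can be approached on a busy member, which is why both the number and the size of the pairs need adjusting together.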
36 Logging Environment
Design for resiliency
- Separate COPY1 and COPY2 of the active log pairs and BSDS across different DASD controllers
- Isolate objects into separate ICF catalogs to achieve more resiliency from an ICF catalog failure
  - Separate out the data sets for each DB2 member into separate ICF catalogs: active logs, archive logs and BSDS for member DB2A away from those for member DB2B
    - Result: an ICF catalog failure would only affect one DB2 member
  - Should also consider further isolation: COPY1 of active log, archive log and BSDS into one ICF catalog, and COPY2 of active log, archive log and BSDS into an alternate ICF catalog
    - Result: an ICF catalog failure would not affect DB2
- Additional ICF catalog considerations for better performance and resilience
  - Isolate DB2 catalog/directory objects into a separate ICF catalog
  - Use multiple ICF catalogs for the DB2 user objects
  - Separate ICF catalogs for DB2 objects and DB2 image copy backups
37 Logging Environment
Design for serviceability
- Retain archive log data for 30 days
- At the first sign of logical data corruption, stop the deletion of image copies and archive log data sets
38 DB2 Catalog and Directory
- Keep the DB2 catalog/directory full image copies (FICs) on DASD to speed up recovery
- Need to build and maintain JCL to recover the DB2 catalog/directory
  - Sizes of the underlying data sets must be documented and readily available
  - Must be maintained over time as the number of data sets and/or their sizes change
  - Recovery of the DB2 catalog/directory should be periodically rehearsed
- Need a process to check the DB2 catalog/directory integrity periodically, using a cloned copy of the DB2 catalog/directory in an auxiliary DB2 subsystem
- Periodic REORG of the DB2 catalog/directory outside of release migration
  - Most importantly, SYSLGRNX should be reorganised every quarter
  - Can be run as SHRLEVEL(CHANGE) at a time of low activity
  - Will speed up online REORG, MODIFY RECOVERY and GRECP/LPL recovery
39 Mass Data Recovery
The following information should be documented, kept current and made readily available to the DBAs
- Consolidated logical/physical database design view
- Application RI relationships
- Dependencies across application domains
Need to agree on a prioritised list of business-critical applications and the related data required by these applications
- This list should be used to drive the recovery order
- Critical information needed during a recovery event
- Objective: bring back critical application services as soon as possible
- Without these lists, you either have to wait for the whole world to be recovered, or take a risk in bringing back some of the application services earlier
- Should not rely exclusively on application expertise
40 Mass Data Recovery...
Schedule a daily production job to check for unrecoverable objects
Build, test and optimise a set of recovery jobs exploiting the capacity of the entire DB2 data sharing group
- Up to 10 (V9) or 51 (V10) RECOVER jobs per DB2 subsystem
- objects per RECOVER
- Spread the jobs across all DB2 data sharing members
If elapsed time needs to be improved further, look for possible optimisations, e.g.
- Change the recovery order to factor in the constraining elapsed time of the critical objects (RECOVER TABLESPACE + REBUILD INDEX)
- Consider possible use of BACKUP SYSTEM / RESTORE SYSTEM
- Increase image copy frequency for critical objects
- Keep more image copies on DASD
- Optimise the copy of archive logs to DASD
- Consider use of COPY/RECOVER instead of REBUILD for very large NPIs
- Additional buffer pool tuning for DB2 catalog objects
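Combining the prioritised application list with the fan-out across members can also be sketched. This is a simplified illustration, not a recovery planner: object names and priorities are invented, and the per-member job cap is a parameter because it depends on the DB2 version (10 in V9, 51 in V10 per the text above).

```python
# Sketch: order objects by the agreed application priority list, then
# fan RECOVER jobs out across data sharing members. All names are
# hypothetical; max_jobs_per_member is version-dependent.

def recovery_plan(objects, priority, members, max_jobs_per_member=10):
    ordered = sorted(objects, key=lambda o: priority.get(o, 99))
    capacity = len(members) * max_jobs_per_member
    per_job = -(-len(ordered) // capacity) if ordered else 1  # ceiling
    jobs = []
    for i in range(0, len(ordered), max(per_job, 1)):
        member = members[len(jobs) % len(members)]
        jobs.append((member, ordered[i:i + per_job]))
    return jobs

prio = {"PAYROLL.TS1": 1, "ORDERS.TS1": 2}   # lower number = recover first
jobs = recovery_plan(["MISC.TS9", "ORDERS.TS1", "PAYROLL.TS1"],
                     prio, ["DB2A", "DB2B"])
print(jobs[0])  # ('DB2A', ['PAYROLL.TS1'])
```

In a real plan the ordering key would also weigh object size and log range, since the longest-running critical object, not the object count, usually bounds the elapsed time.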
41 Mass Data Recovery...
Practise regular full-scale fire drills for mass recovery of the entire system
- Find out what the actual service level is
- Validate that procedures are in working order, both for local and remote DR recovery
- Maintain readiness on mass recovery execution
Need to invest in procedures to validate the success of the recovery and possibly compensate for lost work
- Requires active cooperation and buy-in from the application teams
- Develop and maintain data reconciliation programs
- Consider cloning the data into an auxiliary DB2 subsystem, turning on DB2 RI and using CHECK DATA to validate the relationships
- Requires investment in a log analysis tool to support this critical step; needs regular practice on the tool
42 Mass Data Recovery...
FlashCopy provides interesting new options to help with mass data recovery
- System-wide point-in-time backup
  - Can restart off a previous PIT copy
  - Can use RESTORE SYSTEM LOGONLY or RECOVER to come forward in time
- Quick cloning of the environment to create a forensic system
I/O consistency of the FlashCopy backup is key to allow quick restart
When using FlashCopy to take backups outside of DB2's control:
- Option 1: use -SET LOG SUSPEND to temporarily freeze all DB2 update activity
  - For data sharing, you must issue the command on each data sharing member and receive DSNJ372I before you begin the FlashCopy
- Option 2: use the FlashCopy Consistency Group function
  - Not 100% transparent to the applications! All volumes in the group are put in a long busy state (i.e. all writes are suspended) until they are all copied or the timeout value is reached (2 minutes with default settings)
  - Use option FCCGVERIFY to determine if the copies of the group of volumes are consistent
  - DB2 warm restart recovery will re-establish data consistency
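The data sharing detail in option 1 is the easiest one to get wrong under pressure: the FlashCopy must not start until every member has confirmed the suspend with DSNJ372I. A minimal gate sketch, where suspend() is a stub standing in for issuing the command and reading the console reply (member names are illustrative):

```python
# Sketch of the option 1 gate for data sharing: issue -SET LOG SUSPEND
# on every member and confirm DSNJ372I from each before starting the
# FlashCopy. suspend() is a stub for the real command/console exchange.

def suspend(member):
    # Stand-in for issuing -SET LOG SUSPEND on this member and
    # capturing the resulting console message.
    return f"DSNJ372I UPDATE ACTIVITY HAS BEEN SUSPENDED FOR {member}"

def all_members_suspended(members):
    replies = {m: suspend(m) for m in members}
    return all(r.startswith("DSNJ372I") for r in replies.values())

ready = all_members_suspended(["DB2A", "DB2B", "DB2C"])
print(ready)  # only when True is it safe to begin the FlashCopy
```

The automation equivalent is a barrier: any member that fails to produce the message aborts the backup and resumes logging everywhere, rather than flashing a half-suspended group.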
Disaster Recovery for Oracle Database
Disaster Recovery for Oracle Database Zero Data Loss Recovery Appliance, Active Data Guard and Oracle GoldenGate ORACLE WHITE PAPER APRIL 2015 Overview Oracle Database provides three different approaches
Connectivity. Alliance Access 7.0. Database Recovery. Information Paper
Connectivity Alliance 7.0 Recovery Information Paper Table of Contents Preface... 3 1 Overview... 4 2 Resiliency Concepts... 6 2.1 Loss Business Impact... 6 2.2 Recovery Tools... 8 3 Manual Recovery Method...
Connectivity. Alliance Access 7.0. Database Recovery. Information Paper
Connectivity Alliance Access 7.0 Database Recovery Information Paper Table of Contents Preface... 3 1 Overview... 4 2 Resiliency Concepts... 6 2.1 Database Loss Business Impact... 6 2.2 Database Recovery
Affordable Remote Data Replication
SANmelody Application Affordable Remote Data Replication Your Data is as Valuable as Anyone s You know very well how critical your data is to your organization and how much your business would be impacted
Informix Dynamic Server May 2007. Availability Solutions with Informix Dynamic Server 11
Informix Dynamic Server May 2007 Availability Solutions with Informix Dynamic Server 11 1 Availability Solutions with IBM Informix Dynamic Server 11.10 Madison Pruet Ajay Gupta The addition of Multi-node
MySQL Enterprise Backup
MySQL Enterprise Backup Fast, Consistent, Online Backups A MySQL White Paper February, 2011 2011, Oracle Corporation and/or its affiliates Table of Contents Introduction... 3! Database Backup Terms...
DB2 for z/os Backup and Recovery: Basics, Best Practices, and What's New
Robert Catterall, IBM [email protected] DB2 for z/os Backup and Recovery: Basics, Best Practices, and What's New Baltimore / Washington DB2 Users Group June 11, 2015 Information Management 2015 IBM Corporation
Computer Associates BrightStor CA-Vtape Virtual Tape System Software
Edward E. Cowger, Herb Gepner Product Report 8 October 2002 Historical Computer Associates BrightStor CA-Vtape Virtual Tape System Software Summary Computer Associates (CA s) BrightStor CA-Vtape Virtual
Using Virtualization to Help Move a Data Center
Using Virtualization to Help Move a Data Center Jim Moling US Treasury, Financial Management Service Session 7879 Tuesday, August 3, 2010 Disclaimers The opinions & ideas expressed herein are those of
E-Series. NetApp E-Series Storage Systems Mirroring Feature Guide. NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.
E-Series NetApp E-Series Storage Systems Mirroring Feature Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888)
IBM TotalStorage IBM TotalStorage Virtual Tape Server
IBM TotalStorage IBM TotalStorage Virtual Tape Server A powerful tape storage system that helps address the demanding storage requirements of e-business storag Storage for Improved How can you strategically
Oracle 11g Database Administration
Oracle 11g Database Administration Part 1: Oracle 11g Administration Workshop I A. Exploring the Oracle Database Architecture 1. Oracle Database Architecture Overview 2. Interacting with an Oracle Database
IT Service Management
IT Service Management Service Continuity Methods (Disaster Recovery Planning) White Paper Prepared by: Rick Leopoldi May 25, 2002 Copyright 2001. All rights reserved. Duplication of this document or extraction
<Insert Picture Here> RMAN Configuration and Performance Tuning Best Practices
1 RMAN Configuration and Performance Tuning Best Practices Timothy Chien Principal Product Manager Oracle Database High Availability [email protected] Agenda Recovery Manager
User Experience: BCPii, FlashCopy and Business Continuity
User Experience: BCPii, FlashCopy and Business Continuity Mike Shorkend Isracard Group 4:30 PM on Monday, March 10, 2014 Session Number 14953 http://www.linkedin.com/pub/mike-shorkend/0/660/3a7 [email protected]
Maximum Availability Architecture. Oracle Best Practices For High Availability
Preventing, Detecting, and Repairing Block Corruption: Oracle Database 11g Oracle Maximum Availability Architecture White Paper May 2012 Maximum Availability Architecture Oracle Best Practices For High
Business Continuity: Choosing the Right Technology Solution
Business Continuity: Choosing the Right Technology Solution Table of Contents Introduction 3 What are the Options? 3 How to Assess Solutions 6 What to Look for in a Solution 8 Final Thoughts 9 About Neverfail
17573: Availability and Time to Data How to Improve Your Resiliency and Meet Your SLAs
17573: Availability and Time to Data How to Improve Your Resiliency and Meet Your SLAs Ros Schulman Hitachi Data Systems Rebecca Levesque 21st Century Software August 13, 2015 10-11:00 am Southern Hemisphere
How To Improve Your Database Performance
SOLUTION BRIEF Database Management Utilities Suite for DB2 for z/os How Can I Establish a Solid Foundation for Successful DB2 Database Management? SOLUTION BRIEF CA DATABASE MANAGEMENT FOR DB2 FOR z/os
SAN Conceptual and Design Basics
TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer
FICON Extended Distance Solution (FEDS)
IBM ^ zseries Extended Distance Solution (FEDS) The Optimal Transport Solution for Backup and Recovery in a Metropolitan Network Author: Brian Fallon [email protected] FEDS: The Optimal Transport Solution
Business Resilience for the On Demand World Yvette Ray Practice Executive Business Continuity and Resiliency Services
Business Resilience for the On Demand World Yvette Ray Practice Executive Business Continuity and Resiliency Services What s on the minds of 450 of the world s leading CEOs 2 CEO needs Revenue growth with
Four Steps to Disaster Recovery and Business Continuity using iscsi
White Paper Four Steps to Disaster Recovery and Business Continuity using iscsi It s a fact of business life physical, natural, and digital disasters do occur, and they interrupt operations and impact
IBM Tivoli Storage Productivity Center (TPC)
IBM Tivoli Storage Productivity Center (TPC) Simplify, automate, and optimize storage infrastructure Highlights The IBM Tivoli Storage Productivity Center is designed to: Help centralize the management
The Benefits of Continuous Data Protection (CDP) for IBM i and AIX Environments
The Benefits of Continuous Data Protection (CDP) for IBM i and AIX Environments New flexible technologies enable quick and easy recovery of data to any point in time. Introduction Downtime and data loss
Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006
Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006 All trademark names are the property of their respective companies. This publication contains opinions of
VERITAS Business Solutions. for DB2
VERITAS Business Solutions for DB2 V E R I T A S W H I T E P A P E R Table of Contents............................................................. 1 VERITAS Database Edition for DB2............................................................
Storage Based Replications
Storage Based Replications Miroslav Vraneš EMC Technology Group [email protected] 1 Protecting Information Is a Business Decision Recovery point objective (RPO): How recent is the point in time for
Oracle Database Disaster Recovery Using Dell Storage Replication Solutions
Oracle Database Disaster Recovery Using Dell Storage Replication Solutions This paper describes how to leverage Dell storage replication technologies to build a comprehensive disaster recovery solution
Data Recovery and High Availability Guide and Reference
IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference Version 8 SC09-4831-00 IBM DB2 Universal Database Data Recovery and High Availability Guide and Reference Version 8 SC09-4831-00
Continuous Data Protection for any Point-in-Time Recovery: Product Options for Protecting Virtual Machines or Storage Array LUNs
EMC RECOVERPOINT FAMILY Continuous Data Protection for any Point-in-Time Recovery: Product Options for Protecting Virtual Machines or Storage Array LUNs ESSENTIALS EMC RecoverPoint Family Optimizes RPO
Backup and Recovery. What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases
Backup and Recovery What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases CONTENTS Introduction 3 Terminology and concepts 3 Database files that make up a database 3 Client-side
IBM Replication Solutions for Business Continuity Part 1 of 2 TotalStorage Productivity Center for Replication (TPC-R) FlashCopy Manager/PPRC Manager
IBM Replication Solutions for Business Continuity Part 1 of 2 TotalStorage Productivity Center for Replication (TPC-R) FlashCopy Manager/PPRC Manager Scott Drummond IBM Corporation [email protected] Jeff
SQL Server Database Administrator s Guide
SQL Server Database Administrator s Guide Copyright 2011 Sophos Limited. All rights reserved. No part of this publication may be reproduced, stored in retrieval system, or transmitted, in any form or by
HA / DR Jargon Buster High Availability / Disaster Recovery
HA / DR Jargon Buster High Availability / Disaster Recovery Welcome to Maxava s Jargon Buster. Your quick reference guide to Maxava HA and industry technical terms related to High Availability and Disaster
Objectif. Participant. Prérequis. Pédagogie. Oracle Database 11g - Administration Workshop II - LVC. 5 Jours [35 Heures]
Objectif Back up and recover a database Configure Oracle Database for optimal recovery Administer ASM disk groups Use an RMAN backup to duplicate a database Automating Tasks with the Scheduler Participant
Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication Software
Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication September 2002 IBM Storage Products Division Raleigh, NC http://www.storage.ibm.com Table of contents Introduction... 3 Key
SHARE Lunch & Learn #15372
SHARE Lunch & Learn #15372 Data Deduplication Makes It Practical to Replicate Your Tape Data for Disaster Recovery Scott James VP Global Alliances Luminex Software, Inc. Randy Fleenor Worldwide Data Protection
Once the RTO and RPO objectives are known, the customer is able to determine which disaster recovery methodology works best for their needs.
1 With the base IMS product, we will explore disaster recovery where the remote site is restored to a specific recovery point using batch image copies and a backup of the DBRC RECON data set. There are
Introduction to Enterprise Data Recovery. Rick Weaver Product Manager Recovery & Storage Management BMC Software
Introduction to Enterprise Data Recovery Rick Weaver Product Manager Recovery & Storage Management BMC Software Contents Introduction...1 The Value of Data...2 Risks to Data Assets...2 Physical Loss...2
It s All About Restoring Your Data Achieving Rapid Business Resumption with Highly Efficient Disk-to-Disk Backup & Restore
It s All About Restoring Your Data Achieving Rapid Business Resumption with Highly Efficient Disk-to-Disk Backup & Restore All trademark names are the property of their respective companies. This publication
Analyzing IBM i Performance Metrics
WHITE PAPER Analyzing IBM i Performance Metrics The IBM i operating system is very good at supplying system administrators with built-in tools for security, database management, auditing, and journaling.
IBM System Storage DS5020 Express
IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (Fibre Channel/iSCSI) enables SAN tiering Balanced performance well-suited
Optimizing Your Database Performance the Easy Way
Optimizing Your Database Performance the Easy Way by Diane Beeler, Consulting Product Marketing Manager, BMC Software and Igy Rodriguez, Technical Product Manager, BMC Software Customers and managers of
DB2 for z/os: Disaster Recovery for the Rest of Us
DB2 for z/os: Disaster Recovery for the Rest of Us Judy Ruby-Brown IBM Advanced Technical Skills (ATS) Thursday, March 15, 2012 Session Number 10501 Notes System Level Backup, consisting of BACKUP SYSTEM
DS8000 Data Migration using Copy Services in Microsoft Windows Cluster Environment
DS8000 Data Migration using Copy Services in Microsoft Windows Cluster Environment By Leena Kushwaha Nagaraja Hosad System and Technology Group Lab Services India Software Lab, IBM July 2013 Table of contents
IBM Systems and Technology Group Technical Conference
IBM TRAINING IBM STG Technical Conference IBM Systems and Technology Group Technical Conference Munich, Germany April 16 20, 2007 IBM TRAINING IBM STG Technical Conference E72 Storage options and Disaster
Performance Monitoring AlwaysOn Availability Groups. Anthony E. Nocentino [email protected]
Performance Monitoring AlwaysOn Availability Groups Anthony E. Nocentino [email protected] Anthony E. Nocentino Consultant and Trainer Founder and President of Centino Systems Specialize in system
Big data management with IBM General Parallel File System
Big data management with IBM General Parallel File System Optimize storage management and boost your return on investment Highlights Handles the explosive growth of structured and unstructured data Offers
Oracle Database 11g: Administration Workshop II
Oracle University Entre em contato: 0800 891 6502 Oracle Database 11g: Administration Workshop II Duração: 5 Dias Objetivos do Curso In this course, the concepts and architecture that support backup and
Veritas Storage Foundation High Availability for Windows by Symantec
Veritas Storage Foundation High Availability for Windows by Symantec Simple-to-use solution for high availability and disaster recovery of businesscritical Windows applications Data Sheet: High Availability
DBAs having to manage DB2 on multiple platforms will find this information essential.
DB2 running on Linux, Unix, and Windows (LUW) continues to grow at a rapid pace. This rapid growth has resulted in a shortage of experienced non-mainframe DB2 DBAs. IT departments today have to deal with
IBM PowerHA SystemMirror for i. Performance Information
IBM PowerHA SystemMirror for i Performance Information Version: 1.0 Last Updated: April 9, 2012 Table Of Contents 1 Introduction... 3 2 Geographic Mirroring... 3 2.1 General Performance Recommendations...
Backing up a Large Oracle Database with EMC NetWorker and EMC Business Continuity Solutions
Backing up a Large Oracle Database with EMC NetWorker and EMC Business Continuity Solutions EMC Proven Professional Knowledge Sharing June, 2007 Maciej Mianowski Regional Software Specialist EMC Corporation
Exam : IBM 000-061. : IBM NEDC Technical Leader. Version : R6.1
Exam : IBM 000-061 Title : IBM NEDC Technical Leader Version : R6.1 Prepking - King of Computer Certification Important Information, Please Read Carefully Other Prepking products A) Offline Testing engine
EPIC EHR: BUILDING HIGH AVAILABILITY INFRASTRUCTURES
EPIC EHR: BUILDING HIGH AVAILABILITY INFRASTRUCTURES BEST PRACTICES FOR PROTECTING EPIC EHR ENVIRONMENTS EMC HEALTHCARE GROUP ABSTRACT Epic Electronic Health Records (EHR) is at the core of delivering
Big Data Storage in the Cloud
Big Data Storage in the Cloud Russell Witt Scott Arnett CA Technologies Tuesday, March 11 Session Number 15288 Tuesday, March 11Tuesday, March 11 Abstract Need to reduce the cost of managing storage while
EonStor DS remote replication feature guide
EonStor DS remote replication feature guide White paper Version: 1.0 Updated: Abstract: Remote replication on select EonStor DS storage systems offers strong defense against major disruption to IT continuity,
FDR/UPSTREAM INNOVATION Data Processing Providing a Long Line of Solutions
Presented to FDR/UPSTREAM Innovation Data Processing [email protected] Copyright 2015 2015 All rights INNOVATION reserved. Data Processing. All rights reserved. 1 Presented to INNOVATION Data Processing
3. S&T IT Governance & Information Assurance Forum
IBM RJEŠENJA ZA MIRAN SAN Danijel Paulin, Systems Architect, IBM Croatia Sadržaj prezentacije Uvod Business Continuity RPO RTO Business Continuity Tiers IBM metodologija Pregled IBM rješenja 1 Importance
End-to-End Availability for Microsoft SQL Server
WHITE PAPER VERITAS Storage Foundation HA for Windows End-to-End Availability for Microsoft SQL Server January 2005 1 Table of Contents Executive Summary... 1 Overview... 1 The VERITAS Solution for SQL
Oracle on System z Linux- High Availability Options Session ID 252
Oracle on System z Linux- High Availability Options Session ID 252 Sam Amsavelu IBM Trademarks The following are trademarks of the International Business Machines Corporation in the United States and/or
Exploiting the Virtual Tape System for Enhanced Vaulting & Disaster Recovery A White Paper by Rocket Software. Version 2.
Exploiting the Virtual Tape System for Enhanced Vaulting & Disaster Recovery A White Paper by Rocket Software Version 2.0 May 2013 Rocket Software, Inc. or its affiliates 1990 2013. All rights reserved.
Achieve Continuous Computing for Mainframe Batch Processing. By Hitachi Data Systems and 21st Century Software
Achieve Continuous Computing for Mainframe Batch Processing By Hitachi Data Systems and 21st Century Software February 2016 Contents Executive Summary... 2 Introduction... 3 Difficulties in Recovering
TSM for Advanced Copy Services: Today and Tomorrow
TSM for Copy Services and TSM for Advanced Copy Services: Today and Tomorrow Del Hoobler Oxford University TSM Symposium 2007 September 2007 Disclaimer This presentation describes potential future enhancements
Module 7: System Component Failure Contingencies
Module 7: System Component Failure Contingencies Introduction The purpose of this module is to describe procedures and standards for recovery plans to be implemented in the event of system component failures.
SQL-BackTrack the Smart DBA s Power Tool for Backup and Recovery
SQL-BackTrack the Smart DBA s Power Tool for Backup and Recovery by Diane Beeler, Consulting Product Marketing Manager, BMC Software and Mati Pitkanen, SQL-BackTrack for Oracle Product Manager, BMC Software
NetApp SnapMirror. Protect Your Business at a 60% lower TCO. Title. Name
NetApp SnapMirror Protect Your Business at a 60% lower TCO Name Title Disaster Recovery Market Trends Providing disaster recovery remains critical Top 10 business initiative #2 area for storage investment
High availability and disaster recovery with Microsoft, Citrix and HP
High availability and disaster recovery White Paper High availability and disaster recovery with Microsoft, Citrix and HP Using virtualization, automation and next-generation storage to improve business
Backup Solutions with Linux on System z and z/vm. Wilhelm Mild IT Architect IBM Germany [email protected] 2011 IBM Corporation
Backup Solutions with Linux on System z and z/vm Wilhelm Mild IT Architect IBM Germany [email protected] 2011 IBM Corporation Trademarks The following are trademarks of the International Business Machines
Sanovi DRM for Oracle Database
Application Defined Continuity Sanovi DRM for Oracle Database White Paper Copyright@2012, Sanovi Technologies Table of Contents Executive Summary 3 Introduction 3 Audience 3 Oracle Protection Overview
DeltaV Virtualization High Availability and Disaster Recovery
DeltaV Distributed Control System Whitepaper October 2014 DeltaV Virtualization High Availability and Disaster Recovery This document describes High Availiability and Disaster Recovery features supported
DB2 9 for LUW Advanced Database Recovery CL492; 4 days, Instructor-led
DB2 9 for LUW Advanced Database Recovery CL492; 4 days, Instructor-led Course Description Gain a deeper understanding of the advanced features of DB2 9 for Linux, UNIX, and Windows database environments
VERITAS Volume Replicator in an Oracle Environment
VERITAS Volume Replicator in an Oracle Environment Introduction Remote replication of online disks and volumes is emerging as the technique of choice for protecting enterprise data against disasters. VERITAS
WHITE PAPER Overview of Data Replication
Overview of Data Replication 1 Abstract Replication is the process of making a copy of something, or of creating a replica. In different contexts, such as art (copies of a painting), bioscience (cloning),
Improving Microsoft SQL Server Recovery with EMC NetWorker and EMC RecoverPoint
Improving Microsoft SQL Server Recovery with EMC NetWorker and EMC RecoverPoint Applied Technology Abstract This white paper covers how EMC NetWorker and EMC NetWorker modules can be used effectively in
WHITE PAPER. The 5 Critical Steps for an Effective Disaster Recovery Plan
WHITE PAPER The 5 Critical Steps for an Effective Disaster Recovery Plan 2 WHITE PAPER The 5 Critical Planning Steps For An Effective Disaster Recovery Plan Introduction In today s climate, most enterprises
Microsoft SharePoint 2010 on VMware Availability and Recovery Options. Microsoft SharePoint 2010 on VMware Availability and Recovery Options
This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware
Agenda. GDPS The Enterprise-wide Continuous Availability and Disaster Recovery Solution. Why GDPS? GDPS Family of Offerings
GDPS The Enterprise-wide Continuous Availability and Disaster Recovery Solution Udo Pimiskern IBM Executive IT Specialist GDPS Design & Development World-wide GDPS Technical Support [email protected]
