GDPS z/OS Global Mirror Versus Global Mirror


66. GSE z/OS Expertenforum 2007
GDPS z/OS Global Mirror Versus Global Mirror
walter.muehlenstaedt@ch.ibm.com
2006 IBM Corporation

Agenda
1. Business Continuity Overview
   - Business Continuity Objectives
2. Unlimited Distance D/R Solutions
   - Global Mirror
   - z/OS Global Mirror
3. CA/DR Solution (3 sites)
   - z/OS data only
   - z/OS and Open data

Business Continuance Objectives

Determine your objectives for business continuance (by application):
- Recovery Time Objective (RTO): how long can you afford to be without your systems?
- Recovery Point Objective (RPO): data loss yes/no - when service is recovered, how much data can you afford to recreate?
- Network Recovery Objective (NRO): how long does it take to switch over the network?

Determine the cost / recovery time curve:
- If I spend a little more, how much faster is disaster recovery?
- If I spend a little less, how much slower is disaster recovery?

(Timeline diagram: last backup, application processing, application failure; the RPO is the gap between the last recoverable copy and the failure.)

Determining the cost vs. RTO recovery curve is the key to selecting the proper solution(s).
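The cost / recovery-time tradeoff above amounts to a simple selection problem: pick the cheapest solution whose RTO and RPO still meet the business objectives. A minimal sketch, where the catalogue entries and all cost/time figures are invented for illustration and are not IBM numbers:

```python
# Hypothetical DR solution catalogue: (name, relative cost, rto_hours, rpo_seconds).
# All figures are illustrative assumptions, not from the presentation.
SOLUTIONS = [
    ("tape restore",        1.0, 48.0, 86400),
    ("Global Mirror",       4.0,  2.0,     5),
    ("z/OS Global Mirror",  4.5,  2.0,     3),
    ("3-site Metro/Global", 8.0,  0.5,     0),
]

def cheapest_meeting(rto_hours, rpo_seconds):
    """Return the lowest-cost solution whose RTO and RPO meet the objectives."""
    ok = [s for s in SOLUTIONS if s[2] <= rto_hours and s[3] <= rpo_seconds]
    return min(ok, key=lambda s: s[1])[0] if ok else None

print(cheapest_meeting(rto_hours=4, rpo_seconds=10))  # a few seconds of loss tolerated
print(cheapest_meeting(rto_hours=1, rpo_seconds=0))   # no data loss allowed
```

Tightening either objective pushes the selection toward the more expensive end of the curve, which is exactly the "spend a little more / a little less" question the slide poses.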

z/OS Global Mirror (GDPS/XRC)

- Productivity tool that integrates management of XRC and FlashCopy with GDPS
- Full-screen interface; invoke scripted procedures from panels or through an exit
- GDPS/XRC runs in the SDM location and interacts with the SDM(s)
  - Manages availability of the SDM sysplex
  - Executes scripts to perform a fully automated site failover
- Single point of control for multiple / coupled data movers

(Diagram: production systems, primary and secondary disk subsystems, SDM systems with GDPS/XRC, FlashCopy devices, and the State, Journal, Master and Control files.)

z/OS Global Mirror - XRC Update Processing Queues

- Input queue: I/Os queued to read from the primary CU
- XCGCO: consistency groups queued to the XLOPE queue
- XJTSK: journal queue
- XLOPE: consistency groups queued to the secondaries
- Residual count: percentage of the side file; limit based on HDS mode settings

(Diagram: primary volumes A, B, C; journals J1-J8; secondary volumes A', B'.)

z/OS Global Mirror - XRC Update Processing

1. Timestamped application write
2. Primary disk subsystem maintains updates in cache
3. SDM requests data transfer
4. Updates are sent to the SDM
5. SDM forms a consistency group
6. SDM journals the consistency group
7. SDM writes the consistency group to the secondary volumes
8. SDM notes the consistency group complete
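The eight steps above can be sketched as a toy pipeline. This is a simulation only - the real System Data Mover is part of DFSMS and the class and method names here are invented - but it shows the essential ordering: writes are timestamped at the primary, the data mover drains them, sorts them into a time-consistent group, journals the group, and only then applies it to the secondaries.

```python
import itertools

class ToySDM:
    """Toy model of XRC update flow: primary cache -> SDM -> journal -> secondary."""
    def __init__(self):
        self.ts = itertools.count(1)   # stand-in for the common timestamp source
        self.primary_cache = []        # step 2: updates held at the primary CU
        self.journal = []              # step 6: journalled consistency groups
        self.secondary = {}            # step 7: state of the secondary volumes

    def app_write(self, vol, data):
        # steps 1-2: timestamped write, kept in primary cache for the SDM
        self.primary_cache.append((next(self.ts), vol, data))

    def sdm_cycle(self):
        # steps 3-4: SDM reads (drains) the primary cache
        updates, self.primary_cache = self.primary_cache, []
        # step 5: form a consistency group in timestamp order
        cg = sorted(updates)
        # step 6: harden the group in the journal before touching secondaries
        self.journal.append(cg)
        # steps 7-8: apply to secondary volumes; the group is now complete
        for _, vol, data in cg:
            self.secondary[vol] = data

sdm = ToySDM()
sdm.app_write("A", "x1"); sdm.app_write("B", "y1"); sdm.app_write("A", "x2")
sdm.sdm_cycle()
print(sdm.secondary)   # {'A': 'x2', 'B': 'y1'} - last write per volume, in time order
```

Journalling before applying (step 6 before step 7) is what lets XRECOVER replay the latest updates and guarantee a consistent secondary image after a failure.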

z/OS Global Mirror - Control Datasets

hlq.XCOPY.PARMLIB (also SYS1.PARMLIB(ANTXIN00))
- Contains tuning parameters
- Member name ALL or sessid
- Read at XSTART time
- Verified and changed with the XSET command

hlq.XCOPY.msessid.MASTER
- Dataset for coupled SDMs; 1 cylinder; supports 14 SDMs
- Used to coordinate consistency between SDMs

hlq.XCOPY.sessid.CONTROL
- Sequential dataset; keeps track of journal and secondary updates
- Read at XRECOVER to apply the latest updates and ensure consistency
- Typically placed on the same volume as the MASTER dataset in a coupled environment

z/OS Global Mirror - Journal and State Datasets

hlq.XCOPY.sessid.JRNL01-16
- Used in odd/even pairs
- Striped (2 stripes); should be placed in odd/even LSSs
- Best performance if not on the secondary CU (not always practical)

hlq.XCOPY.sessid.STATE
- PDSE; read at XRECOVER to relabel the secondary volumes
- Typically placed on the same volume as the MASTER and CONTROL datasets
- MONITOR1 records are kept here; used by the XRC Monitor
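The naming scheme on these two slides is regular enough to capture in a small helper. This is only a sketch of the convention as described above; the hlq and session IDs passed in are placeholders:

```python
def xrc_datasets(hlq, sessid, msessid=None, journals=16):
    """Compose XRC control dataset names per the hlq.XCOPY.* scheme (sketch only)."""
    names = {
        "parmlib": f"{hlq}.XCOPY.PARMLIB",
        "control": f"{hlq}.XCOPY.{sessid}.CONTROL",
        "state":   f"{hlq}.XCOPY.{sessid}.STATE",
        # journals JRNL01..JRNLnn, allocated in odd/even pairs
        "journals": [f"{hlq}.XCOPY.{sessid}.JRNL{n:02d}" for n in range(1, journals + 1)],
    }
    if msessid:  # the MASTER dataset only exists for coupled SDMs
        names["master"] = f"{hlq}.XCOPY.{msessid}.MASTER"
    return names

ds = xrc_datasets("XRC", "SESS1", msessid="MSESS", journals=4)
print(ds["control"])      # XRC.XCOPY.SESS1.CONTROL
print(ds["journals"][0])  # XRC.XCOPY.SESS1.JRNL01
```

Keeping CONTROL, STATE and MASTER on the same volume, as the slides recommend for coupled environments, means one helper call describes everything XRECOVER needs to find.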

Global Mirror Overview

Global Mirror - One of Many Possible Configurations

(Diagram: local DS8000 / DS6000 / ESS 800 / ESS 750 with a master control server, SAN fabric transmission to a remote DS8000 / ESS 750 / ESS 800 Turbo, where FlashCopy holds the consistent data.)

Designed to provide:
- Global distance: two-site, unlimited-distance, data-consistent asynchronous disk mirroring
- Heterogeneous: data can span zSeries and open systems, and can contain a mix of zSeries and open systems data
- Scalability: consistency group supported across up to 17 total ESSs in a Global Mirror session (with RPQ)
- Flexibility: many possible configurations
- Application performance: native mirroring performance; two Fibre Channel disk mirroring links per ESS are sufficient for almost all workloads

Intended benefits:
- Autonomic: no active external controlling software required to form consistency groups
- Saves cost: no server cycles required to manage consistency groups
- Lowers TCO: designed to provide improved performance, global distances, and lower costs

Global Mirror - Session Topology

(Diagram: a MASTER storage subsystem owning session number 12, with SUBORDINATE subsystems joined per DSS/LSS; session actions shown: SBINFO, START, PAUSE, RESUME, STOP, DEFINE/UNDEFINE, JOIN/REMOVE.)

Global Mirror - Session Setup (A to B to C)

1. Establish PPRC paths
2. Establish PPRC (XD) pairs (A to B)
3. Establish FlashCopy (ASYNC) (B to C); done via the PPRC link (in-band)
4. Establish a SESSIONLINK to the subordinate
5. Define the SESSION to each primary LSS
6. JOIN volumes to the session (within the LSS)
7. START the GM session with the MASTER
8. Consistency group creation starts once all initial copies are done

No action is required in the recovery site!
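The ordering of these steps matters: pairs need paths, a start needs joined volumes, and so on. A toy state machine (illustrative only - this is not DS CLI or TSO syntax, and the step names are invented shorthand for the slide's steps) makes the prerequisites explicit:

```python
class ToyGMSetup:
    """Toy model of Global Mirror setup ordering; each step requires all earlier ones."""
    STEPS = ["paths", "xd_pairs", "flashcopy", "sessionlink",
             "define_session", "join_volumes", "start"]

    def __init__(self):
        self.done = []

    def do(self, step):
        # refuse a step whose prerequisites have not completed yet
        missing = [s for s in self.STEPS[:self.STEPS.index(step)]
                   if s not in self.done]
        if missing:
            raise RuntimeError(f"{step} needs {missing} first")
        self.done.append(step)

gm = ToyGMSetup()
for step in ToyGMSetup.STEPS:
    gm.do(step)          # in order: every step succeeds
print(gm.done[-1])       # start
```

Note what is absent from the model: there is no step for the recovery site, mirroring the slide's point that site C needs no action during setup.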

Global Mirror - Consistency Group Timing

(Timeline diagram: coordination time (COORDINTERVAL), drain to the B disk, FlashCopy coordination (CGDRAIN), repeating every CGINTERVAL.)

- COORDINTERVAL (ms, default 15 ms) and CGDRAIN (seconds) are the maximum allowed times
- If either is exceeded, the consistency group is lost but retried the next time around!
- RPO will be CGINTERVAL (seconds) plus the time to form a copy, so it may vary over time
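The RPO arithmetic on this slide can be sketched numerically. All timing values below are invented examples, not defaults beyond the 15 ms COORDINTERVAL the slide mentions:

```python
def expected_rpo(cg_interval_s, coordination_ms, drain_s, flashcopy_s):
    """RPO ~= CGINTERVAL plus the time to actually form and harden a copy.

    A consistency group is attempted every cg_interval_s seconds; forming one
    costs the coordination window, the drain to the B disk, and the FlashCopy
    commit at C, so the achieved RPO varies with workload.
    """
    form_time = coordination_ms / 1000.0 + drain_s + flashcopy_s
    return cg_interval_s + form_time

def cg_survives(coordination_ms, drain_s, coordinterval_ms=15, cgdrain_s=30):
    """A CG is discarded (and retried next interval) if either limit is exceeded."""
    return coordination_ms <= coordinterval_ms and drain_s <= cgdrain_s

print(expected_rpo(cg_interval_s=3, coordination_ms=10, drain_s=1.5, flashcopy_s=0.5))
print(cg_survives(coordination_ms=10, drain_s=5))    # within both limits
print(cg_survives(coordination_ms=40, drain_s=5))    # coordination limit exceeded
```

This is why the slide says the RPO "may vary over time": the interval is fixed, but the drain and FlashCopy times depend on the write rate and link bandwidth, and a missed limit defers the group to the next interval.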

GDPS/Global Mirror - Site 1 Failure

(Diagram: application site with Blue sysplex, Red sysplex and non-sysplex z/OS systems zp1-zp5, coupling facilities CF1/CF2, open systems, and the GDPS K-sys; recovery site with the GDPS R-sys, Capacity Back Up (CBU) and discretionary backup open systems; NetView communication between K-sys and R-sys; z/OS and open systems share the disk subsystems at both sites; Global Mirror runs over unlimited distance.)

- The application site can have single z/OS systems, open systems, and systems in a sysplex
- All data (z/OS and open systems) can be mirrored using Global Mirror
- K-sys activities:
  - Manages multiple Global Mirror sessions
  - Sends device info, scripts, and alerts to the R-sys
- R-sys activities:
  - Secondary disk recovery, CBU activation, activating backup LPARs, IPLing systems

CA/DR Solution (3 Sites) - Differences

z/OS Global Mirror with PPRC and HyperSwap

(Diagram: primary disk A, PPRC secondary B, XRC secondary C; GDPS K and R systems; the SDM runs at the recovery site.)

- In GDPS V3.3 a planned HyperSwap will not work; execute an unplanned HyperSwap instead
- Switch from A to B
- The XRC sessions from A to C become unusable
- Reestablish the XRC sessions (initial copy!)
- If the outage of A lasts longer, you must decide whether to establish XRC from B
- If the outage of A is short, you can wait - but this is a business decision

z/OS Global Mirror - zSeries Solution (3 Sites)

(Diagram: Site 1 and Site 2 at metropolitan distance in a Parallel Sysplex with coupling facilities; P is the PPRC primary and XRC primary, P' the PPRC secondary; Site 3 at unlimited distance holds the XRC secondary X' and FlashCopy F'.)

Continuous availability - GDPS/PPRC or GDPS/PPRC HM:
- Designed to provide continuous availability and no data loss between Sites 1 and 2
- Sites 1 and 2 can be in the same building or at campus distance to minimize performance impact
- Site 2 servers optional

Disaster/recovery:
- Site 1 failure: switch to Site 2 disk (if a server exists in Site 2)
- Site 2 failure: production continues in Site 1
- Site 1 and 2 failure: fail over to Site 3 with minimal data loss

Continuous availability, no data loss, unlimited distance.

Metro & Global Mirror - zSeries and Open Solution (3 Sites)

(Diagram: Site 1 and Site 2 at metropolitan distance in a Parallel Sysplex with coupling facilities; P are the PPRC primaries, P' the PPRC secondaries that also serve as Global Mirror primaries; Site 3 at unlimited distance holds the GM secondaries X' and FlashCopy F.)

Continuous availability - GDPS/PPRC:
- Designed to provide continuous availability and no data loss between Sites 1 and 2
- Sites 1 and 2 can be in the same building or at campus distance to minimize performance impact
- Site 2 servers optional

Disaster/recovery:
- Site 1 failure: switch to Site 2 disk (if a server exists in Site 2)
- Site 2 failure: production continues in Site 1
- Site 1 and 2 failure: fail over to Site 3 with minimal data loss

A GDPS-managed, coordinated solution for zSeries and open systems.

Questions