FICON Extended Distance Solution (FEDS)



IBM eServer zSeries FICON Extended Distance Solution (FEDS): The Optimal Transport Solution for Backup and Recovery in a Metropolitan Area Network. Author: Brian Fallon, bfallon@us.ibm.com

FEDS: The Optimal Transport Solution for Backup and Recovery in a Metropolitan Area Network (MAN)

Traditional backup and recovery for customer data centers is typically implemented with channel extension schemes, and these connectivity requirements are well understood by networking and data center personnel alike. T1, T3, and other telco lines are time division multiplexed onto Synchronous Optical Network (SONET) rings between data center sites, delivering tape, print, and disk data. A SONET ring can typically handle a capacity of 2.5 gigabits per second. But as businesses have grown, so have their bandwidth requirements. This paper focuses on a scalable solution built to deliver higher capacity at faster speeds for lower cost. The FICON Extended Distance Solution (FEDS) is the answer for customers that require such capacity within 150 km (90 fiber miles) in the Metropolitan Area Network.

The Problem:

In today's world, terabytes of disk data are used for image and video applications. Traditional channel extension requires transport over T1, T3, or OC-3 telecommunications lines, as shown in Figure 1. These line speeds do not provide sufficient bandwidth to transport the kind of capacity required for today's Metropolitan Area Networks (MAN).

Figure 1: Traditional channel extension out to 90 miles. ESCON channels feed channel extension equipment and T1/T3/OC-3 links onto an OC-48 SONET ring at each site.

- An Enterprise Server can only handle a maximum of 256 ESCON channels.
- T1 speed = 1.544 Mbps, T3 speed = 45 Mbps, OC-3 speed = 155 Mbps, ESCON speed = 200 Mbps (slower speeds = poorer performance).
- FICON cannot run over SONET.
- OC-48 SONET ring capacity = 2.5 Gbps (about 12 ESCON channels); multiple rings are required to accommodate growth = higher costs.
- Channel extension equipment is not scalable; additional equipment is needed to accommodate growth = higher costs.

As stated previously, legacy channel extension uses a SONET ring infrastructure that on average can only handle between 622 megabits per second (Mbps) and 2.5 gigabits per second (Gbps). This equates to a maximum of about 12 ESCON channels in a 2.5 Gbps SONET ring, and it does not take other types of batch and interactive data traffic into consideration. Customers can find themselves trapped in a never-ending bandwidth crunch that requires more and more T1s, T3s, and SONET rings just to accommodate even the most basic data growth within their organization.

The Answer: FICON Extended Distance Solution (FEDS)

Several finance and securities customers required a more scalable channel extension solution to meet their data needs for backup and recovery within 150 km (90 fiber miles). To meet this challenge, IBM constructed a FEDS lab at its Gaithersburg location. The following customer requirements were addressed:

- Reduce I/O channels on IBM S/390 Enterprise Servers and IBM eServer zSeries.
- Remove data buffering equipment.
- Provide for transport scalability.
- Provide for seamless migration from an asynchronous disk copy to a synchronous disk copy technology when implementing the Geographically Dispersed Parallel Sysplex (GDPS) solution.

FICON: FEDS utilizes IBM's FIbre CONnection (FICON) technology to transport significant amounts of data at high speeds between data centers over 150 km (90 fiber miles) apart. FICON is the next step in channel I/O technology, available on the S/390 Enterprise Server and zSeries, and replaces the current ESCON channel implementation.
FICON resides at layer 5 (FC-4) of the Fibre Channel protocol stack (refer to Figure 2). A FICON channel can deliver data at speeds of up to 1.0625 Gbps at distances of over 150 km (90 fiber miles), as compared to T1 (1.544 Mbps) or T3 (45 Mbps). FICON reduces the number of protocol exchanges compared to ESCON, and therefore can transport data over longer distances with minimal performance impact. FICON can preserve the customer's investment in S/390 and zSeries I/O capacity by replacing a number of ESCON channels with a smaller number of high-capacity FICON channels.
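As a back-of-the-envelope check on these figures, the short Python sketch below uses only the nominal link rates quoted in this paper (not measured effective throughputs) to show both how few ESCON channels fill an OC-48 SONET ring and roughly how many ESCON channels one FICON channel matches on raw rate. The function names are illustrative only.

```python
# Link-rate arithmetic using only the nominal rates quoted in this paper (Mbps).
LINK_RATES_MBPS = {
    "T1": 1.544,
    "T3": 45.0,
    "OC-3": 155.0,
    "ESCON": 200.0,
    "FICON": 1062.5,          # 1.0625 Gbps
    "OC-48 SONET ring": 2500.0,
}

def channels_per_ring(ring_mbps: float, channel_mbps: float) -> int:
    """How many channels of a given nominal rate fit into one ring, ignoring overhead."""
    return int(ring_mbps // channel_mbps)

def escon_per_ficon() -> float:
    """Raw-rate ratio of one FICON channel to one ESCON channel."""
    return LINK_RATES_MBPS["FICON"] / LINK_RATES_MBPS["ESCON"]

if __name__ == "__main__":
    # About 12 ESCON channels consume an entire OC-48 ring, as the text states.
    print("ESCON channels per OC-48 ring:",
          channels_per_ring(LINK_RATES_MBPS["OC-48 SONET ring"],
                            LINK_RATES_MBPS["ESCON"]))
    # On nominal rate alone, one FICON channel is worth roughly five ESCON channels.
    print(f"ESCON channels matched by one FICON channel: ~{escon_per_ficon():.1f}")
```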

FICON Builds on Fibre Channel Standards

Figure 2: FICON builds on the Fibre Channel standards. The stack runs from FC-0 (interface/media: single-mode fibre, multimode fibre, copper) through FC-1 (encode/decode), FC-2 (framing protocol/flow control), and FC-3 (common services) up to the FC-4 mapping layer for upper level protocols (IPI, SCSI, HIPPI, SB, IP, audio/video, 802.2) serving channels, networks, and multimedia. FICON leverages the FC-PH Fibre Channel standards at the FC-3 layer and below; IBM is enhancing the FC-4 layer with FICON and has initiated an open committee process with NCITS (ANSI) to standardize this FC-4 mapping layer proposal. Native protocols differ from bridge protocols.

DWDM: Dense Wavelength Division Multiplexing (DWDM) can take multiple data streams (up to 80 Gbps when using an IBM Fiber Saver 2029 Dense Wavelength Division Multiplexer) and multiplex them over just two pairs of optical fibers. Used in conjunction with optical amplifiers, DWDM can extend the FICON channel out beyond 150 km (90 fiber miles).

Remote Copy: Remote copy provides the ability to mirror or copy data from the application (local) site to the recovery (secondary) site. This can be achieved through two specific implementations:

eXtended Remote Copy (XRC) is a combined hardware and software asynchronous remote copy solution. The application I/O is signaled complete when the data update to the primary storage is completed. Subsequently, a DFSMSdfp component called the System Data Mover (SDM) asynchronously offloads data from the primary storage subsystem's cache and updates the secondary disk volumes in the recovery site. Data consistency in an XRC environment is ensured by the Consistency Group (CG) processing performed by the SDM. The CG contains records whose order of update is preserved across multiple Logical Control Units within a storage subsystem and across multiple storage subsystems. XRC operation results in minimal performance impact to the application systems at the application site. Unlike Peer-to-Peer Remote Copy (PPRC), XRC is not distance dependent.

Peer-to-Peer Remote Copy (PPRC) is a hardware solution that synchronously mirrors data residing on a set of disk volumes, called the primary volumes, in the application site to secondary disk volumes on a second system at another site, the recovery site. Only when the application site storage subsystem receives write complete from the recovery site storage subsystem is the application I/O signaled complete. PPRC is distance sensitive.

Test Setup and Test Results:

Test Setup: In the IBM Gaithersburg test setup, two LPARs, LPAR1 and LPAR2, running on an IBM 9672-ZZ7 server simulated the application site and the recovery site, as shown in Figure 3. The I/O driver executed in the application site's LPAR1, copying data from IBM Enterprise Storage Server 1 (ESS1) to ESS2 to simulate a production environment. The eXtended Remote Copy / System Data Mover (XRC/SDM) executed in the recovery site's LPAR2, asynchronously copying data from the primary XRC disk subsystem ESS2 in the application site to the secondary disk subsystem ESS3 in the recovery site. Two FICON channels from the SDM image were Dense Wavelength Division Multiplexed using an IBM 2029 Fiber Saver in the recovery site and another IBM 2029 Fiber Saver in the application site. In the application site, the two FICON channels were split out over ESCON channels via an IBM 9032-5 Director with the FICON bridge card feature. The ESCON channels in turn attach to ESS2.
Figure 3: FICON/XRC test setup. A 9672-ZZ7 with LPAR 1 (I/O driver) and LPAR 2 (System Data Mover); ESS1, ESS2, and ESS3 with 32 volumes each; a Director Model 5; two IBM 2029 Fiber Savers; and an optical amplifier on the inter-site fiber.
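The PPRC and XRC behavior described in the Remote Copy section above can be summarized in a small sketch. This is an illustrative model only, not IBM microcode or DFSMSdfp logic: the class and method names are hypothetical, and the sketch simply contrasts a synchronous write (acknowledged only after the secondary confirms) with an asynchronous write (acknowledged immediately, then drained in timestamp-ordered consistency groups by a data mover).

```python
import time
from collections import deque

class SyncMirror:
    """PPRC-style: the application write completes only after the secondary acknowledges."""
    def __init__(self, secondary_rtt_s: float):
        self.secondary_rtt_s = secondary_rtt_s  # round trip to the recovery site
        self.secondary = []

    def write(self, record):
        time.sleep(self.secondary_rtt_s)   # wait for "write complete" from the secondary
        self.secondary.append(record)
        return "I/O complete"              # signaled only after the remote copy is done

class AsyncMirror:
    """XRC-style: the application write completes immediately; a data mover later drains
    timestamp-ordered consistency groups to the secondary."""
    def __init__(self):
        self.pending = deque()
        self.secondary = []

    def write(self, record):
        self.pending.append((time.time(), record))  # capture the update with its timestamp
        return "I/O complete"                        # no wait on the remote site

    def drain_consistency_group(self):
        # Apply pending updates in timestamp order, preserving the update sequence
        # across volumes, in the spirit of the SDM's consistency group processing.
        group = sorted(self.pending)
        self.pending.clear()
        self.secondary.extend(rec for _, rec in group)

if __name__ == "__main__":
    pprc = SyncMirror(secondary_rtt_s=0.0015)  # ~1.5 ms round trip over ~150 km of fiber
    xrc = AsyncMirror()
    print(pprc.write("update-1"))  # pays the round-trip delay on every write
    print(xrc.write("update-1"))   # returns immediately
    xrc.drain_consistency_group()
```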

Tests: Various tests, indicative of XRC's and FICON's scalability over distances ranging from 0 km to 150 km, were conducted:

- Elapsed time to initially synchronize 32 volumes from ESS2 to ESS3.
- Sustained throughput achieved during the 32-volume synchronization process from ESS2 to ESS3.
- Elapsed time to copy 100% write data, generated by 32 DSS Copy jobs, from ESS1 to ESS2 while copying updated data from 32 volumes of ESS2 to ESS3 using XRC.
- Sustained throughput to copy 100% write data, generated by 32 DSS Copy jobs, from ESS1 to ESS2 while copying updated data from 32 volumes of ESS2 to ESS3 using XRC.
- A final test that required breaking the primary fiber path and dynamically switching to the secondary (backup) fiber path without loss of data.

Test Results: Figure 4 shows the test results summarized below.

Figure 4: FICON/XRC test results. ESS test case results for XRC/SDM via the 2029 and optical amplifier at distances from 0 to 150 km, plotting elapsed time (minutes) and average throughput (MB/s) for the 32 XADDPAIR and 32 DSS Copy runs. Notes: (1) XRC XADDPAIR initial copy/resynch processing is highly optimized; (2) DSS Copy jobs are 100% write activity, whereas most enterprises' workloads are 20-30% write.

- Elapsed time to initialize 32 volumes (XADDPAIRs) remained consistent from 0 km to 150 km. The time ranged from 15:53 min (0 km) to :32 min (150 km).
- Aggregate throughput for the 32 XADDPAIRs was a consistent 70 to 74 MB/s (megabytes per second) across the two FICON channels from 0 to 150 km. The average throughput per channel was 35 to 37 MB/s.
- The 32 DSS Copy jobs from ESS1 to ESS2, combined with XRC copying updates from 32 volumes of ESS2 to ESS3, had an elapsed time of 24.5 min that remained consistent as the distance was varied from 0 to 150 km.
- The 32 DSS Copy jobs from ESS1 to ESS2, combined with XRC copying updates from 32 volumes of ESS2 to ESS3, achieved a consistent aggregate throughput of 46 MB/s across the two FICON channels from 0 to 150 km. The average throughput per channel was 23 MB/s.
- When the primary fiber path failed, the 2029 automatically switched over to the secondary fiber path without any impact to the XRC remote copy application. XRC/SDM continued normal execution after the switchover.

FEDS, The Right Choice For Metropolitan GDPS

Geographically Dispersed Parallel Sysplex (GDPS) is a multisite management facility, a combination of system code and automation, that utilizes the capabilities of Parallel Sysplex technology, storage subsystem mirroring, and databases to manage enterprise servers, storage, and network resources. It is designed to minimize and potentially eliminate the impact of a disaster or planned site outage. It provides the ability to perform a controlled site switch for both planned and unplanned site outages, with no data loss in a synchronous environment (PPRC) and minimal data loss in an asynchronous environment (XRC). GDPS maintains full data integrity across multiple volumes and storage subsystems. Currently, GDPS in synchronous mode (PPRC), known as GDPS/PPRC, is restricted to 40 km (24 fiber miles) between data centers. In the future, GDPS/PPRC is planned to extend the distance between data centers to 100 km (60 fiber miles). In the interim, customers can implement GDPS for disaster recovery using XRC, known as GDPS/XRC. Both forms of disk copy can use FEDS for optical fiber transport: XRC over FEDS today, and PPRC over FEDS in the future. By using FEDS, the customer can protect their transport infrastructure investment. (Refer to Figures 5 and 6.)
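One hedged way to see why synchronous PPRC is distance restricted while the asynchronous XRC results above were flat with distance is to estimate fiber propagation delay. The sketch below assumes roughly 5 microseconds of one-way latency per km of fiber, a common rule-of-thumb figure that is not taken from this paper, and the function name is illustrative.

```python
# Rough propagation-delay estimate for a synchronous (PPRC-style) write over distance.
# Assumption: ~5 microseconds of one-way latency per km of fiber (rule of thumb).
ONE_WAY_US_PER_KM = 5.0

def sync_write_penalty_ms(distance_km: float, protocol_round_trips: int = 1) -> float:
    """Extra latency added to every synchronous write by fiber distance alone."""
    round_trip_us = 2 * ONE_WAY_US_PER_KM * distance_km
    return protocol_round_trips * round_trip_us / 1000.0

if __name__ == "__main__":
    for km in (0, 40, 100, 150):
        print(f"{km:>3} km: ~{sync_write_penalty_ms(km):.2f} ms added per synchronous write")
    # An asynchronous XRC write completes at the primary, so it avoids this per-write
    # penalty, which is consistent with the flat elapsed-time curves in Figure 4.
```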

Figure 5: Current GDPS/XRC implementation. FEDS transition from GDPS/XRC to GDPS/PPRC within 60 fiber miles: RCMF/XRC, two 2029 DWDMs and optical amplifiers spanning up to 100 km with 68 GB FEDS capacity, a 9032-5 Director with the bridge card, ESS Shark storage with XRC, primary DASD at Site 1, and secondary DASD with the RCMF/XRC SDM at Site 2.

Figure 6: Future GDPS/PPRC implementation. FEDS transition from GDPS/XRC to GDPS/PPRC within 60 fiber miles: RCMF/PPRC, a FICON Director at each site, future ETR timer support and ISC3 links, two 2029s and amplifiers spanning up to 100 km with 134 GB FEDS capacity, ESS Shark storage running PPRC, primary DASD at Site 1, and secondary DASD at Site 2.

Additional Information:

FICON: FICON (FCV Mode) Planning Guide, SG24-5445-00; S/390 FICON Implementation Guide, SG24-59
IBM Fiber Saver: Fiber Saver (2029) Implementation Guide, SG24-5608; Fiber Saver (2029) Planning and Maintenance, SC28-6801
GDPS: ibm.com/servers/eserver/zseries/pso/

Acknowledgments: David B. Petersen, Enterprise Server Group, petersen@us.ibm.com; Noshir Dhondy, Enterprise Server Group, dhondy@us.ibm.com; Ian R. Wright, Enterprise System Connectivity Specialist, iwright@us.ibm.com

Summary: The FICON Extended Distance Solution (FEDS) has solved the traditional channel extension problems of insufficient bandwidth and scalability. The advantages provided by FEDS are:

- Scalable bandwidth up to 80 Gbps
- Consistent throughput independent of distance from 0 to 150 km (90 fiber miles)
- Fewer I/O channels on the enterprise server
- Faster data rates
- Lower cost and simpler implementation
- A migration path from GDPS/XRC to GDPS/PPRC

Copyright IBM Corporation 2001. IBM Corporation, Integrated Marketing Communications, Server Group, Route 100, Somers, NY 10589. Produced in the United States of America, 09-01. All Rights Reserved. References in this publication to IBM products or services do not imply that IBM intends to make them available in every country in which IBM operates. Consult your local IBM business contact for information on the products, features, and services available in your area. IBM, the IBM logo, the e-business logo, ESCON, Enterprise Storage Server, FICON, Geographically Dispersed Parallel Sysplex, GDPS, Parallel Sysplex, S/390, and zSeries are trademarks or registered trademarks of IBM Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds. Tivoli is a registered trademark of Tivoli Systems Inc. in the United States, other countries, or both. UNIX is a registered trademark in the United States and other countries, licensed exclusively through The Open Group. Windows NT is a registered trademark of Microsoft Corporation. Other trademarks and registered trademarks are the properties of their respective companies. IBM hardware products are manufactured from new parts, or new and used parts. Regardless, our warranty terms apply. Photographs shown are of engineering prototypes. Changes may be incorporated in production models. This equipment is subject to all applicable FCC rules and will comply with them upon delivery. Information concerning non-IBM products was obtained from the suppliers of those products. Questions concerning those products should be directed to those suppliers. All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. GM13-0092-00.