STORAGE AREA NETWORKS MEET ENTERPRISE DATA NETWORKS




51-20-92 DATA COMMUNICATIONS MANAGEMENT (08/00)
Lisa M. Lindgren

INSIDE: Rationale for SANs; SAN Evolution and Technology Overview; Fibre Channel Details; Accommodating SAN Traffic on the Enterprise Data Network

PAYOFF IDEA: Recent advances in technology allow the merging of storage area networks (SANs) with enterprise data networks. This article, written with the enterprise data network manager in mind, provides an overview of the rationale for and the technology of SANs. It describes how the merger of these previously disjoint networks can occur and the implications for the enterprise data network.

Until now, people who manage enterprise data networks and people who manage enterprise storage have had little in common. Each has pursued a separate path, with technology and solutions unique to their particular environments. Enterprise network managers have been busy building a secure and switched infrastructure to meet the increasing bandwidth and access demands of corporate intranets and extranets. Storage management, by contrast, has been more closely tied to particular applications, such as data backup and data mirroring. Enterprises have built stand-alone storage area networks (SANs) to manage the exponentially increasing volume of data that must be stored, retrieved, and safeguarded.

With recent announcements, some enterprises will begin to merge storage-related networks with their data networks. This move, while making financial sense in some cases and providing tangible benefits, will create new challenges for the enterprise data network. This article looks at the rationale for SANs, the evolution of SANs, and the implications for the enterprise data network.

A few definitions are in order. A storage area network (SAN) is a network built for the purpose of moving data to, from, or between storage devices, such as tape libraries and disk subsystems. A SAN is built of many of the elements common in data networks, namely switches, routers, and gateways. The difference is that

these are not the same devices that are implemented in data networks. The media and protocols are different, and so is the nature of the traffic. A SAN is built to move very large data blocks efficiently and to allow organizations to manage a vast amount of SAN-attached data. By contrast, a data network must accommodate both large file transfers and small transactions, such as HTTP requests and responses and 3270/5250-style transactions.

A related term that one encounters when dealing with storage is network-attached storage, or NAS. This is not just a reshuffling of the SAN acronym. A NAS is a dedicated storage device, often called a filer or an appliance. It is attached to a data LAN (or, in some cases, a SAN) and allows end users or servers to write data to its local storage. A NAS separates the storage of the data from the client's system and the typical LAN-based application server. The NAS implements an embedded or standard OS, and must mimic at least one network operating system (NOS) and support at least one workstation operating system (WOS). Many NAS systems claim support for multiple NOSs and multiple WOSs. One common use of a NAS is to provide data backup without involving the CPU of a general-purpose application server.

In summary, a SAN is a storage infrastructure designed to store and manage terabytes of data for the enterprise. A NAS is a low-end device designed to serve a workgroup and store tens or hundreds of gigabytes. They share a common benefit, however: both SANs and NAS devices separate the data from the file server. This important benefit is explored in more detail later. Exhibit 1 depicts a basic SAN and its elements, as well as the relationship between a SAN and a data network with NAS devices.

RATIONALE FOR SANs

SANs allow the decoupling of data storage from the application hosts that access and process the data.
The concept of decoupling storage from the application host and sharing storage devices among application hosts is not new. Mainframe-based data centers have been configured this way for many years. The unique benefit of SANs, compared to mainframe-oriented storage complexes, is that a SAN supports a heterogeneous mix of application hosts. Theoretically, a single SAN could serve back-office systems based on Windows NT, Web servers based on Linux, ERP systems based on Sun Solaris, and customer service applications based on OS/390. All hosts could seamlessly access data from a pool of common storage devices, including NAS devices, JBOD (just a bunch of disks), RAID (redundant array of inexpensive disks), tape libraries, tape backup systems, and CD-ROM libraries.

Decoupling the application host from the data storage can dramatically improve overall availability. Access to particular data is not

dependent on the health of a single application host. When there is a one-to-one relationship between host and data, the host must be active and have sufficient available bandwidth to respond to a request for the data.

EXHIBIT 1: Conceptual Depiction of a Storage Area Network

SANs allow storage-to-storage connectivity so that certain procedures can take place without the involvement of an application host. For example, data mirroring, backup, and clustering can be implemented without impacting the mission-critical application hosts or the enterprise LAN or WAN. This enhances an organization's overall high-availability and disaster recovery capabilities.

SANs also permit organizations to respond quickly to demands for increased storage; this is a critical benefit. Without a SAN, the amount of storage available is proportionally related to the number of servers in the enterprise. Most organizations that have embarked upon E-commerce and E-business initiatives have discovered that their storage requirements are increasing almost exponentially. According to IBM, as organizations begin to perform business transactions via the Internet or an extranet, they can expect information volume to increase eightfold. SANs allow organizations to add new storage devices easily, with minimal impact on the application hosts.

SAN EVOLUTION AND TECHNOLOGY OVERVIEW

Before SANs, the mainframe world and the client/server world had completely different storage media, protocols, and management systems. In the mainframe world, ESCON channels and ESCON directors provided a high-speed, switched infrastructure for data centers. An ESCON director is, in fact, a switch that allows mainframes and storage subsystems to be dynamically added and removed. ESCON operated initially at 10 MBps and eventually at 17 MBps, significantly faster than its predecessor channel technology, Bus-and-Tag (4.5 MBps maximum). Networking of mainframe storage over a wide area network using proprietary protocols has also been available for many years from vendors such as Network Systems Corporation and CNT.

In the client/server world, the Small Computer Systems Interface (SCSI) is an accepted and evolving standard. SCSI is a parallel bus that supports a variety of speeds, starting at 5 MBps for SCSI-1 and now reaching 320 MBps for the new Ultra320, although most installed devices operate at 20, 40, or 80 MBps. Unlike the switched configurations possible with ESCON, however, SCSI is limited to a daisy-chained configuration with a maximum of four, eight, or sixteen devices per chain, depending on which SCSI standard is implemented. There must be one master in the chain, typically the host server.
It was the development and introduction of Fibre Channel technology that made SANs possible. Fibre Channel is the interconnect technology that allows organizations to build a shared or switched storage infrastructure that parallels a data network in many ways. Fibre Channel:

- is a set of ANSI standards
- offers high speed of 1 Gbps, with a sustained throughput of 97 MBps (the standard is scalable up to 4 Gbps)
- supports point-to-point, arbitrated loop, and fabric (switched) configurations
- supports SCSI, IP, video, and raw data formats
- supports fiber and copper cabling
- supports distances up to 10 km
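The 1 Gbps and 97 MBps figures are related through Fibre Channel's 8B/10B encoding. A back-of-the-envelope sketch: the 1.0625 Gbaud line rate and 2048-byte frame payload are standard Fibre Channel values, but the simple overhead model here is an illustration of my own, not a figure from the article.

```python
# Relate the nominal 1 Gbps Fibre Channel rate to usable throughput.
# 8B/10B encoding transmits 10 line bits per data byte, so the
# 1.0625 Gbaud serial rate yields 106.25 MBps of raw byte capacity.
LINE_RATE_BAUD = 1.0625e9                    # 1 Gbps FC serial line rate
byte_rate = LINE_RATE_BAUD * (8 / 10) / 8    # usable bytes per second
print(byte_rate / 1e6)                       # 106.25 (MBps before framing)

# Each full-size frame carries a 2048-byte data payload plus roughly
# 36 bytes of framing (SOF, header, CRC, EOF) and 24 bytes of
# inter-frame fill words -- a deliberately simplified overhead model.
payload, overhead = 2048, 36 + 24
sustained = byte_rate * payload / (payload + overhead)
print(round(sustained / 1e6, 1))             # 103.2
```

The result lands a few MBps above the 97 MBps sustained figure cited above; real transfers also pay for upper-layer protocol exchanges, which closes the gap.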

Fibre Channel is used primarily for storage connectivity today. However, the Fibre Channel Industry Association (www.fibrechannel.com) positions Fibre Channel as a viable networking alternative to Gigabit Ethernet and ATM, citing CAD/CAE, imaging, and corporate backbones as good targets for Fibre Channel networking. In reality, it is unlikely that Fibre Channel will gain much of a toehold in the enterprise network, because it would require a wholesale conversion of NICs, drivers, and applications, the very reason that ATM has lost out to Gigabit Ethernet in many environments.

SANs within a campus are built using Fibre Channel hubs, switches, and gateways. The hubs, like data networking hubs, provide shared bandwidth. Hubs link individual elements together to form an arbitrated loop. Disk systems integrate a loop into the backplane and then implement a port bypass circuit so that individual disks are hot swappable. Fibre Channel switches are analogous to Ethernet switches: they offer dedicated bandwidth to each device that is directly attached to a single port in a point-to-point configuration. Like LAN switches, Fibre Channel switches are stackable, so the switch fabric is scalable to thousands of ports.

Host systems (e.g., PC servers, mainframes) support Fibre Channel host adapter slots or cards. Many hosts are configured with a LAN or WAN adapter as well, for direct access to the data network (see Exhibit 2). Newer storage devices have direct Fibre Channel adapters. Older storage devices can be integrated into the Fibre Channel fabric by connecting to a SCSI-to-FC gateway or bridge.

FIBRE CHANNEL DETAILS

Fibre Channel has been evolving since 1988. It is a complex set of standards, defined in approximately 20 individual standards documents under the ANSI standards body.
Although a thorough treatment of this complex and comprehensive set of standards is beyond the scope of this article, the basics of Fibre Channel layers, protocols, speeds and media, topologies, and port types are provided here.

Like other networking technologies, Fibre Channel provides some of the services defined by the Open Systems Interconnection (OSI) seven-layer reference model. The Fibre Channel standards define the physical layer up to approximately the transport layer of the OSI model, broken down into five layers: FC-0, FC-1, FC-2, FC-3, and FC-4. Fibre Channel itself does not define a particular transport or upper layer protocol. Instead, it defines mappings from several popular upper layer protocols (e.g., SCSI, IP) to Fibre Channel. Exhibit 3 summarizes the functions of the five Fibre Channel layers.

EXHIBIT 2: Components of a Storage Area Network

EXHIBIT 3: Fibre Channel Layers

FC-0: Signaling, media specifications, receiver/transmitter specifications
FC-1: 8B/10B character encoding, link maintenance
FC-2: Frame format, sequence management, exchange management, flow control, classes of service, login/logout, topologies, segmentation and reassembly
FC-3: Services for multiple ports on one node
FC-4: Upper Layer Protocol (ULP) mapping: Small Computer System Interface (SCSI), Internet Protocol (IP), High Performance Parallel Interface (HIPPI), Asynchronous Transfer Mode Adaptation Layer 5 (ATM-AAL5), Intelligent Peripheral Interface 3 (IPI-3) (disk and tape), Single Byte Command Code Sets (SBCCS), and future ULPs

Source: University of New Hampshire InterOperability Lab.

Although its name may imply otherwise, the Fibre Channel standard supports transmission over both fiber and copper cabling, up to the full-speed rate of 100 megabytes per second (MBps). Slower rates are supported as well; products are currently available at half-, quarter-, and eighth-speed, representing 50, 25, and 12.5 MBps, respectively. Higher speeds of 200 and 400 MBps are also supported and implemented in today's products, but only fiber cabling is supported at these higher speeds.

The Fibre Channel standards support three different topologies: point-to-point, arbitrated loop, and fabric. A point-to-point topology is straightforward: a single cable connects two end points, such as a server and a disk subsystem.

The arbitrated loop topology is analogous to a shared-media LAN such as Ethernet or Token Ring. Like a LAN, the devices on an arbitrated loop share the total bandwidth. This is a complex topology, because issues like contention for the loop must be resolved, but it is the most common topology implemented today. The devices in an arbitrated loop can be connected one to another in a ring-type topology, or a centralized hub can be implemented to allow an easier and more flexible star-wired configuration. A single arbitrated loop can connect up to 127 devices, which is sufficient for many SAN implementations.

The final topology is the fabric. This is completely analogous to a switched Fast Ethernet environment. The devices and hosts are directly attached, point-to-point, to a central switch, and each connection can utilize the full bandwidth of its port. Switches can be networked together, and the fabric can support up to 2^24 (more than 16 million) devices. The fabric is the topology that offers the maximum scalability and availability; obviously, it is also the most costly of the three.

The Fibre Channel standards define a variety of port types that are implemented in various products.
Exhibit 4 provides a definition of the various port types.

EXHIBIT 4: Fibre Channel Port Types

N_Port: Node port, implemented on an end node such as a disk subsystem, server, or PC
F_Port: Port of the fabric, such as on an FC switch
L_Port: Arbitrated loop port, such as on an FC hub
NL_Port: Node port that also supports arbitrated loop
FL_Port: Fabric port that also supports arbitrated loop
E_Port: Port used to connect FC switches together
G_Port: A port that may act as either an F_Port or an E_Port
GL_Port: A G_Port that also supports arbitrated loop
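The scaling gap between the loop and fabric topologies is worth seeing in numbers. A trivial sketch, using only the 127-device loop limit and the 24-bit fabric address space described above:

```python
# Address capacity of the two shared Fibre Channel topologies:
# an arbitrated loop supports up to 127 devices, while a fabric
# addresses devices with a 24-bit identifier.
LOOP_LIMIT = 127                   # devices on a single arbitrated loop
fabric_limit = 2 ** 24             # 24-bit fabric address space
print(fabric_limit)                # 16777216
print(fabric_limit // LOOP_LIMIT)  # a fabric scales ~132,000x beyond a loop
```

This is why the fabric, despite its cost, is the topology of choice for large, highly available SANs.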

ACCOMMODATING SAN TRAFFIC ON THE ENTERPRISE DATA NETWORK

Enterprises are widely implementing SANs to meet the growing demand for enterprise storage, and the benefits are real and immediate. In some cases, however, the 10-kilometer distance limit of a SAN can be an impediment. For example, a disaster recovery scheme may require sending large amounts of data to a sister site located in another region of the country, hundreds of miles away. For this and other applications, enterprises need to send SAN traffic over a WAN. This should not be done lightly, because WAN speeds are often an order of magnitude lower than campus speeds and the amount of data can be enormous. However, there are very real and valid instances in which it is desirable or imperative to send storage traffic over a WAN, including:

- remote tape backup for disaster recovery
- remote disk mirroring for continuous business operations
- use of a storage service provider for outsourced storage services

Enterprises have two basic choices in extending the SAN to the wide area: build a stand-alone WAN that is used only for storage traffic, or integrate storage traffic with the existing data WAN. A stand-alone WAN can be built with proprietary protocols over high-speed links, or it can utilize ATM. The obvious drawback of this approach is its high cost of ownership; if the links are not fully utilized for a large portion of the day and week, it may be difficult to justify a separate infrastructure and the ongoing telecommunication costs. The advantage is that it dedicates bandwidth to storage management.

A shared-network approach may be viable in certain instances. With this approach, the SAN traffic shares the WAN with the traditional enterprise data network. Various approaches exist to allow this. As already detailed, the Fibre Channel standards define a mapping for IP-over-FC, so products that implement the IP mapping will work natively over any IP-based data WAN.
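The order-of-magnitude gap between WAN and campus speeds is easy to quantify. A minimal planning sketch: the link rates are standard T1/T3/OC-3 speeds, but the 500 GB nightly volume is an illustrative assumption, not a figure from the article.

```python
# Rough backup-window check: hours needed to move a nightly data set
# over common WAN links, ignoring protocol overhead and competing
# traffic (both of which lengthen real transfers).

def transfer_hours(data_gb: float, link_mbps: float) -> float:
    """Hours to move data_gb gigabytes over a link of link_mbps megabits/s."""
    return (data_gb * 1e9 * 8) / (link_mbps * 1e6) / 3600

NIGHTLY_GB = 500  # illustrative volume for a remote backup
for name, mbps in [("T1", 1.544), ("T3", 45.0), ("OC-3", 155.0)]:
    print(f"{name:5s} {transfer_hours(NIGHTLY_GB, mbps):8.1f} hours")
# T1: ~720 hours (hopeless); T3: ~25 hours (misses any nightly window);
# OC-3: ~7 hours (workable only with careful scheduling)
```

Numbers like these are exactly what the planning questions below are meant to surface before storage traffic is placed on a shared WAN.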
Other approaches encapsulate proprietary storage-oriented protocols (e.g., EMC's proprietary remote data protocol, Symmetrix Remote Data Facility, or SRDF) within TCP/IP so that the traffic is seamlessly transported on the WAN.

What does all this mean to networking vendors and enterprise network managers? First and foremost, it means that the data WAN, already besieged with requests for increased bandwidth to support new E-commerce and E-business applications, may need to deal with a potentially huge new type of traffic not previously anticipated. The key to making a shared storage/data network work will be cooperative planning between the affected IT organizations. For example, can the storage traffic use the network only during periods of low transaction traffic? What is the amount of data, and what is the window in which the transfer must be completed? What bandwidth management, quality-of-service, and queuing tools are available to allow the two environments to coexist peacefully? These are the critical questions that the enterprise data manager must ask to begin defining a solution that will minimize the impact on the regular data traffic.

SUMMARY

Storage area networks (SANs) are being implemented in enterprises of all sizes. The separation of data storage from the application or file server has numerous benefits. Fibre Channel, a set of standards defined over a period of years to support high speeds and ubiquitous connectivity, offers the enterprise a variety of topologies. In some cases, however, the SAN must be extended over a wide area data network. When this happens, the impact on the data network can be severe if proper planning and tools are not put in place. The enterprise data manager must understand the type, quantity, duration, and timing of the storage traffic in order to integrate it with the enterprise data network while minimizing the impact on both operations.

Lisa M. Lindgren is an independent consultant, freelance high-tech marketing specialist, and co-editor of Auerbach's Data Communications Management. She has more than 15 years of experience working for leading enterprise-networking vendors, most recently Cisco Systems. She has an MBA from the University of St. Thomas and a BA in Computer Science from the University of Minnesota.