Chapter 13 Selected Storage Systems and Interfaces




Chapter 13 Objectives Appreciate the role of enterprise storage as a distinct architectural entity. Expand upon basic I/O concepts to include storage protocols. Understand the interplay between data communications and storage architectures. Become familiar with a variety of widely installed I/O interfaces.

13.1 Introduction This chapter is a brief introduction to several important mass storage systems. You will encounter these ideas and architectures throughout your career. The demands and expectations placed on storage have been growing exponentially. Storage management presents an array of interesting problems, which are a subject of ongoing research.

13.1 Introduction Storage systems have become independent systems requiring good management tools and techniques. The challenges facing storage management include: Identifying and extracting meaningful information from multi-terabyte systems. Organizing disk and file structures for optimum performance. Protecting large storage pools from disk crashes.

13.2 SCSI SCSI, an acronym for Small Computer System Interface, is a set of protocols and disk I/O signaling specifications that became an ANSI standard in 1986. The key idea behind SCSI is that it pushes intelligence from the host to the interface circuits, thus making the system nearly self-managing. The SCSI specification is now in its third generation, SCSI-3, which includes both serial and parallel interfaces.

13.2 SCSI Classic SCSI-2 daisy chains the host and disk devices along a parallel cable. Depending on the implementation, the cable may contain as many as 68 pins.

13.2 SCSI The SCSI-2 protocol uses phases, as shown in this state diagram.
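The phase sequencing in the state diagram can be sketched as a simple transition table. This is a minimal illustration, not the complete SCSI-2 state diagram: the phase names follow the standard, but the transition table below is a simplified subset chosen for readability.

```python
# A minimal sketch of the SCSI-2 bus phases as a state machine.
# The transition table is a simplified subset of the full state diagram.
TRANSITIONS = {
    "BUS_FREE":    ["ARBITRATION"],
    "ARBITRATION": ["SELECTION", "RESELECTION"],
    "SELECTION":   ["COMMAND", "MESSAGE_OUT"],
    "RESELECTION": ["MESSAGE_IN"],
    "COMMAND":     ["DATA_IN", "DATA_OUT", "STATUS"],
    "DATA_IN":     ["STATUS"],
    "DATA_OUT":    ["STATUS"],
    "STATUS":      ["MESSAGE_IN"],
    "MESSAGE_OUT": ["COMMAND"],
    "MESSAGE_IN":  ["BUS_FREE"],
}

def is_legal(sequence):
    """Check whether a list of phases is a legal walk through the table."""
    return all(b in TRANSITIONS[a] for a, b in zip(sequence, sequence[1:]))

# A typical read: arbitrate for the bus, select the target, send the
# command, receive the data, then collect status and a completion message.
read_io = ["BUS_FREE", "ARBITRATION", "SELECTION", "COMMAND",
           "DATA_IN", "STATUS", "MESSAGE_IN", "BUS_FREE"]
print(is_legal(read_io))   # True
```

The point of the phase discipline is that every device on the bus agrees, at every instant, on what kind of information the bus is carrying; an I/O that skips a phase is simply illegal.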

13.2 SCSI The SCSI-3 Architecture Model (SAM) is a layered architecture of specifications for numerous serial and parallel interfaces. Each layer interacts with a host-level command architecture called the SCSI-3 Common Access Method (CAM) that can interface with practically any type of storage device. The layers communicate with each other using protocol service requests, indications, responses, and confirmations. We show the SAM on the next slide.

13.2 SCSI The SCSI-3 Architecture Model

13.2 SCSI One of the interesting SAM serial protocols is the IEEE 1394 interface, which is also known as FireWire. IEEE 1394 isn't just a storage interface; it is a peer-to-peer storage network. Its salient features include: A 6-conductor cable, 4 wires for data and control, 2 for power. Up to 15 feet of cable between each device. Up to 63 daisy-chained devices. Support for hot plugging. The next slide shows an example configuration.

13.2 SCSI An IEEE 1394 Tree Configuration

13.2 SCSI This is the IEEE 1394 protocol stack.

13.2 SCSI Serial Storage Architecture (SSA) is now of interest only in a historical sense. In the early 1990s, SSA proved the superiority of serial protocols in high demand environments. Until SSA, peripheral devices in the data center were interconnected using parallel cables. Its dual loop topology provided throughput at 40MBps (simplex) or 80MBps (duplex). The next slide shows an example configuration.

13.2 SCSI

13.2 SCSI SSA offered low attenuation over long cable runs, and was compatible with SCSI-2 commands. Devices were self-managing. They could exchange data concurrently if there was bandwidth available between the devices. There was no need to go through a host. SSA was the most promising storage architecture of its day, but its day didn't last long. SSA was soon replaced by the far superior Fibre Channel technology.

13.2 SCSI Fibre Channel is one of the preferred storage interfaces employed by large data centers and server farms. Interconnection topologies can be any of three types: switched, point-to-point, or looped.

13.2 SCSI The most widely used form of Fibre Channel is Fibre Channel Arbitrated Loop (FC-AL) in basic loop or switched hub configurations. Switched hub configurations provide maximum throughput (100MBps over fiber) for a practically unlimited number of devices (up to 2^24).

13.2 SCSI Fibre Channel is as much a data network protocol as it is a storage network protocol. The lower three layers of its protocol stack (shown on the next slide) are the same for data networks and storage networks. Owing to its higher level protocol mappings, Fibre Channel networks do not require direct connection to a host, and they can fit seamlessly into a data network configuration.

13.2 SCSI

13.3 Internet SCSI Fibre Channel components are costly, and the installation and maintenance of Fibre Channel systems require specialized training. Because of this, a number of alternatives are taking hold. One of the most widely deployed of these is Internet SCSI (iSCSI). The general idea is to replace the SCSI bus with an Internet connection.

13.3 Internet SCSI This diagram illustrates how a traditional parallel SCSI system processes an I/O request made by a program running on a host.

13.3 Internet SCSI This diagram illustrates how an iSCSI system processes an I/O request.

13.3 Internet SCSI Of course, there is considerable overhead involved with traversing so many protocol layers. By the time a packet of SCSI data gets to the Internet, it is encapsulated in numerous PDUs.
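A rough tally makes the overhead concrete. The header sizes below are the common fixed sizes for each layer (no optional fields or digests), so this is an illustrative sketch rather than a wire-accurate accounting.

```python
# Per-packet header overhead when a SCSI command travels over iSCSI.
# Sizes are the fixed header lengths at each layer, with no optional fields.
HEADERS = [
    ("iSCSI Basic Header Segment", 48),
    ("TCP header",                 20),
    ("IP header",                  20),
    ("Ethernet header + FCS",      18),
]

def total_overhead():
    """Total bytes of encapsulation wrapped around the SCSI payload."""
    return sum(size for _, size in HEADERS)

scsi_cdb = 16   # a 16-byte SCSI command descriptor block
print(total_overhead())              # 106 bytes of headers
print(total_overhead() / scsi_cdb)   # over 6x the size of the command itself
```

A 16-byte command arriving wrapped in roughly 100 bytes of headers is exactly the kind of imbalance that motivates the offload hardware described next.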

13.3 Internet SCSI In order to deal with such heavy overhead, iscsi systems incorporate special embedded processors called TCP offload engines (TOEs) to relieve the main processors of the protocol conversion work. An advantage of iscsi is that there are technically no distance limitations. But the use of the Internet to transfer sensitive data raises a number of security concerns that must be dealt with head on.

13.3 Internet SCSI Many organizations support both Fibre Channel and iscsi systems. The Fibre Channel systems are used for those storage arrays that support heavy transaction processing that requires excellent response time. The iscsi arrays are used for user file storage that is tolerant of delays, or for long-distance data archiving. No one expects either technology to become obsolete anytime soon.

13.4 Storage Area Networks Fibre Channel technology has enabled the development of storage area networks (SANs), which are designed specifically to support large pools of mass storage. SANs are logical extensions of host storage buses. Any type of host connected to the network has access to the same storage pool. PCs, servers, and mainframes all see the same storage system. SAN storage pools can be miles distant from their hosts. The next slide shows an example SAN.

13.4 Storage Area Networks

13.4 Storage Area Networks SANs differ from network attached storage (NAS) because they can be isolated from routine network traffic. This isolation facilitates storage management and enables various security methods. SANs have a lower protocol overhead than NAS because only the storage protocols are involved. NAS is gaining popularity because of lower costs for equipment and training of personnel, most of whom are familiar with data networking protocols. The next slide shows an example NAS.

13.4 Storage Area Networks

13.5 Other I/O Connections There are a number of popular I/O architectures that are outside the sphere of the SCSI-3 Architecture Model. The parallel I/O buses common in microcomputers are an example. The original IBM PC used an 8-bit PC/XT bus. It became an IEEE standard and was renamed the Industry Standard Architecture (ISA) bus. This bus eventually became a bottleneck in small computer systems.

13.5 Other I/O Connections Several improvements to the ISA bus have taken place over the years. The most popular is the AT bus. Variations of this bus include the AT Attachment (ATA), ATAPI, Fast ATA, and EIDE. Theoretically, ATA buses run as fast as 100MBps. Ultra ATA supports a burst transfer rate of 133MBps. AT buses are inexpensive and well supported in the industry. They are, however, too slow for today's systems.

13.5 Other I/O Connections As processor speeds continue to increase, even Ultra ATA cannot keep up. Furthermore, these fast processors dissipate a lot of heat, which must be moved away from the CPU as quickly as possible. Fat parallel disk ribbon cables impede air flow. Serial ATA (SATA) is one solution to these problems. Its present transfer rate is 300MBps (with faster rates expected soon). SATA uses thin cables that operate with lower voltages and longer distances.

13.5 Other I/O Connections Serial attached SCSI (SAS) is plug compatible with SATA. SAS also moves data at 300MBps, but unlike SATA, it can (theoretically) support 16,000 devices. Peripheral Component Interconnect (PCI) is yet another proposed replacement for the AT bus. It is a data bus replacement that operates as fast as 264MBps for a 32-bit system, and 528MBps for a 64-bit system.
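The PCI figures follow directly from bus width times clock rate, assuming one transfer per clock. The sketch below checks the arithmetic; the 66 MHz clock is an assumption inferred from the quoted rates (the original 33 MHz PCI bus peaks at half these figures).

```python
def peak_mbps(bus_width_bits, clock_mhz):
    """Peak transfer rate in MB/s: one transfer per clock, bus_width_bits wide."""
    return bus_width_bits // 8 * clock_mhz

# The 264/528 MBps figures correspond to a 66 MHz PCI clock.
print(peak_mbps(32, 66))   # 264
print(peak_mbps(64, 66))   # 528
# The original 33 MHz, 32-bit PCI bus:
print(peak_mbps(32, 33))   # 132
```

The same width-times-clock reasoning explains why parallel buses kept widening and why serial links such as SATA and SAS instead raise the clock on a narrow cable.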

13.5 Other I/O Connections Universal Serial Bus (USB) is another I/O architecture of interest. USB isn't really a bus: It's a serial peripheral interface that connects to a microcomputer expansion bus. Up to 127 devices can be cascaded off of one another up to five deep. USB 2.0 supports transfer rates of up to 480Mbps. Its low power consumption makes USB a good choice for laptop and handheld computers.
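The two USB topology limits just described can be expressed as a small validity check. This is only an illustration of the slide's figures (127 devices, five tiers of cascading); the tree model and function below are hypothetical, not part of any USB software interface.

```python
# USB topology limits per the figures above: at most 127 addressable
# devices (7-bit addresses, address 0 reserved for enumeration), cascaded
# at most five tiers deep.
MAX_TIERS = 5
MAX_DEVICES = 127

def topology_ok(device_tiers):
    """device_tiers: one entry per attached device, giving its tier (1..n)."""
    return (len(device_tiers) <= MAX_DEVICES
            and all(1 <= t <= MAX_TIERS for t in device_tiers))

print(topology_ok([1, 2, 2, 3]))   # True: four devices, all within five tiers
print(topology_ok([6]))            # False: cascaded too deep
```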

13.5 Other I/O Connections The High Performance Peripheral Interface (HIPPI) is another interface that is outside of the SCSI-3 Architecture Model. It is designed to interconnect supercomputers and high-performance mainframes. Present top speeds are 100MBps with work underway for a 6.4Gbps implementation. Without repeaters, HIPPI can travel about 164 feet (50 meters) over copper and 6 miles (10 km) over fiber.

Chapter 13 Conclusion We have examined a number of popular I/O architectures, including SCSI-2, FC-AL, ATA, SATA, SAS, PCI, USB, IEEE 1394, and HIPPI. Many of these architectures are part of the SCSI Architecture Model. Fibre Channel is most often deployed as Fibre Channel Arbitrated Loop, forming the infrastructure for storage area networks, although iSCSI is gaining fast. The next slide summarizes the capabilities of the I/O architectures presented in this chapter.

Chapter 13 Conclusion