Evolution of the System z Channel
1 Evolution of the System z Channel Howard Johnson [email protected] Patty Driever [email protected] 8 August 2011 (4:30pm 5:30pm) Session Number 9934 Room Europe 7
2 Legal Stuff Notice IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY U.S.A. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: IBM, Redbooks, System z10, z/OS, zSeries, z10. Other company, product, or service names may be trademarks or service marks of others.
3 Abstract This session examines the evolution of the channel from the birth of the industry to the looming converged infrastructure. The speakers will discuss the designs and modifications that allowed System z solutions to move from ESCON to FICON and beyond. Come see where the channel has been and where it's going. Special acknowledgement goes out to Dan Casper, "Mr. Channel" in the halls of System z Development. Dan was instrumental in the design of ESCON, FICON, and zHPF. Dan retired from IBM on July 31st, and he and his contributions will be sorely missed.
4 Agenda Yesterday Review of the origins of the channel and its early evolution: parallel channels, ESCON, FICON channel bridge Today Examine the current channel technology: native FICON, High Performance FICON (zHPF) Looking Ahead Explore the possible future of channel technology: 16G / 32G Fibre Channel, Fibre Channel over Ethernet
5 Yesterday The origins of the channel and its early evolution
6 The Early Years S/360 & S/370 I/O ARCHITECTURE
7 S/360 First real computer architecture Provided for the ability to deliver a range of models with increasing price/performance characteristics Fastest models used hard-wired logic, while microcode technology was used to deliver a wide range of performance within the S/360 family Provided a foundation that enabled application programs to migrate forward across models/systems Application-level compatibility maintained through today on System z processors Channels were specialized processors driven by a special instruction set that optimized the transfer of data between peripheral devices and system main memory Channel architecture defined the mechanism to transfer data, but was independent of device architecture S/360 systems had one byte-multiplexer channel (channel 0) and one to six selector channels A new family of I/O devices was developed for S/360 that used new standardized bus & tag interfaces
8 S/360 & S/370 I/O Architecture Designed to provide value in key mainframe focus areas: security, resiliency & performance Security/Integrity Host-based configuration definition methodology introduced in S/370 Security controls based on host definitions (IOCDS) 4-digit device number = channel + unit address CU-enforced atomicity Reserve/release Extent checking Resiliency Channel set switching (S/370) Allowed OS to connect the set of channels of the failing CPU to another CPU Performance Bi-directional data transfers (reads/writes) and transfer of non-contiguous portions of the disk in a single I/O operation Command chaining Data chaining Indirect Data Address Words (S/370) Asynchronous notification for unsolicited events Busy/no-longer-busy status [diagram: two CPUs sharing Main Store, each with Channels 0-2]
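Command chaining and data chaining show up directly in the channel program itself: each 8-byte CCW carries flag bits that either continue the current operation with another buffer (chain data) or start the next command in the same program (chain command). A minimal sketch of packing a format-0 CCW chain; the opcodes shown are typical DASD commands and all addresses and counts are illustrative, not taken from a real configuration:

```python
import struct

# Flag bits in byte 4 of a format-0 CCW (sketch; see Principles of Operation)
CD = 0x80   # chain data: next CCW supplies another buffer for this command
CC = 0x40   # chain command: next CCW starts a new command in the same program
SLI = 0x20  # suppress incorrect-length indication

def ccw(cmd, addr, flags, count):
    """Pack one 8-byte format-0 CCW: command, 24-bit data address, flags, count."""
    assert addr < (1 << 24) and count < (1 << 16)
    return struct.pack(">B3sBBH", cmd, addr.to_bytes(3, "big"), flags, 0, count)

# One chained program: SEEK, then SEARCH, then READ - a single start,
# three commands, executed by the channel without CPU involvement.
program = (
    ccw(0x07, 0x1000, CC, 6)        # SEEK, chain to the next command
    + ccw(0x31, 0x1008, CC, 5)      # SEARCH ID EQUAL, chain to the next command
    + ccw(0x06, 0x2000, SLI, 4096)  # READ DATA, last CCW in the chain
)
assert len(program) == 24
```

A CCW with CD set instead of CC would make the following CCW describe a second buffer for the same READ, which is the data-chaining case the slide mentions.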
9 Parallel Channels Introduced with S/360 in 1964 Circuit connected (multi-drop CUs & devices) 1 device at a time (max 256 devices per channel) No dynamic switching Initially supported ~200 ft distances between channel and device Typical bus & tag channel/CU communications: Initial selection sequence to establish a connection with a control unit Data transfer sequence (CU always controlled data transfer) Each byte of data sent requires a response Read: CU says "I'm sending a byte," and channel says "I received a byte" Write: CU says "I'm ready for a byte," and channel says "Here's a byte" Ending sequence (when one side recognizes that all of the available or required bytes have been transferred) Parallel channels are distance/speed limited because of skew between the transmission lines in the channel When a tag is received on one side, all of the bits of the data on the bus must be valid at that moment If an address is not recognized, the select out tag wraps back to the sender as select in [diagram: channel daisy-chained through CU1-CU8, with devices 20-2F]
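The per-byte handshake is exactly why parallel channels were distance-limited: every byte costs a full cable round trip, so doubling the cable halves the rate. A rough back-of-the-envelope model, assuming ~1.5 ns/ft signal propagation and ignoring control-unit response latency (both numbers are assumptions for illustration):

```python
# Rough model of the interlocked byte handshake on a parallel channel,
# assuming ~1.5 ns/ft propagation and zero control-unit latency.
NS_PER_FT = 1.5

def interlocked_mb_per_s(feet):
    """Each byte waits a full round trip: tag out to the CU, response back."""
    round_trip_ns = 2 * feet * NS_PER_FT
    return 1e9 / round_trip_ns / 1e6  # bytes/second -> MB/s

# Doubling the cable length halves the achievable interlocked rate;
# data streaming removed this interlock, so rate no longer fell with distance.
assert abs(interlocked_mb_per_s(400) - interlocked_mb_per_s(200) / 2) < 1e-9
```

Under these assumptions a 200 ft cable caps out well under 2 MB/s, which is why the data-streaming tag described two slides later, by letting bytes flow without waiting for each acknowledgement, both raised the rate and stretched the supported distance to 400 ft.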
10 Parallel Channels initial types Byte Multiplex Several devices could be performing a data transfer at a time, because each device could disconnect between bytes of data transferred Selector Used with devices that have high data transfer rates, requiring the channel to remain connected to the device until the entire chain of CCWs is executed Tape devices are typical examples
11 Parallel Channels evolving types Block Multiplex Devices could disconnect only after the entire block of data for the command is transferred When the device is ready, it raises request in to ask to be serviced again Data streaming Added a new tag that removed the interlock between the channel and control unit before the next byte of data could be transferred Each byte was still acknowledged, but the CU kept count of responses to know how much was transferred Enabled 400 ft. distances
12 The winds of change TRANSITION TO ESCON
13 S/370-XA (Extended Architecture) S/370: Operating system selected the channel, and then the device on that channel A channel can be addressed only by the single CPU to which it is connected, and can interrupt only that CPU Once a chain of operations is initiated with a device, the same path must be used to complete transfer of all data, commands and status S/370-XA: Operating system selected a device number (subchannel), and the channel subsystem knew all the addressing to get to that device and chose the best path Any CPU can initiate an I/O function with any device and can accept an I/O interruption from any device Device can reconnect on any available path [diagrams: S/370 channels vs. S/370-XA channel subsystem with subchannels]
14 S/370-XA (Extended Architecture) Enabled improved performance All I/O instructions are executed asynchronously by the CPU with respect to the channel subsystem All I/O busy conditions and path selection are handled by the channel subsystem rather than the CPU Reduced CPU overhead in processing no-longer-busy conditions and encountering (possibly recurring) busy conditions on path selection Eliminated program differences required to manage channels by type (e.g. selector vs. multiplexor) Reduced the number of conditions that interrupted the CPU Channel-available, control-unit-end, and device-end no-longer-busy eliminated [diagram: S/370 channels vs. S/370-XA channel subsystem with subchannels]
15 A New Generation ESCON
16 ESCON Introduced in 1990 on the 3090 system Used fiber optic (serial) connectivity instead of copper Link technology supported 20MB/sec, but achieved ~18MB/sec Supported introduction of the first modern SAN with dynamic switching capabilities But still circuit-connected (communication with one device at a time) Distance limitations improved from 400ft to 9km (3km per link) Supported 1000 devices per channel I/O configuration integrity checking with reset event architecture and self-describing devices (RNID) Fixed link addressing used in the IOCDS controlled host access to CU resources [diagram: two systems' channels connected through an ESCON switch to CUs via link addresses]
17 ESCON Resiliency State Change Notification Sent to each link level facility (channel or CU) which is potentially affected by state changes in switch ports or connected channels or control units Security Read Node Identifier Enables host program to determine the specific physical device attached at the end of the link and revalidate attachments after link-down conditions Performance Streamlined command chaining In parallel channels, when commands were chained together, after status was sent in for one command the channel would go through the selection sequence again With ESCON, the channel responded to status directly with the new command Streamlined data transfer Buffering in channels and control units removed some of the interlocks involved in data transfer
18 The winds of change TRANSITION TO FICON
19 Late 1990s - What Was Going on Inside IBM Evolution from System/360 to System/390 saw a significant increase in MIPS, main storage, and I/O capacity I/O capacity had largely been improved through the continued addition of channel paths, thus adding cost, complexity, and overall bandwidth without significant improvement in the capacity of a single channel Original 7 channels provided by S/360 evolved to the 256 channels provided by S/390 at that time 256 was the architectural, programming, and machine limit Projected processor MIPS growth along with improved controller technology and increased I/O densities also drove the need to significantly improve the I/O throughput over a single channel path
20 What was going on in the Industry ANSI Fibre Channel had its beginnings in the late 1980s Initial motivation was for higher bandwidth I/O channels that operated efficiently at fiber optic distances (tens of kilometers) FC proponents argued early on that the requirements and technologies of LAN and channel applications were converging, and that a layered architecture could deliver excellent performance, cost, and migration characteristics for many applications By the late 1990s FC had found a toehold as a storage interface
21 IBM Goals Significantly improve the I/O throughput over a single channel path Decrease the execution time of a single channel program Significantly improve the data transfer rates of a single channel path Provide for more addressable units (devices) attached to a single channel path Accomplish the above in a fashion that preserves the S/390 investment in existing programming Provide connections that support ESCON equivalent distances with negligible performance penalties at the increased distances Provide a migration path to support existing controllers currently deployed in the field Provide the above using existing industry standards where applicable, and develop new industry standards where needed
22 Comparing Characteristics ESCON Channels: Connection oriented Circuit switching Read or write (half-duplex) data transfers Dedicated path preestablished When a packet is sent, the connection is locked Synchronous data transfer One operation at a time FICON Channels: Connectionless Packet switching Simultaneous read/write (full-duplex) data transfers Packets individually routed When a packet is sent, the connection is released Asynchronous data transfer Pipelined and multiplexed operations
23 ESCON vs FICON Frame Processing [diagram] ESCON: half duplex; one ESCON channel carries one I/O operation to one control unit through the ESCON Director FICON: full duplex; one FICON channel carries multiple concurrent I/O operations to multiple control units & devices through the Director
24 Almost a bridge too far THE FICON BRIDGE
25 The FICON Converter (FCV) An ESCON-to-FICON bridge card was needed to support existing ESCON devices. The bridge card was a feature of the ESCON Directors. Bridged FICON gains some of the benefits of native FICON: 8 simultaneous transactions. Increased I/O (3,200 I/O/sec). The bandwidth is about half that of native FICON and is limited to 1 Gbps links. Bridge cards were used as a first step toward a native FICON infrastructure. The bridge card has one external FICON port and eight internal ESCON ports. Replacing an ESCON card with a Bridge card swapped 8 ESCON connections for 1 FICON connection which attached to 8 ESCON devices.
26 ESCON vs FICON Bridge Frame Processing [diagram] ESCON: half duplex; one ESCON channel sends one frame at a time through the ESCON Director to one of CU-A through CU-D FICON via Bridge: full duplex; one FICON channel carries multiple frames to the Bridge card in the ESCON Director, which fans them out as ESCON frames to ESCON control units CU-A through CU-D
27 Operating Modes Bridge Attachment: z900/G5/G6 channels, FCS links to the Bridge in the ESCON Director, ESCON links to ESCON CUs Exploits the FICON channel with existing ESCON control units (Type=FCV) Native Direct Attachment: z900 channel, FCS link directly to a native FICON control unit (Type=FC) Native Switched Connectivity: z900/G5/G6 channels, FCS links through a FICON Director to CUs Full dynamic switching of control units (Type=FC)
28 ESCON & FICON by the Numbers A comparison (ESCON / FICON Bridge / FICON Express / FICON Express4):
I/O operations at a time: only one / any eight / 32 open exchanges / 64 open exchanges
Logically daisy-chained control units to a single channel: any one I/O, take turns / any 8 I/Os concurrently / any number of I/Os concurrently / any number of I/Os concurrently
Average I/Os per channel (4k blocks): 2,000-2,500 / 2,500 / 6,000 / 13,000
Unit addresses per channel: 1,024 / 16,384 / 16,384 / 16,384
Unit addresses per control unit: 1,024 / 1,024 / 16,384 / 16,384+
Bandwidth degradation: beyond 9 km / beyond 100 km / beyond 100 km / beyond 100 km
29 Today Current channel technology
30 FICON: A Fibre Channel Standard
31 Security, Resiliency, Performance Security & Resiliency Frame Delivery High Integrity Fabrics FC fabrics will validate the WWPN of a re-established ISL before allowing any data to flow on it FICON requires high integrity fabrics In Order Delivery Sequence numbers, IU numbers, CCW numbers enable pipelining and are used to associate a response with the particular command sent Bit Stream Integrity FC2 CRC created by the adapter ensures that data is not corrupted as it flows on the link FICON adds a separate CRC to ensure integrity of data at the FC4 layer (end-to-end between hosts) Carried over from ESCON: Link Incident Reporting (LIRR), State Change Notification, RNID
32 Security, Resiliency, Performance Performance Management and Droop The Fibre Channel standard provides for link flow control through a buffer-to-buffer credit scheme FC2 function that controls the stream of frames between end points on a link (i.e. between nearest neighbors) Determines the distance two nodes can be apart and still maintain full link frame rate IU Pacing - architectural provision for end-to-end flow control Prevents flooding of the target N-Port With command pipelining, a mechanism was needed to prevent over-running the control unit Extended distance support enabled the CU to dynamically change the pacing count on each Command Response
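The arithmetic behind credit "droop" is simple: the sender may have at most one frame in flight per credit, so sustaining full rate requires enough credits to cover a complete link round trip. A sketch under assumed round numbers (~5 µs/km one-way propagation in fiber, full 2112-byte payloads, and ~100 MB/s of data per Gb/s of 8b/10b line rate; all three are simplifying assumptions, not figures from the slides):

```python
import math

def credits_needed(km, gbit_per_s, payload_bytes=2112):
    """Buffer-to-buffer credits required to keep an FC link streaming at
    full rate over the given distance (illustrative model)."""
    data_bytes_per_s = gbit_per_s * 100e6        # 8b/10b: 1 Gb/s line ~ 100 MB/s
    frame_time_s = payload_bytes / data_bytes_per_s
    round_trip_s = 2 * km * 5e-6                 # ~5 us/km each way in fiber
    return math.ceil(round_trip_s / frame_time_s)

# Faster links and longer distances both demand more credits; with too few,
# the sender idles waiting for credit returns and throughput "droops."
print(credits_needed(10, 8))    # 8 Gb/s over 10 km
print(credits_needed(100, 8))   # the same link over 100 km
```

The same reasoning applies one layer up: IU pacing bounds outstanding IUs end-to-end, which is why the extended-distance support lets the CU raise the pacing count when the channel and CU are far apart.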
33 Security, Resiliency, Performance Performance Management and Droop I/O Priority Separate priority mechanisms in the I/O Subsystem, Channel and Control Unit Modified Indirect Data Addressing Words (MIDAWs) A method of gathering and scattering data into & from non-contiguous System z storage locations during an I/O operation Removed the IDAW restriction that appended data must be on a 2K storage boundary Improved performance of certain applications (e.g. DB2 sequential workloads) that process small records with Extended Format data sets Measurements granular to service class Allow algorithms for WLM-based I/O priority, DCM & intelligent data placement Dynamic CHPID Management allows adding/removing bandwidth to a control unit as workload needs dictate
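Conceptually a MIDAW list is a sequence of (address, count) pairs with no alignment restriction, letting a single I/O scatter an inbound record across arbitrary host buffers. A toy sketch of the idea; the list format, sizes, and names are illustrative, not the architected MIDAW layout:

```python
# Toy sketch of MIDAW-style scatter: one I/O delivers an inbound record
# into several non-contiguous, arbitrarily aligned host storage areas.
def scatter(midaws, record):
    """Each (offset, count) entry receives the next `count` bytes of the record."""
    memory = bytearray(64 * 1024)   # stand-in for host storage
    pos = 0
    for offset, count in midaws:
        memory[offset:offset + count] = record[pos:pos + count]
        pos += count
    return memory

# Classic IDAWs forced appended data onto 2K boundaries; MIDAWs allow
# any offset and length, e.g. splitting a 64-byte record across two areas:
mem = scatter([(100, 32), (9000, 32)], bytes(range(64)))
assert mem[100:132] == bytes(range(32))
assert mem[9000:9032] == bytes(range(32, 64))
```

Gather on the write side is the mirror image: the channel collects the (offset, count) areas in order and transmits them as one contiguous record.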
34 FICON performance Start I/Os, historical actuals [chart: I/Os per second, 4k block size, channel 100% utilized, from ESCON on G5/G6 through FICON and FICON Express on z900/z800, Express2 on z990/z890, Express4 on z9/z10, Express8 on z10/z196, and Express8S on z196/z114]
35 Revving the Engine ZHPF
36 High Performance FICON (zHPF) The host communicates directly with the control unit The channel acts as a conduit No individual commands or state tracking The CCW program is sent to the control unit in one descriptor Uses the Fibre Channel FCP link protocol The channel provides both the new and old protocols zHPF provides increased performance for small block transfers & enables greater bandwidth exploitation Complex channel programs that are not easily converted to the new protocol still execute with the existing protocol Devices accessible using both old and new protocols
37 Link Protocols for 4K Read FICON: Two exchanges opened and closed Six sequences opened and closed zHPF: One exchange opened and closed Three sequences opened and closed [diagram: channel / control unit message flows]
38 FICON performance on System z I/O driver benchmark, I/Os per second (4k block size, channel 100% utilized): from ESCON through FICON Express4 and Express8 on z10/z196 to FICON Express8S on z196/z114, with zHPF on each card - a 77% increase with zHPF on Express8S [chart] I/O driver benchmark, MegaBytes per second (full-duplex, large sequential read/write mix): FICON Express4 (4 Gbps) 350, with zHPF 520; FICON Express8 (8 Gbps) 620, with zHPF 770; FICON Express8S (8 Gbps) 620, with zHPF 1600 [chart]
39 Looking Ahead But no reading of the tea leaves
40 How fast can this go? And over what roads?
41 What's Rumbling About in the Industry? 16Gb Fibre Channel Physical interface approved as an ANSI INCITS T11.2 standard in Sept. Auto-negotiated backward compatibility to 4Gb Aimed at improved price-performance (as market matures) Twice the bandwidth of 8Gb Leapfrogs Fibre Channel storage SAN fabric over 10GbE 32Gb Fibre Channel Next logical step for Fibre Channel after 16Gb Work on-going in ANSI with proposals put forward Future will depend on market demand Fibre Channel over Ethernet (FCoE) See next charts for description Future will depend on market demand
42 What is FCoE? FCoE is a technology that straddles two worlds: Fibre Channel and Ethernet FC view: FC gets a new transport in the form of lossless Ethernet (CEE) Ethernet view: a new upper-layer protocol, or storage application, that runs over a new lossless Ethernet
43 What does FCoE look like? It's an encapsulation protocol Encapsulation protocol for transporting FC over Ethernet (lossless Ethernet: DCB), alongside the IP traffic of traditional Ethernet Ethernet frame with Ethertype = 8906h (FCoE) Layout: Ethernet Header, FCoE Header, FC Header, FC Payload, CRC, EOF, FCS FCoE header carries control information: version, ordered sets (SOF, EOF) The encapsulated frame is the same as a physical FC frame - the FC frame remains intact and FC does not change Ethernet needs a larger frame: larger than 1.5 KB Ethernet must become lossless to carry storage data with integrity
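The layering above can be sketched as plain byte concatenation. The SOF/EOF encodings and the 14-byte FCoE header here are simplified placeholders for illustration, not a wire-accurate FC-BB frame builder:

```python
import struct

ETHERTYPE_FCOE = 0x8906

def fcoe_frame(dst_mac, src_mac, fc_frame):
    """Wrap a complete FC frame (FC header + payload + FC CRC) in an
    Ethernet frame. Field contents are simplified for illustration."""
    eth_hdr = dst_mac + src_mac + struct.pack(">H", ETHERTYPE_FCOE)
    fcoe_hdr = bytes(13) + b"\x2e"   # version/reserved bytes, then an SOF code
    trailer = b"\x41" + bytes(3)     # an EOF code plus reserved bytes
    return eth_hdr + fcoe_hdr + fc_frame + trailer  # NIC appends the Ethernet FCS

# A maximum FC frame is 24 (header) + 2112 (payload) + 4 (CRC) bytes,
# so the resulting Ethernet frame exceeds the classic 1.5 KB payload limit:
frame = fcoe_frame(b"\x01" * 6, b"\x02" * 6, bytes(24 + 2112 + 4))
assert len(frame) > 1500
```

Because the FC frame rides through untouched, an FCoE switch can strip the Ethernet wrapper and hand a normal FC frame to a native SAN port, which is what makes the gateway use cases on the following charts work.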
44 Where does FCoE fit in the stack? Fibre Channel today: FC-0/FC-1 transport, FC-2, FC-3 Fibre Channel services, FC-4 Legacy Ethernet today: Layer 1/2 Ethernet, Layer 3 IP FCoE: keeps FC-2 and above (FC-3 services, FC-4) and replaces the FC-0/FC-1 transport with an encapsulation layer over CEE (Layer 1/2 lossless Ethernet) [diagram]
45 FCoE & CEE Benefits of FCoE: consolidating I/O interfaces lowers CapEx and OpEx Fewer cables Reduced number of server ports Reduced number of switches Fewer port adapters Reduced cabling Reduced power consumption Increased speed of links Increased utilization of links Primary use case: top-of-rack configuration connecting SAN A, SAN B, and the LAN [diagram]
46 Primary FCoE Use Case Before unified I/O: servers carry FC HBAs (FC traffic to SAN A/B via FC switches) plus NICs (Ethernet traffic to the corporate LAN) After unified I/O: converged CNAs carry CEE/FCoE traffic to top-of-rack FCoE switches, which split out FC to the SAN and Ethernet to the LAN Disk array and tape storage will require Fibre Channel for years to come [diagram]
47 Storage Technology Hype Cycle Curve We are here! [chart] Source: Gartner
48 As the Evolution Continues System z continues to focus on key enterprise storage fabric attributes - security, resiliency, performance - through: Continued innovation in channel and control unit architectures Working within ANSI standards committees to drive requirements into the standards of evolving technologies (such as FCoE) Working with eco-system vendor partners to provide differentiating functions
49 SHARE Orlando August 2011 THANK YOU!
50 Fun Links
51 REFERENCES
52 Speaker Biography Patty Driever IBM System z I/O and Networking Technologist Contact Information [email protected]
53 Speaker Biography Howard L. Johnson BROCADE Technology Architect, 27 years technical development and management Contact Information
54 Stuff we didn't get to BONUS INFORMATION
55 Move to S/370-XA (Extended Architecture) During S/370 SIO processing, the CP was hung up the entire time the channel was connected to a device (~100 usec) On a 1 MIP machine this was ~100 instructions The IBM 3033 in 1978 introduced the initial architectural concepts of a channel subsystem that allowed CP I/O instruction processing to be disconnected from the channel processing Buffering of status in subchannels Queuing of I/O requests in subchannels (Start-I/O-fast queuing) The CP stored information in the subchannel control block and went on its way At the end of the operation, the CP received an interrupt indicating the result of the operation Suspend-resume facility [diagram: S/370 channels vs. S/370-XA channel subsystem with subchannels]
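The decoupling described above can be pictured as a queue sitting between two asynchronous engines: the CP parks a request on the subchannel and moves on, and the channel subsystem drains the queue and posts an interrupt when the operation ends. A toy model; the class and method names are illustrative, not the architected interfaces:

```python
from collections import deque

# Toy model of Start-I/O-fast queuing between the CP and channel subsystem.
class Subchannel:
    def __init__(self):
        self.queue = deque()      # requests parked by the CP
        self.interrupts = []      # ending status posted back to the CP

    def ssch(self, program):
        """CP side: queue the channel program and return immediately."""
        self.queue.append(program)

    def run_channel(self):
        """Channel subsystem side: drain queued work, post ending status."""
        while self.queue:
            program = self.queue.popleft()
            self.interrupts.append(("device-end", program))

sub = Subchannel()
sub.ssch("read-4k")
assert sub.interrupts == []      # the CP did not wait on the channel
sub.run_channel()
assert sub.interrupts == [("device-end", "read-4k")]
```

Contrast this with the S/370 SIO model, where `ssch` would effectively have had to run `run_channel` inline, holding the CP for the full ~100 µs of the operation.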
56 ESCON Basic frame structure: Link Header (7 bytes) - frame delineation, physical & logical addressing, link function & frame type Information Field (variable) - transfer type, device addressing, device information Link Trailer (5 bytes) - CRC (2 bytes), frame delineation
57 Basic Building Block ESCON basic frame structure: Link Header (7 bytes) - frame delineation, physical & logical addressing, link function & frame type Information Field (variable) - transfer type, device addressing, device information Link Trailer (5 bytes) - CRC (2 bytes), frame delineation FICON (FC) basic frame structure: Start of Frame - four-byte SOF delimiter Frame Header - twenty-four byte fixed-format frame header Data Field (optional headers + payload) - variable size, 0 to a maximum of 2112 bytes; may contain optional headers as well as payload CRC - four bytes End of Frame - four-byte EOF delimiter
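Adding up the FC-2 fields listed above gives the familiar maximum Fibre Channel frame size:

```python
# Field sizes of an FC frame as listed above, in bytes.
SOF, HEADER, MAX_PAYLOAD, CRC, EOF = 4, 24, 2112, 4, 4

max_frame = SOF + HEADER + MAX_PAYLOAD + CRC + EOF
assert max_frame == 2148  # why FCoE needs Ethernet frames larger than 1.5 KB
```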
58 ESCON to FICON Comparison ESCON A somewhat proprietary protocol created by IBM Circuit switched, only a single switch allowed ANSI Standard: Information Technology--Single-Byte Command Code Sets CONnection (SBCON) Architecture (formerly ANSI X ) FICON A standard, Fibre Channel FC4-level protocol Packet switched, multiple switches are allowed ANSI Standards: Information Technology - Fibre Channel - Single-Byte-2 (FC-SB-2) (formerly ANSI NCITS ) Fibre Channel - Single Byte Command Set 3 (FC-SB-3) (currently T11 Project 1569-D) Both feature I/O operations that are address-centric, definition oriented, and host assigned.
59 Operating Modes There are two FICON operating modes: FCV and FC For zSeries and 9672 G5 and G6 servers, there were two modes supported: Bridge Mode, referred to as FCV, which is a channel mode designed to enable access to ESCON interfaces using the Bridge Adapter in the ESCON Director (9032-5). Native Mode, referred to as FC, which enables access to the FICON channel mode from native FICON control units in a point-to-point mode or through a FICON Director. (No cascaded Directors.) For zSeries and System z, support was added for: Native FICON (FC) but not Bridge. Cascaded Directors. Fibre Channel Protocol (FCP), which provided support for open systems via industry standard SCSI devices.
60 What does FCoE do? Lower CapEx and OpEx Consolidate and simplify server connectivity to LAN and SAN Reduce server adapters, cabling, switch ports, and power usage in enterprise data centers [diagram: separate FC HBAs and NICs consolidated onto CNAs carrying both FC and Ethernet traffic over consolidated I/O ports]
61 Unified I/O Usage Case [diagram: CNAs (FCoE and CEE traffic) to top-of-rack FCoE switches, splitting to FC storage SAN and corporate LAN] Unified I/O: one fabric infrastructure FC and CEE L2/L3 multipathing end to end Fewer components to deploy Lower TCO (future) Disk array and tape storage will require Fibre Channel for years to come
62 Ethernet and FC Roadmaps Parallel Evolution & Potential for Convergence 100M 1G 10G 40G 100G? Enet (Ethernet) iscsi FCoE FC (Fibre Channel) 1G 2G 4G 8G 16G 32G? FC and Ethernet evolved in parallel paths with FC dominating storage SANs and Ethernet supporting IP networking Lossless Ethernet & FCoE open the door for server I/O consolidation
63 FCoE & DCB Industry Standards Activities Making 10GbE Lossless T11 FCoE & FIP standards FCIA approved standards INCITS approved and expected to publish soon IEEE draft DCB components 802.1Qbb priority-based flow control 802.1Qaz enhanced transmission selection DCBX capability exchange protocol 802.1Qau Congestion Notification Completion & approvals expected H2 of 2010 IETF status Transparent Interconnection of Lots of Links (TRILL) Expected completion H
64 Mainframe Channel Cards ESCON Channel Card: architecture of 20MBps; sustainable data rate of about 12MBps 74 MBps FICON: the first FICON channel card did not meet customer expectations; SC connector with 2 ports 100 MBps FICON Express (certified for 120MBps FD): the replacement for the poor-performing original card; LC connector with 2 ports 200 MBps FICON Express (certified for 270MBps FD): LC connector with 2 ports 200 MBps FICON Express2 (certified for 270MBps FD): LC connector with 4 ports; I/O performance improvements 400 MBps FICON Express4 (certified for 350MBps FD): LC connector with 4 ports; probable I/O performance improvements
65 ESCON and FICON Standards ESCON Circuit switched, only a single switch allowed ANSI Standard: Information Technology--Single-Byte Command Code Sets CONnection (SBCON) Architecture (formerly ANSI X ) FICON Packet switched, multiple switches are allowed ANSI Standards: Information Technology - Fibre Channel - Single-Byte-2 (FC-SB-2) (formerly ANSI NCITS ) Fibre Channel - Single Byte Command Set - 3 (FC-SB-3) (currently T11 Project 1569-D)
66 Virtualization of Servers (1 of 2) Applications usually underutilize network and server 2 LAN 2 SAN A 2 2 SAN B -Application 1 20% Utilization -Application 2 12% utilization -Application 3 27% utilization -Application 4 8% utilization -Application 5 18% utilization -Application 6 4% utilization FC Link Ethernet Link
67 Virtualization of Servers (2 of 2) Consolidating servers through application virtualization Applications migrate between server 1 and 2 in the event of a failure LAN SAN A SAN B FC Link Ethernet Link Hypervisor for Server 2 Application 3 27% utilization Application 5 18% utilization Total Utilization 45% Hypervisor for Server 1 Application 1 20% Utilization Application 2 12% utilization Application 4 8% utilization Application 6 4% utilization Total Utilization 44%
68 Higher Speeds Most server links still classic 1 Gigabit Ethernet (1GE) Ethernet uplinks moving to 10GE Fibre Channel links moving to 8GFC FCoE moving to 10GE 1GE Copper Standard Ethernet and Fibre Channel Switches 1 or 10GbE Optical LAN 4 or 8GFC Optical 2 2 SAN A 2 2 SAN B FC Link Ethernet Link 8GFC Optical
69 FCoE Moving to 10G CEE at Server Reduce the server links to two 10G CEE links Uplinks are 10GE links with Link Aggregation Fewer ports and switches save power and money 10G CEE with FCoE SFP interface supports Twinax to 1, 3 or 5 meters or Optical to 300 meters FCoE Switches nx10ge Optical LAN SAN A 8GFC Optical SAN B FC Link Ethernet Link
70 THIS SLIDE INTENTIONALLY LEFT BLANK
Fibre Channel Overview of the Technology. Early History and Fibre Channel Standards Development
Fibre Channel Overview from the Internet Page 1 of 11 Fibre Channel Overview of the Technology Early History and Fibre Channel Standards Development Interoperability and Storage Storage Devices and Systems
IOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org
IOmark- VDI Nimbus Data Gemini Test Report: VDI- 130906- a Test Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VDI, VDI- IOmark, and IOmark are trademarks of Evaluator
Data Center Networking Designing Today s Data Center
Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability
Storage Protocol Comparison White Paper TECHNICAL MARKETING DOCUMENTATION
Storage Protocol Comparison White Paper TECHNICAL MARKETING DOCUMENTATION v 1.0/Updated APRIl 2012 Table of Contents Introduction.... 3 Storage Protocol Comparison Table....4 Conclusion...10 About the
N_Port ID Virtualization
A Detailed Review Abstract This white paper provides a consolidated study on the (NPIV) feature and usage in different platforms and on NPIV integration with the EMC PowerPath on AIX platform. February
Lecture 02a Cloud Computing I
Mobile Cloud Computing Lecture 02a Cloud Computing I 吳 秀 陽 Shiow-yang Wu What is Cloud Computing? Computing with cloud? Mobile Cloud Computing Cloud Computing I 2 Note 1 What is Cloud Computing? Walking
Computer Organization & Architecture Lecture #19
Computer Organization & Architecture Lecture #19 Input/Output The computer system s I/O architecture is its interface to the outside world. This architecture is designed to provide a systematic means of
Why Use 16Gb Fibre Channel with Windows Server 2012 Deployments
W h i t e p a p e r Why Use 16Gb Fibre Channel with Windows Server 2012 Deployments Introduction Windows Server 2012 Hyper-V Storage Networking Microsoft s Windows Server 2012 platform is designed for
FCoE Enabled Network Consolidation in the Enterprise Data Center
White Paper FCoE Enabled Network Consolidation in the Enterprise Data Center A technology blueprint Executive Overview This white paper describes a blueprint for network consolidation in the enterprise
Chapter 13 Selected Storage Systems and Interface
Chapter 13 Selected Storage Systems and Interface Chapter 13 Objectives Appreciate the role of enterprise storage as a distinct architectural entity. Expand upon basic I/O concepts to include storage protocols.
Storage Area Network
Storage Area Network 2007 Infortrend Technology, Inc. All rights Reserved. Table of Contents Introduction...3 SAN Fabric...4 Advantages of SAN Solution...4 Fibre Channel SAN vs. IP SAN...4 Fibre Channel
Dell Networking S5000: The Building Blocks of Unified Fabric and LAN/SAN Convergence. Technical Whitepaper
Dell Networking S5000: The Building Blocks of Unified Fabric and LAN/SAN Convergence Dell Technical Marketing Data Center Networking May 2013 THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY CONTAIN
Ethernet Fabric Requirements for FCoE in the Data Center
Ethernet Fabric Requirements for FCoE in the Data Center Gary Lee Director of Product Marketing [email protected] February 2010 1 FCoE Market Overview FC networks are relatively high cost solutions
Upgrading Data Center Network Architecture to 10 Gigabit Ethernet
Intel IT IT Best Practices Data Centers January 2011 Upgrading Data Center Network Architecture to 10 Gigabit Ethernet Executive Overview Upgrading our network architecture will optimize our data center
The Need for Speed Drives High-Density OM3/OM4 Optical Connectivity in the Data Center
The Need for Speed Drives High-Density OM3/OM4 Optical Connectivity in the Data Center Doug Coleman, Manager, Technology & Standards, Corning Cable Systems Optical connectivity with OM3/OM4 laser-optimized
M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2.
M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. What are the different types of virtualization? Explain
Using High Availability Technologies Lesson 12
Using High Availability Technologies Lesson 12 Skills Matrix Technology Skill Objective Domain Objective # Using Virtualization Configure Windows Server Hyper-V and virtual machines 1.3 What Is High Availability?
Computer Network. Interconnected collection of autonomous computers that are able to exchange information
Introduction Computer Network. Interconnected collection of autonomous computers that are able to exchange information No master/slave relationship between the computers in the network Data Communications.
Block based, file-based, combination. Component based, solution based
The Wide Spread Role of 10-Gigabit Ethernet in Storage This paper provides an overview of SAN and NAS storage solutions, highlights the ubiquitous role of 10 Gigabit Ethernet in these solutions, and illustrates
Building High-Performance iscsi SAN Configurations. An Alacritech and McDATA Technical Note
Building High-Performance iscsi SAN Configurations An Alacritech and McDATA Technical Note Building High-Performance iscsi SAN Configurations An Alacritech and McDATA Technical Note Internet SCSI (iscsi)
FICON Extended Distance Solution (FEDS)
IBM ^ zseries Extended Distance Solution (FEDS) The Optimal Transport Solution for Backup and Recovery in a Metropolitan Network Author: Brian Fallon [email protected] FEDS: The Optimal Transport Solution
IP SAN Best Practices
IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.
Overview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA
Overview and Frequently Asked Questions Sun Storage 10GbE FCoE PCIe CNA Overview Oracle s Fibre Channel over Ethernet (FCoE technology provides an opportunity to reduce data center costs by converging
HBA Virtualization Technologies for Windows OS Environments
HBA Virtualization Technologies for Windows OS Environments FC HBA Virtualization Keeping Pace with Virtualized Data Centers Executive Summary Today, Microsoft offers Virtual Server 2005 R2, a software
10GBASE T for Broad 10_Gigabit Adoption in the Data Center
10GBASE T for Broad 10_Gigabit Adoption in the Data Center Contributors Carl G. Hansen, Intel Carrie Higbie, Siemon Yinglin (Frank) Yang, Commscope, Inc 1 Table of Contents 10Gigabit Ethernet: Drivers
PCI Express Overview. And, by the way, they need to do it in less time.
PCI Express Overview Introduction This paper is intended to introduce design engineers, system architects and business managers to the PCI Express protocol and how this interconnect technology fits into
Cisco Nexus 5000 Series Switches: Decrease Data Center Costs with Consolidated I/O
Cisco Nexus 5000 Series Switches: Decrease Data Center Costs with Consolidated I/O Introduction Data centers are growing at an unprecedented rate, creating challenges for enterprises. Enterprise-level
SCSI The Protocol for all Storage Architectures
SCSI The Protocol for all Storage Architectures David Deming, Solution Technology April 12, 2005 Abstract SCSI: The Protocol for All Storage Architectures This session will appeal to System Administrators,
From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller
White Paper From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller The focus of this paper is on the emergence of the converged network interface controller
Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family
Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family White Paper June, 2008 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL
iscsi: Accelerating the Transition to Network Storage
iscsi: Accelerating the Transition to Network Storage David Dale April 2003 TR-3241 WHITE PAPER Network Appliance technology and expertise solve a wide range of data storage challenges for organizations,
IP ETHERNET STORAGE CHALLENGES
ARISTA SOLUTION GUIDE IP ETHERNET STORAGE INSIDE Oveview IP Ethernet Storage Challenges Need for Massive East to West Scalability TCP Incast Storage and Compute Devices Interconnecting at Different Speeds
Overview of Requirements and Applications for 40 Gigabit and 100 Gigabit Ethernet
Overview of Requirements and Applications for 40 Gigabit and 100 Gigabit Ethernet Version 1.1 June 2010 Authors: Mark Nowell, Cisco Vijay Vusirikala, Infinera Robert Hays, Intel 1. This work represents
David Lawler Vice President Server, Access & Virtualization Group
Data Center & Cloud Computing David Lawler Vice President Server, Access & Virtualization Group 2009 Cisco Systems, Inc. All rights reserved. 1 We Are Facing Unparalleled Growth 1.7 billion+ people on
High Speed Ethernet. Dr. Sanjay P. Ahuja, Ph.D. Professor School of Computing, UNF
High Speed Ethernet Dr. Sanjay P. Ahuja, Ph.D. Professor School of Computing, UNF Hubs and Switches Hubs and Switches Shared Medium Hub The total capacity in the shared medium hub configuration (figure
The proliferation of the raw processing
TECHNOLOGY CONNECTED Advances with System Area Network Speeds Data Transfer between Servers with A new network switch technology is targeted to answer the phenomenal demands on intercommunication transfer
Evaluation Report: HP Blade Server and HP MSA 16GFC Storage Evaluation
Evaluation Report: HP Blade Server and HP MSA 16GFC Storage Evaluation Evaluation report prepared under contract with HP Executive Summary The computing industry is experiencing an increasing demand for
Ethernet Fabrics: An Architecture for Cloud Networking
WHITE PAPER www.brocade.com Data Center Ethernet Fabrics: An Architecture for Cloud Networking As data centers evolve to a world where information and applications can move anywhere in the cloud, classic
Server and Storage Consolidation with iscsi Arrays. David Dale, NetApp
Server and Consolidation with iscsi Arrays David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individual members may use this
Using Multipathing Technology to Achieve a High Availability Solution
Using Multipathing Technology to Achieve a High Availability Solution Table of Contents Introduction...3 Multipathing Technology...3 Multipathing I/O Implementations...5 Storage Redundancy...5 Infortrend
Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com
Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c
I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology
I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology Reduce I/O cost and power by 40 50% Reduce I/O real estate needs in blade servers through consolidation Maintain
Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability
White Paper Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability The new TCP Chimney Offload Architecture from Microsoft enables offload of the TCP protocol
How To Design A Data Centre
DATA CENTRE TECHNOLOGIES & SERVICES RE-Solution Data Ltd Reach Recruit Resolve Refine 170 Greenford Road Harrow Middlesex HA1 3QX T +44 (0) 8450 031323 EXECUTIVE SUMMARY The purpose of a data centre is
Brocade One Data Center Cloud-Optimized Networks
POSITION PAPER Brocade One Data Center Cloud-Optimized Networks Brocade s vision, captured in the Brocade One strategy, is a smooth transition to a world where information and applications reside anywhere
Solving I/O Bottlenecks to Enable Superior Cloud Efficiency
WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one
FIBRE CHANNEL PROTOCOL SOLUTIONS FOR TESTING AND VERIFICATION. FCTracer TM 4G Analyzer FCTracer TM 2G Analyzer
FIBRE CHANNEL PROTOCOL SOLUTIONS FOR TESTING AND VERIFICATION FCTracer TM 4G Analyzer FCTracer TM 2G Analyzer LeCroy, a worldwide leader in serial data test solutions, creates advanced instruments that
Cisco Data Center 3.0 Roadmap for Data Center Infrastructure Transformation
Cisco Data Center 3.0 Roadmap for Data Center Infrastructure Transformation Cisco Nexus Family Provides a Granular, Cost-Effective Path for Data Center Evolution What You Will Learn As businesses move
CompTIA Storage+ Powered by SNIA
CompTIA Storage+ Powered by SNIA http://www.snia.org/education/courses/training_tc Course Length: 4 days 9AM 5PM Course Fee: $2,495 USD Register: https://www.regonline.com/register/checkin.aspx?eventid=635346
Network Virtualization for Large-Scale Data Centers
Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning
Fiber Channel Over Ethernet (FCoE)
Fiber Channel Over Ethernet (FCoE) Using Intel Ethernet Switch Family White Paper November, 2008 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR
Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches
Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches February, 2009 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION
FC SAN Vision, Trends and Futures. LB-systems
FC SAN Vision, Trends and Futures LB-systems Legal Disclosure PLEASE SEE RISK FACTORS ON FORM 10-Q AND FORM 10-K FILED WITH THE SEC This presentation, with prepared remarks, includes forward-looking statements
Computer Systems Structure Input/Output
Computer Systems Structure Input/Output Peripherals Computer Central Processing Unit Main Memory Computer Systems Interconnection Communication lines Input Output Ward 1 Ward 2 Examples of I/O Devices
BUILDING A NEXT-GENERATION DATA CENTER
BUILDING A NEXT-GENERATION DATA CENTER Data center networking has changed significantly during the last few years with the introduction of 10 Gigabit Ethernet (10GE), unified fabrics, highspeed non-blocking
Feature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V
Comparison and Contents Introduction... 4 More Secure Multitenancy... 5 Flexible Infrastructure... 9 Scale, Performance, and Density... 13 High Availability... 18 Processor and Memory Support... 24 Network...
PCI Express Impact on Storage Architectures and Future Data Centers. Ron Emerick, Oracle Corporation
PCI Express Impact on Storage Architectures and Future Data Centers Ron Emerick, Oracle Corporation SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies
Oracle Database Scalability in VMware ESX VMware ESX 3.5
Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises
A Whitepaper on. Building Data Centers with Dell MXL Blade Switch
A Whitepaper on Building Data Centers with Dell MXL Blade Switch Product Management Dell Networking October 2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS
Broadcom Ethernet Network Controller Enhanced Virtualization Functionality
White Paper Broadcom Ethernet Network Controller Enhanced Virtualization Functionality Advancements in VMware virtualization technology coupled with the increasing processing capability of hardware platforms
CASE STUDY SAGA - FcoE
CASE STUDY SAGA - FcoE FcoE: New Standard Leads to Better Efficency by Fibre Channel over Ethernet replaces IP and FC networks and enables faster data transfer with speeds of up to 10 Gbps When we talk
Whitepaper. 10 Things to Know Before Deploying 10 Gigabit Ethernet
Whitepaper 10 Things to Know Before Deploying 10 Gigabit Ethernet Table of Contents Introduction... 3 10 Gigabit Ethernet and The Server Edge: Better Efficiency... 3 SAN versus Fibre Channel: Simpler and
How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine
Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest
Brocade Solution for EMC VSPEX Server Virtualization
Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,
ESSENTIALS. Understanding Ethernet Switches and Routers. April 2011 VOLUME 3 ISSUE 1 A TECHNICAL SUPPLEMENT TO CONTROL NETWORK
VOLUME 3 ISSUE 1 A TECHNICAL SUPPLEMENT TO CONTROL NETWORK Contemporary Control Systems, Inc. Understanding Ethernet Switches and Routers This extended article was based on a two-part article that was
Optimizing Infrastructure Support For Storage Area Networks
Optimizing Infrastructure Support For Storage Area Networks December 2008 Optimizing infrastructure support for Storage Area Networks Mission critical IT systems increasingly rely on the ability to handle
Brocade FICON/FCP Intermix Best Practices Guide
Brocade FICON/FCP Intermix Best Practices Guide This guide discusses topics related to mixing FICON and FCP devices in the same Storage Area Network (SAN), focusing on issues that end users need to address.
Storage Area Network and Fibre Channel Protocol Primer
Storage Area Network and Fibre Channel Protocol Primer 1 INTRODUCTION...1 2 SATISFYING THE MARKET S INSATIABLE DEMAND FOR STORAGE...1 3 STORAGE AREA NETWORK (SAN) OVERVIEW...2 3.1 SCSI AND SAN DEVICE ADDRESSING,
