WHITE PAPER
www.brocade.com

STORAGE AREA NETWORK

Brocade Solutions for iSCSI Storage Area Networks

Brocade solutions provide the flexibility, reliability, and ease of use needed for lower-cost Internet Small Computer Systems Interface (iSCSI) Storage Area Networks (SANs), from small businesses to enterprise-class data centers, with the Brocade FCX Series, TurboIron 24X Switch, BigIron RX Series, and VDX Series Ethernet switches.
In order to satisfy the ever-growing needs of enterprises for digital storage, computer data storage has evolved from Direct-Attached Storage (DAS) to Network-Attached Storage (NAS) and SANs. Traditionally, SANs have been based on Fibre Channel (FC) networks, but iSCSI SANs are gaining greater acceptance in data centers because of lossless 10 Gigabit Ethernet (GbE), flexibility, low cost, ease of use, and the ability to meet objectives for remote backup and disaster recovery. Challenges for iSCSI storage networks include the need to ensure that the network performs to meet business requirements, the need to fine-tune switches for iSCSI environments and, finally, the complexity involved with scaling Ethernet networks for storage. Four Brocade solutions address the need for simplicity, scalability, and performance:

• A low-cost, high-performance 1 GbE iSCSI SAN
• Scaling from 1 GbE to 10 GbE using stackable building blocks
• Highly scalable, enterprise-class 1 GbE and 10 GbE for nonstop iSCSI SAN availability
• A lossless solution with next-generation Brocade VCS Fabric technology

Brocade FCX Series switches, the TurboIron 24X, BigIron RX Series, and VDX Series switches are ideal platforms to deploy in iSCSI SANs. Key technology features, such as IronStack stacking, Symmetric Flow Control (SFC), sFlow, large buffers, Data Center Bridging (DCB), Layer 2 multipathing, and enterprise-class reliability, ensure a robust, high-performance storage network that exhibits many of the characteristics of Fibre Channel SANs. Brocade switches provide reliability, low latency, and edge-to-core 10 GbE support, as well as the ability to grow as the environment grows.
THE EVOLUTION TO SHARED STORAGE

Over the years, organizations have typically met their data storage needs first with DAS and, as their needs increased, with NAS and finally SANs.

Direct-Attached Storage is the most common method for connecting storage to a computer. Using this method, a hard drive or group of hard drives is directly connected to a server using a cable, as shown in Figure 1. The storage can be internal or external to the server.

Figure 1. Direct-Attached Storage.

Network-Attached Storage (NAS) comprises storage arrays connected to servers that provide shared file access to the storage resources over a TCP/IP network, as shown in Figure 2. The NAS storage space or resources are presented in the form of network shared resources.

Figure 2. Network-Attached Storage.

Whereas NAS provides file-based storage access, a SAN provides a shared storage architecture for block data storage, where storage resources are shared among many servers. As shown in Figure 3, SAN switches provide the network that enables resource sharing.

Figure 3. Storage Area Network.
Why iSCSI?

Most large companies deploy Fibre Channel (FC) SANs because of their high performance, low latency, high reliability, and the many benefits of shared networked block data storage. FC SANs are specifically designed to meet the demanding requirements of storage applications, and they are the solution of choice for most business-critical applications. They do, however, create a specialized network separate from an organization's IT infrastructure, making the deployment of FC SANs better suited to larger organizations that have the IT resources needed to deploy and manage two separate networks.

Some of the key reasons why the iSCSI standard was developed for SANs include the following:

• Low cost: Small and medium-sized businesses choose iSCSI SAN technology because of low deployment and maintenance costs. Since these businesses already have Ethernet networks in place, they need fewer additional employees and less training to deploy and manage an IP-based storage network, and these cost savings extend into the server with the use of lower-cost network interface connections.

• Flexibility: Unlike purpose-built networks, iSCSI relies on TCP/IP networks for reliable data transport. This provides growing businesses with the flexibility they need to grow their network storage infrastructure without having to worry about deploying or managing separate SANs. They can leverage a common Ethernet infrastructure for all their storage networking needs.

• Disaster recovery: Since iSCSI uses IP for data transfer across networks, it is a very good choice for disaster recovery. Customers with two or more offices can use iSCSI to transfer data over long distances to protect against unforeseen events.

• Ease of use: The process of enabling iSCSI initiators, such as specialized Network Interface Cards (NICs), Host Bus Adapters (HBAs), or software drivers on existing NICs, is transparent to applications, file systems, and operating systems. Software initiators are built into all major operating systems today and need only be configured. Different platforms can use the same storage pool in iSCSI SANs, which improves utilization and efficiency as well.

Introduction to iSCSI

iSCSI is an IP-based encapsulation protocol that encapsulates SCSI commands so that they can be transported over a TCP/IP network from a host (initiator) to target SCSI devices. iSCSI relies on TCP for flow control and on IP routing to ensure reliable data transfers over long distances. It allows IT staff to utilize a single underlying Ethernet infrastructure to meet both IT and storage networking needs.

The iSCSI protocol is formally defined in RFC 3720 of the Internet Engineering Task Force (IETF). It is based on the standardized SCSI Architectural Model (SAM) and fits between the SCSI layer and TCP in the Open Systems Interconnection (OSI) stack, shown in Figure 4. iSCSI encapsulates SCSI Protocol Data Units (PDUs) in TCP segments in the TCP/IP model.

The SCSI protocol is commonly used in data centers to connect SCSI devices, such as hard disks and tape drives. It defines commands, protocols, and interfaces for transferring data between computers and peripherals. As system interconnects move from classical shared bus structures to serial network transports, SCSI has to be mapped to network transport protocols. This is done by iSCSI, which maps SCSI commands onto TCP/IP for data transfer, as the sketch below illustrates.
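To make the encapsulation concrete, the following is a minimal, hedged sketch (not a working initiator) of how a SCSI READ(10) command is wrapped in the fixed 48-byte iSCSI Basic Header Segment (BHS) defined by RFC 3720, which is then carried as ordinary payload on a TCP connection (the default iSCSI target port is 3260). Field offsets follow RFC 3720; the LBA, task tag, and sequence numbers are made-up example values.

```python
# Illustrative sketch: a block-storage command becomes bytes in a TCP stream.
import struct

def scsi_read10_cdb(lba: int, blocks: int) -> bytes:
    """SCSI READ(10) CDB: opcode 0x28, 4-byte LBA, 2-byte transfer length (in blocks)."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def iscsi_scsi_command_bhs(cdb: bytes, edtl: int, itt: int, cmdsn: int, expstatsn: int) -> bytes:
    """Wrap a SCSI CDB in the 48-byte iSCSI Basic Header Segment (SCSI Command PDU)."""
    bhs = bytearray(48)
    bhs[0] = 0x01                              # opcode: SCSI Command (initiator to target)
    bhs[1] = 0x80 | 0x40 | 0x01                # flags: Final + Read + SIMPLE task attribute
    bhs[4] = 0                                 # TotalAHSLength: no additional header segments
    bhs[5:8] = (0).to_bytes(3, "big")          # DataSegmentLength: no immediate data
    bhs[8:16] = (0).to_bytes(8, "big")         # LUN 0
    bhs[16:20] = itt.to_bytes(4, "big")        # Initiator Task Tag
    bhs[20:24] = edtl.to_bytes(4, "big")       # Expected Data Transfer Length (bytes)
    bhs[24:28] = cmdsn.to_bytes(4, "big")      # CmdSN
    bhs[28:32] = expstatsn.to_bytes(4, "big")  # ExpStatSN
    bhs[32:32 + len(cdb)] = cdb                # SCSI CDB field (16 bytes, zero-padded)
    return bytes(bhs)

BLOCK_SIZE = 512                               # assumed logical block size
cdb = scsi_read10_cdb(lba=2048, blocks=16)     # read 16 blocks starting at LBA 2048
pdu = iscsi_scsi_command_bhs(cdb, edtl=16 * BLOCK_SIZE, itt=1, cmdsn=1, expstatsn=1)
assert len(pdu) == 48                          # the BHS is always 48 bytes; it would be
                                               # written to a TCP socket (default port 3260)
```

Everything beyond the BHS (login negotiation, digests, data PDUs) is omitted here; the point is simply that a block-storage command is reduced to bytes carried in a TCP stream.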
Figure 4. iSCSI stack based on the OSI model (SCSI layer, iSCSI, TCP, IP, Link Layer/Ethernet).

CHALLENGES IN iSCSI SANS

iSCSI SANs use the TCP protocol for data transmission, which is a byte-stream protocol. Data storage transfers, on the other hand, are typically associated with fixed-length data blocks. When transferring large amounts of data, the byte-stream nature of TCP results in inefficient data transfer and additional buffer overhead at the receiving nodes (a short sketch following the list below illustrates this reassembly overhead). An ideal Ethernet switch deployed in iSCSI environments has deep buffers to accommodate the iSCSI SAN requirement for large data transfers without data loss.

Like Fibre Channel SANs, iSCSI SANs utilize significant network bandwidth, which, in the case of iSCSI, creates a burden on existing LANs. In order to meet organizational demands for performance and Service Level Agreements (SLAs), iSCSI storage vendors recommend that iSCSI SANs be physically and logically separate from the organization's primary LAN, for the following reasons:

• Performance: GbE connections have limited bandwidth, so multiple connections out of the storage array, and sometimes multiple connections out of the server as well, must be used to handle traffic. One big 10 GbE pipe delivers the additional needed bandwidth and also allows configuration simplicity that helps network administrators.

• Tuning: Ethernet switches that were not designed with iSCSI environments in mind have to be tuned by IT personnel, eroding the simplicity advantages of an iSCSI-based network.

• Scalability: Scaling Ethernet networks for storage can be complex and has always been a challenge. Stacking addresses this from a 1 GbE perspective, but challenges will continue to increase as the environment grows and corporate mandates escalate.

• Error-free data transport: iSCSI traffic consists of large blocks of data sent in bursts. In order to achieve best performance, iSCSI needs to consistently deliver error-free streams of ordered data. When iSCSI encounters errors or congestion, it retries the transmissions and reorders data, which adds to TCP/IP overhead and impacts SAN performance.

• Monitoring and traffic optimization: Storage traffic is sporadic and requires high bandwidth, which may create congestion on certain links or switches within the iSCSI SAN. The ability to monitor and collect this data in real time becomes critical in order to analyze and take corrective action to ensure storage traffic continuity.
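As a small illustration of the byte-stream point above, the sketch below (assumed 64 KB block size, purely illustrative) shows the reassembly loop a receiver must run because TCP preserves no record boundaries: a fixed-length storage block may arrive split across many segments and has to be buffered until complete.

```python
# Minimal sketch of receiver-side reassembly over a byte-stream transport.
import socket

BLOCK_SIZE = 64 * 1024  # illustrative block size, not an iSCSI-mandated value

def recv_block(sock: socket.socket, length: int = BLOCK_SIZE) -> bytes:
    """Read exactly `length` bytes, looping over partial TCP reads."""
    chunks, received = [], 0
    while received < length:
        chunk = sock.recv(length - received)
        if not chunk:                      # peer closed mid-block
            raise ConnectionError("connection closed before full block arrived")
        chunks.append(chunk)
        received += len(chunk)
    return b"".join(chunks)
```

Deep switch buffers reduce how often segments are dropped and retransmitted while this reassembly is in flight.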
In general, most Ethernet switch vendors do not engage in developing storage architectures. Their domain of expertise, and the resulting product offering, is specialized for LAN-based traffic. Since the profile of LAN-based traffic is very different from that of storage-based traffic, selecting the right vendor and Ethernet switches is critical. Brocade storage networks are running in more than 90 percent of Global 1000 data centers. Brocade has been in the storage business for a decade and a half, so customers can be confident that Brocade iSCSI solutions are backed by solid networking and storage networking experience and expertise.

The biggest challenge for the network administrator in deploying an iSCSI SAN is fine-tuning the network; of course, this assumes that the switch has the architecture and the ability to be fine-tuned for an iSCSI SAN. So, once the equipment arrives in the data center, it needs to be configured and optimized for performance. Brocade provides field-proven products and trusted advice on configuring both the devices and the network itself for stability, reliability, and performance.

Brocade iSCSI Solutions

Brocade offers four iSCSI solutions that address customer challenges:

• A 1 GbE iSCSI SAN featuring simple deployment and wire-speed performance
• Scaling a 1 GbE environment to 10 GbE with stackable building blocks
• A highly scalable, enterprise-class 1 GbE and 10 GbE solution for nonstop iSCSI SAN availability
• A lossless Brocade VCS Fabric solution with next-generation virtual cluster switching

Simple, High-Performance, Scalable 1 GbE iSCSI SAN

Small to medium environments can take advantage of a simple, high-performance storage network that is easy to deploy and manage and that provides wire-speed performance. This pay-as-you-grow, building-block approach features seamless scalability with switch stacking. Brocade Ethernet switches are purpose-built for the data center and deliver optimal power efficiency and power redundancy.

Figure 5. 1 GbE iSCSI SAN reference architecture (LAN, FCX stack, 1 GbE storage, 1 GbE servers).
The Brocade FCX Series of Ethernet switches, with the symmetric flow control capabilities described later in this paper, is ideal for this solution. Using Brocade IronStack technology, organizations can stack up to eight top-of-rack switches horizontally into a single logical switch, simplifying management in the network access layer. With the stacking capabilities of the FCX, administrators can focus on storage resources instead of tedious and complex networking operations.

Scaling 1 GbE to 10 GbE

iSCSI SANs enable organizations to consolidate and virtualize storage resources while leveraging standard Ethernet infrastructure to free up storage space that was previously tied to specific servers in DAS configurations, increasing storage utilization. With business application demands increasing continuously, along with the desire to virtualize all IT resources, the need for higher-performance storage is growing. Storage solutions built with Brocade Ethernet switches allow businesses to seamlessly scale iSCSI environments with high-performance 10 GbE storage in an easy-to-administer and cost-effective manner.

Figure 6. 10 GbE reference architecture showing both 1 and 10 GbE configurations (LAN, FCX stack, LAG/trunk to TurboIron 24X, 1 GbE and 10 GbE storage and servers).

IT organizations often leverage stackable 1 GbE Ethernet switches in their iSCSI SANs, providing an easy-to-scale, virtualized network for storage traffic. Stacking is critical to ensure that 10 GbE connections can be added to the network when the need for 10 GbE performance arises. Brocade stackable switches provide high-performance 10 GbE ports, allowing the stack to be connected into a switch that predominantly includes high-speed 10 GbE ports. The network can be expanded efficiently with 10 GbE connectivity, removing the need to create a new network dedicated to 10 GbE iSCSI storage. The new high-performance 10 GbE ports can also be used to connect both storage and servers into the existing iSCSI network, as the rough calculation below illustrates.
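One rough way to reason about when stack uplinks toward the 10 GbE tier become the constraint is a simple oversubscription check. The port counts below are assumptions for illustration, not a Brocade sizing rule.

```python
# Back-of-the-envelope oversubscription of a stack's uplink LAG (assumed numbers).
server_ports_1gbe = 48          # assumed 1 GbE edge ports in the stack
uplink_10gbe_links = 4          # assumed 10 GbE LAG members toward the storage tier
edge_bandwidth_gbps = server_ports_1gbe * 1
uplink_bandwidth_gbps = uplink_10gbe_links * 10
oversubscription = edge_bandwidth_gbps / uplink_bandwidth_gbps
print(f"oversubscription ratio: {oversubscription:.1f}:1")   # 1.2:1 in this example
```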
NetIron MLX

The Brocade NetIron MLX supports Multi-Chassis Trunking (MCT), which allows two NetIron MLX switches to appear as a single entity to the switches they are connected to. From a server perspective, connecting to two switches provides resiliency, but with the configuration and management simplicity of a single switch. Since the VCS fabric appears as a single entity, a logical one-to-one connection is created between the edge fabric and the two NetIron MLX switches, eliminating the need for Spanning Tree Protocol (STP) and increasing network utilization.

Highly Scalable Enterprise-Class 1 GbE and 10 GbE for Nonstop iSCSI SAN Availability

In situations where IT organizations are looking to build highly scalable 1 GbE and 10 GbE iSCSI SANs, a single enterprise-class chassis approach is more suitable. A single Brocade BigIron RX chassis can scale up to 768 1 GbE or 256 10 GbE ports. Virtual Output Queuing (VOQ), scalable wire-speed performance, and a very large packet buffer enable continuous traffic flow, avoiding packet drops in mismatched 1 GbE and 10 GbE server/storage environments; the rough calculation at the end of this section illustrates why that headroom matters. This yields fewer pause requests, resulting in lower latency and faster delivery. In addition, IT organizations benefit from simplifying the infrastructure by reducing network tiers with a single-chassis approach, as well as needing no optics or cabling for interconnect or stacking (see Figure 7). With half-slot modules that can be leveraged across the three different chassis, IT has the flexibility to minimize spares and easily add new 1 GbE/10 GbE port capacity.

The BigIron RX allows enterprises to build simplified iSCSI SANs for medium to large deployments, as storage network architects can fully utilize the maximum capacity of the switch without running into backplane or stacking bottlenecks. This provides ultimate flexibility in designing for various demanding applications and large 1 GbE/10 GbE port connectivity without concerns for traffic congestion, packet drops, or performance degradation. Lastly, the BigIron RX lets you architect a dedicated network for iSCSI traffic or a shared LAN/IP SAN with logical traffic separation. Hardware and software resiliency, hitless upgrade and failover, and Continuous System Monitoring improve network uptime and enable businesses to run more efficiently and nonstop.

Figure 7. Single BigIron RX chassis for iSCSI 1 GbE/10 GbE and LAN connectivity (1 Gigabit servers, 1 Gigabit storage, 10 Gigabit servers and storage, converged LAN/SAN network, 10 GbE reserved for future expansion).

Further simplifying network management, Brocade Network Advisor allows the iSCSI network and the rest of the data center LAN to be managed from a single pane of glass. Brocade provides scalable and flexible 1 and 10 GbE network infrastructures that allow servers and storage to communicate efficiently in a cost-effective manner.

In both 1 and 10 GbE environments, Brocade provides data integrity for the iSCSI flow from server to storage in two ways:

• A 32-bit Cyclic Redundancy Check (CRC) on Ethernet frames, which maintains data integrity across the wire
• Brocade enterprise-class switch architecture, which maintains data integrity across the switch

See details under "Data Security from Server to Storage" later in this paper.
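The following back-of-the-envelope calculation (assumed burst length and line rates, not a BigIron RX specification) shows why mismatched 1 GbE/10 GbE server and storage links need either deep buffers or flow control: a 10 Gbps sender bursting toward a 1 Gbps receiver drains at only 1 Gbps, so the switch must absorb the difference for the life of the burst or signal PAUSE.

```python
# Rough buffer estimate for a speed-mismatched burst (illustrative numbers only).
burst_ms = 1.0                       # assumed burst duration
ingress_gbps, egress_gbps = 10, 1    # 10 GbE source feeding a 1 GbE destination port
excess_bits = (ingress_gbps - egress_gbps) * 1e9 * (burst_ms / 1e3)
print(f"buffer needed to absorb a {burst_ms} ms burst: {excess_bits / 8 / 1024:.0f} KiB")
# -> roughly 1,100 KiB; without that headroom the switch either drops frames or
#    must pause the sender earlier, which is what VOQ and large buffers help avoid.
```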
Deploying a Lossless Next-Generation 10 Gbps iSCSI SAN

As described earlier, Ethernet is a best-effort network that may drop frames or deliver frames out of order when the network or devices are busy. In IP networks, transport reliability has traditionally been the responsibility of transport protocols such as TCP, which carry greater complexity and processing overhead, with a resulting impact on performance and throughput. Data Center Bridging (DCB) adds extensions to Ethernet to provide reliability without incurring the penalties of TCP. DCB makes Ethernet a viable lossless transport for storage and server cluster traffic.

This lossless next-generation 10 Gbps iSCSI SAN solution leverages IEEE 802.1Qbb Priority-Based Flow Control (PFC) to avoid dropped frames and maximize network efficiency. It optimizes SCSI communication and minimizes the effects of TCP, making iSCSI flows more reliable. As a result, iSCSI flows are better balanced on high-bandwidth 10 Gbps links. For investment protection, the ability to carry iSCSI and LAN IP traffic on the same link makes it possible to logically converge the iSCSI SAN and the IP LAN onto a single network. The iSCSI application Type-Length-Value (TLV) instructs the server to place iSCSI flows on any of the eight available PFC priorities, which separates storage traffic from other IP traffic. Enhanced Transmission Selection (ETS), IEEE 802.1Qaz, allows network bandwidth to be allocated to iSCSI traffic, or to specific iSCSI flows dedicated to a priority, as the sketch following Figure 8 illustrates.

The Brocade NetIron MLX Series, shown in Figure 8, is key to this solution architecture. Designed to enable reliable converged infrastructures and support mission-critical applications, the NetIron MLX Series features an advanced, redundant switch fabric architecture for very high availability. Converged Network Adapters (CNAs) leverage IEEE standards for DCB, providing a highly efficient, low-latency, lossless, and deterministic way to transport storage traffic over Ethernet links, addressing the highly sensitive nature of storage traffic.

Figure 8. Next-generation 10 GbE converged SAN reference architecture (NetIron MLX with MCT, WAN, VCS Fabric technology, 1/10 Gbps servers, 1 Gbps iSCSI storage, 10 Gbps DCB servers and iSCSI storage).
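A hypothetical data model (not actual NetIron MLX or VDX CLI, nor any real API) can summarize what the DCBX exchange described above conveys to a CNA: which 802.1p priority is made lossless via PFC, the iSCSI application TLV that steers TCP port 3260 traffic onto that priority, and the ETS bandwidth split between traffic classes. The priority number and percentages are assumptions for illustration.

```python
# Hypothetical representation of a DCB configuration advertised via DCBX.
dcb_config = {
    "pfc_enabled_priorities": [4],          # assumed: priority 4 carries lossless iSCSI
    "iscsi_app_tlv": {"protocol": "TCP", "port": 3260, "priority": 4},
    "ets_bandwidth_percent": {              # must sum to 100 across traffic classes
        "iscsi (priority 4)": 60,
        "lan (priorities 0-3)": 30,
        "other (priorities 5-7)": 10,
    },
}
assert sum(dcb_config["ets_bandwidth_percent"].values()) == 100
```

The design point is that the server learns these settings from the switch, so storage traffic lands on the lossless priority without per-host manual tuning.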
Deploying Brocade iSCSI SANs gives you two logically independent SANs with redundancy for High Availability (HA), without two physically separate networks. A new technology called Transparent Interconnection of Lots of Links (TRILL) is being developed by the IETF standards body to provide Layer 2 multipath capability, which will further enhance the high availability of a single physical fabric and be a key factor for deploying VCS fabrics in the data center.

Brocade provides all the components for iSCSI SAN solutions with wire speed on all ports, low latency, and high reliability. In addition, Brocade introduces the concept of the Ethernet fabric, which makes data flow more consistent and more predictable and shields the network from the wave effect of TCP. This solution, in fact, is the first phase in the introduction of next-generation Ethernet technology, which includes Brocade VCS Fabric technology and TRILL.

Virtual Cluster Switching

The Ethernet fabric, part of Brocade VCS Fabric technology, is a Layer 2 Ethernet technology that improves iSCSI SAN utilization, maximizes application availability, increases scalability, and dramatically simplifies the iSCSI SAN architecture. Brocade VDX switches support Data Center Bridging (DCB) to provide lossless Ethernet storage traffic, combined with Ethernet fabric capabilities, to build intelligent and efficient iSCSI SAN architectures. The VDX switches automatically discover each other and create a fully functional, lossless iSCSI SAN Ethernet fabric with no manual configuration. Scaling out the network topology, and scaling up bandwidth across switches, is as easy as adding an interswitch cable. The embedded Layer 2 management intelligence of the SAN Ethernet fabric automatically reconverges the network, avoiding manual intervention by IT staff.

THE BROCADE TECHNOLOGY ADVANTAGE

The next sections explore several technology advantages of Brocade Ethernet switches for optimizing performance and enhancing security in an iSCSI-based storage network.

Smarter Buffer Management

Symmetric Flow Control (SFC): In addition to asymmetric flow control, Brocade FCX Series switches support symmetric flow control, meaning that they can both receive and transmit 802.3x PAUSE frames based on values that are pretested for optimal iSCSI performance (the sketch at the end of this section shows the frame format involved). Symmetric Flow Control, supported on standalone FCX switches as well as on all FCX switches in an IronStack, addresses the requirements of a lossless service class in a dedicated iSCSI SAN environment. This feature ensures that large sequential data transfers are accomplished reliably, efficiently, and with the lowest latency through the dedicated iSCSI SAN network.

By default, Brocade IronWare software allocates a certain number of buffers to the outbound transmit queue for each port, based on Quality of Service (QoS) priority (traffic class). The buffers control the total number of packets permitted in the port's outbound transmit queue. For each port, the device defines the maximum number of outbound transmit buffers, also called queue depth limits.

Total transmit queue depth limit: This refers to the total maximum number of transmit buffers allocated for all outbound packets on a port. Packets are added to the port's outbound queue as long as the number of buffers currently used is less than the total transmit queue depth limit. All ports are configured with a default number of buffers and PAUSE thresholds.
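For reference, the sketch below builds the IEEE 802.3x PAUSE frame that symmetric flow control both transmits and honors: a MAC control frame with EtherType 0x8808, opcode 0x0001, and a pause time expressed in 512-bit-time quanta. The source MAC and pause value are made-up examples; real switches generate these frames in hardware when buffer thresholds such as the queue depth limits above are crossed.

```python
# Minimal sketch of an IEEE 802.3x PAUSE (MAC control) frame.
import struct

def pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    dst_mac = bytes.fromhex("0180c2000001")       # MAC-control multicast address
    ethertype = struct.pack(">H", 0x8808)         # MAC Control
    opcode = struct.pack(">H", 0x0001)            # PAUSE
    pause_time = struct.pack(">H", pause_quanta)  # units of 512 bit times; 0xFFFF = maximum
    frame = dst_mac + src_mac + ethertype + opcode + pause_time
    return frame + bytes(60 - len(frame))         # pad to the minimum Ethernet frame size

frame = pause_frame(bytes.fromhex("02aabbccddee"), pause_quanta=0xFFFF)
assert len(frame) == 60
```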
iSCSI TLV: iSCSI TLV interactions between VDX switches, servers, and storage instruct each iSCSI node to place iSCSI flows on any of the eight available PFC priorities, effectively separating storage traffic from other IP traffic, such as network management.

Guaranteed bandwidth: Enhanced Transmission Selection (ETS) in VDX switches allocates bandwidth to iSCSI traffic, or to specific iSCSI flows, based on assigned priorities, letting you ensure that mission-critical applications have the best performance.

Buffer Management on IronStack

By default, the Brocade FCX stackable switch architecture allocates fixed buffers on a per-priority-queue, per-packet-processor basis. In instances of heavy traffic bursts to aggregation links, such as the traffic in stacking configurations or mixed-speed environments, momentary oversubscription of buffers and descriptors may occur, which can lead to dropped frames during egress queuing. FCX stackable switches provide the capability to allocate additional egress buffers and descriptors to handle momentary bursty traffic, especially when other priority queues may not be in use or may not be experiencing heavy levels of traffic. This allows users to allocate and fine-tune the depth of the priority buffer queues for each packet processor.

iSCSI SAN Management

Brocade Network Advisor: Brocade Network Advisor delivers the industry's first unified network management solution for data, storage, and converged networks from a single easy-to-use application. It provides a comprehensive tool for configuring, managing, monitoring, and reporting on data center networks, with robust Role-Based Access Control (RBAC), automation, operational simplicity, and an open architecture. This reduces Operating Expense (OpEx) and protects investments through a unified, train-once user experience.

Brocade Management Plug-In for VMware vCenter: This plug-in enables proactive SAN monitoring and helps administrators achieve end-to-end visibility. Also, the open architecture with industry-standard APIs (SMI-S, Web Services, NETCONF, and SNMP) enables seamless integration with partner orchestration frameworks and service delivery platforms.

sFlow: Embedded sFlow (RFC 3176) in Brocade platforms allows Brocade Network Advisor to collect data for LAN and SAN traffic analysis and monitoring. This provides unprecedented visibility into the network, which helps reduce downtime. Because sFlow is embedded in hardware, there is no impact on regular data traffic when collecting the data.

Data Security from Server to Storage

TCP/IP is used as the transport protocol for iSCSI because it offers the ability to deliver data as an error-free, ordered byte stream. However, TCP/IP relies on error-checking mechanisms that are not always strong enough to guarantee that the received data is, in fact, correct. This is not a frequent problem, but it is frequent enough that organizations relying on iSCSI SANs for mission-critical data cannot ignore the potential for errors.

The TCP checksum (a 16-bit checksum) is implemented from the iSCSI source (initiator or target) to the iSCSI destination (initiator or target), whereas the Ethernet CRC-32 checksum is implemented across the wire, that is, between two network nodes (initiator and switch, switch and switch, switch and target). The CRC-32 checksum is a strong detector of errors; the frame is discarded if the data is incorrect, letting TCP/IP retransmit the packet. Unfortunately, neither protection mechanism detects errors that occur within a switch, a fact that highlights the value of reliable, enterprise-class switches.
When data is corrupted within a switch, the TCP checksum should detect most errors, but mission-critical SANs require a higher level of protection, such as the optional iSCSI CRC-32 digest. When iSCSI detects an error that TCP/IP has not detected, iSCSI retransmits without help from TCP/IP, which generates additional overhead and significantly impacts the performance of SANs, as the short example below demonstrates.
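The example below (illustrative payload, not iSCSI data) demonstrates the gap in detection strength: the 16-bit Internet checksum that TCP uses is order-independent, so swapping two 16-bit words corrupts the data without changing the checksum, while a 32-bit CRC of the kind used on Ethernet frames and by the optional iSCSI digests catches the change.

```python
# Worked illustration: a corruption the 16-bit checksum misses but CRC-32 catches.
import zlib

def internet_checksum(data: bytes) -> int:
    """RFC 1071-style ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return (~total) & 0xFFFF

good = bytes.fromhex("1122334455667788")
bad = bytes.fromhex("3344112255667788")            # first two 16-bit words swapped

assert internet_checksum(good) == internet_checksum(bad)   # corruption goes undetected
assert zlib.crc32(good) != zlib.crc32(bad)                  # corruption is caught
```

This is also why intra-switch integrity matters: the Ethernet CRC is checked and regenerated at each hop, so an error introduced inside a switch is protected only by the weaker end-to-end checksum.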
Brocade switches are built on enterprise-grade switch components, which ensure intra-switch data integrity as data moves from switch ingress to egress. The combination of the Ethernet CRC-32 across Ethernet links, reliable data transport across the switches, and the TCP checksum virtually eliminates the possibility that iSCSI will need to attempt a retry.

BROCADE ETHERNET SWITCHES FOR iSCSI SANS

Choosing an Ethernet switch for iSCSI SANs should be an easy decision, since Ethernet switches are standards-based and any Ethernet switch can carry iSCSI traffic. However, the iSCSI switch you choose must ensure reliability, performance, and low latency, and it should position the network well for future growth. Brocade offers a broad portfolio of Ethernet switches that are ideal for iSCSI networks.

Brocade Ethernet switches for iSCSI are designed for wire speed with no oversubscription and are built for high-value environments. Featuring redundant management modules, redundant fans, redundant load-sharing switch fabrics, and redundant power supply modules, Brocade products for the enterprise data center are designed for maximum system availability. Brocade switches provide Layer 2 and Layer 3 capabilities that maximize high availability at the protocol level, restoring services in subsecond time to avoid costly service disruption. These high-availability capabilities allow network operators to deploy highly reliable network infrastructures that are resilient to and tolerant of network and equipment failures.

FCX Series

Brocade FCX Series switches can be stacked to seamlessly scale the network when more servers and storage are needed. The FCX is a wire-speed, non-blocking switch that offers 24 or 48 10/100/1000 Megabits per second (Mbps) Ethernet ports. It also provides an option for a redundant power supply and offers reversible airflow. The FCX switch offers 4-port, 10 Gigabits per second (Gbps) modules, which can be used for uplink connectivity or for stacking multiple FCX switches. For FCX Series product details, see www.brocade.com/fcx.

Figure 9. FCX Series.

TurboIron 24X Switch

The Brocade TurboIron 24X Switch is a compact, high-performance, high-availability, and high-density 10 GbE solution that meets mission-critical data center requirements. It can support 1 GbE servers until they are upgraded to 10 GbE-capable Network Interface Cards (NICs), simplifying migration to 10 GbE server farms. For TurboIron 24X Switch product details, see www.brocade.com/turboiron24x.

Figure 10. TurboIron 24X Switch.
BigIron RX Series

The Brocade BigIron RX Series of switches provides over one billion packets per second of performance for cost-effective scaling in data center deployments, with hardware-based IP routing scaling to 512,000 IP routes per line module. The high-availability design features redundant and hot-pluggable hardware, hitless software upgrades, and graceful BGP and OSPF restart. For BigIron RX Series product details, see www.brocade.com/bigironrx.

Figure 11. BigIron RX Series.

VDX Series

Leveraging Brocade VCS technology, Brocade VDX 6720 Data Center Switches provide the foundation for the Ethernet fabric, revolutionizing the design of Layer 2 networks and enabling cloud-optimized networking. For VDX Series product details, see www.brocade.com/vdx6720.

Figure 12. VDX 6720 Switches.

NetIron MLX Series

The Brocade NetIron MLX Series of advanced routers provides industry-leading 10 GbE and 1 GbE port density, wire-speed performance, and a rich set of Layer 3 routing features, as well as advanced Layer 2 switching capabilities. The NetIron MLX Series includes models that are available in 4-slot, 8-slot (shown in Figure 13), 16-slot, and 32-slot configurations. The series offers industry-leading port capacity and density, with up to 256 10 GbE or 1,536 1 GbE ports in a single system. For NetIron MLX Series product details, see www.brocade.com/mlx.

Figure 13. NetIron MLX-8 switching router.
CONCLUSION

iSCSI Storage Area Networks provide small, medium, and large businesses with the ability to deploy and manage a holistic SAN to fulfill their IT and storage requirements. iSCSI SANs are low-cost networks that provide flexibility and ease of use. Since iSCSI is based on TCP/IP protocols, standard Ethernet switches can be used in iSCSI environments for connectivity. However, for best performance, the switches must provide reliability, low latency, and edge-to-core 10 GbE support, as well as the ability to scale as the environment grows.

Brocade FCX Series switches are ideal for iSCSI-based storage networks in small to medium-sized environments and at the edge of large environments. The TurboIron 24X is a 1 GbE/10 GbE combination switch that can provide 10 GbE migration for existing 1 GbE deployments. BigIron RX Series switches are suitable for medium to large 1 GbE/10 GbE enterprise-class deployments. The VDX Series provides enhanced lossless storage traffic for small to very large 1 GbE/10 GbE iSCSI SANs. The table below summarizes the solutions and capabilities available to meet the demands of small to very large iSCSI SANs.

Features                    FCX            TurboIron 24X   BigIron RX     VDX
Storage connectivity        iSCSI/NAS      iSCSI/NAS       iSCSI/NAS      iSCSI/NAS/FCoE (Fibre Channel over Ethernet)
IP SAN connectivity         1 GbE          1 GbE/10 GbE    1 GbE/10 GbE   1 GbE/10 GbE
IP SAN scalability          Small-Medium   Small-Medium    Medium-Large   Small-Very Large
Flow control                Per Port       Per Port        Per Port       Per Flow
Lossless iSCSI              Per Switch     Per Switch      Per Chassis    Across Fabric
sFlow
iSCSI TLV                   X              X               X
Ease of IP SAN expansion

To learn more, visit www.brocade.com.

Corporate Headquarters, San Jose, CA USA, T: +1-408-333-8000, info@brocade.com
European Headquarters, Geneva, Switzerland, T: +41-22-799-56-40, emea-info@brocade.com
Asia Pacific Headquarters, Singapore, T: +65-6538-4700, apac-info@brocade.com

© 2012 Brocade Communications Systems, Inc. All Rights Reserved. 01/12 GA-WP-1495-02. Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, MLX, SAN Health, VCS, and VDX are registered trademarks, and AnyIO, Brocade One, CloudPlex, Effortless Networking, ICX, NET Health, OpenScript, and The Effortless Network are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.