The Widespread Role of 10-Gigabit Ethernet in Storage

This paper provides an overview of SAN and NAS storage solutions, highlights the ubiquitous role of 10 Gigabit Ethernet in these solutions, and illustrates how Arista's 7100 series of switches directly addresses this market. Arista's position is that:

1. 10 Gigabit Ethernet prevails as the mainstream interconnect technology for Cloud Storage, with iSCSI-based block storage and network attached storage (NAS). With their full non-blocking throughput, record density, low latency, and leading TCO, Arista switches are ideal for Cloud Storage applications.

2. FCoE is aimed at organizations that run a high-end FC SAN and are interested in LAN and SAN convergence. FCoE is at an early stage and requires special extensions to standard Ethernet (such as Priority Flow Control). Arista switches support these extensions and are ideally suited to carry FCoE traffic.

3. InfiniBand, with iSER, is a niche technology for storage, solving tactical problems for those requiring the lowest latency or the highest performance.

Introduction

A wide range of storage solutions exists in the market today, utilizing various approaches and a wide range of technologies. The table below summarizes the different options:

Access:                   Block-based, file-based, combination
Interconnect technology:  Ethernet, Fibre Channel, InfiniBand
Interconnect method:      Dedicated, shared
Packaging:                Component-based, solution-based

Depending on their access method, storage systems are categorized as Storage Area Networks (SAN) and Network Attached Storage solutions (NAS). In SANs, storage devices, although remote, appear as locally attached to the client, and access to storage is block-based. In contrast, in a NAS system, clients access files remotely using a network-based file system.
Storage Area Networks

A Storage Area Network (SAN) is an architecture whereby servers access remote disk blocks across a dedicated interconnect. Most SANs use the SCSI protocol to communicate between the servers and the disks. Various interconnect technologies can be used, each of them requiring a specific SCSI mapping protocol, as shown in the table below:

Interconnect technology    Mapping protocol
Fibre Channel (FC)         FCP (Fibre Channel Protocol)
TCP/IP over Ethernet       iSCSI
Ethernet                   FCoE
InfiniBand                 iSER

Fibre Channel Protocol

Today, the majority of SANs use FCP to map SCSI over a dedicated Fibre Channel network. (See figure below.) Enterprises deploying Fibre Channel therefore run multiple networks: the LAN, which typically uses Ethernet technology (Ethernet is a basic component of 85% of all networks worldwide, and is one of the most ubiquitous network protocols in existence), and the dedicated FC network.
iSCSI

One reason for FCP's success lies in shortcomings the iSCSI protocol suffered in its early deployments. Ethernet technology shortcomings in supporting storage applications only made matters worse. In the past few years, these shortcomings have been resolved through a series of improvements:

1. Ethernet technology has featured 10-fold improvements over the past few years:

   a. The advent of 10 Gigabit Ethernet has increased the bandwidth of the Ethernet interconnect by a factor of 10. While One Gigabit Ethernet was at a disadvantage compared to 2 Gbps Fibre Channel, 10 Gigabit Ethernet runs faster than 8 Gbps Fibre Channel. The Arista 7100 switches support non-blocking 10 Gigabit throughput on each and every port.

   b. While Fibre Channel pricing has remained high, 10 Gigabit Ethernet pricing has gone down by a factor of 10. A 10 Gigabit Ethernet port that cost thousands of dollars a few years ago costs only a few hundred dollars today. On a cost-per-gigabit basis, 10 Gigabit Ethernet is now priced at one third to one half the cost of Fibre Channel. The Arista 7100 switch has a list price below $500 per 10 Gigabit Ethernet port.

   c. The density of Ethernet switches has improved by a factor of 10: while a typical 10 Gigabit Ethernet chassis once housed 50 10 Gigabit Ethernet ports in a 10U form factor, Arista's 7148SX switch today offers 48 10 Gigabit Ethernet ports in a 1U form factor, or 2016 10 Gigabit Ethernet ports in a standard 42U rack. These improvements in density facilitate building scalable SAN networks connecting hundreds of servers to hundreds of disk devices.

2. The IETF has resolved iSCSI's shortcomings by adding to the protocol the full error recovery features required for storage applications.

3. NIC vendors have developed Network Interface Adapters that are fully optimized for iSCSI.
A large portion of the TCP/IP processing can be offloaded to a specialized chip in the adapter itself, significantly reducing CPU utilization during iSCSI transfers.

4. Powerful modern multi-core CPUs can easily handle the heavy TCP/IP processing that occurs during iSCSI transfers.

As a result, iSCSI (see figure below) has become a suitable alternative for SANs, avoiding the need for a dedicated Fibre Channel network and dedicated Fibre Channel staff, hence significantly reducing operating expenses and, in turn, Total Cost of Ownership (TCO). Arista predicts that iSCSI will continue gaining ground as the preferred option for SANs.
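The cost and density claims above can be checked with a quick calculation. A minimal sketch follows: the $500 10GbE port price and the 48-port/1U density come from the text, while the 8 Gbps Fibre Channel port price used here is an assumed, illustrative figure.

```python
# Cost per gigabit of port bandwidth for each interconnect.
# The $500 10GbE port price is cited in the text; the Fibre
# Channel port price below is a hypothetical figure chosen
# only to illustrate the comparison.

def cost_per_gbps(port_price_usd, port_speed_gbps):
    """Price of one gigabit per second of port bandwidth."""
    return port_price_usd / port_speed_gbps

ethernet_10g = cost_per_gbps(500, 10)      # $50 per Gbps
fibre_channel_8g = cost_per_gbps(1200, 8)  # $150 per Gbps (assumed price)

ratio = ethernet_10g / fibre_channel_8g
print(f"10GbE: ${ethernet_10g:.0f}/Gbps, 8G FC: ${fibre_channel_8g:.0f}/Gbps, ratio {ratio:.2f}")

# Density arithmetic from the text: 48 ports per 1U switch, 42 RU per rack
ports_per_rack = 48 * 42
print(ports_per_rack)  # 2016
```

With these assumed prices the ratio lands at one third, inside the "one third to one half" range quoted above; the 2016-port rack figure follows directly from the stated 48-port 1U density.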
FCoE

More recently, FCoE was introduced as an alternative to iSCSI, to address the needs of high-end Fibre Channel SAN customers interested in SAN and LAN convergence. The FCoE protocol is essentially an encapsulation of FCP over Ethernet. FCoE enables enterprise customers accustomed to Fibre Channel to run the Fibre Channel Protocol directly over their LAN Ethernet network, allowing them to consolidate their LAN and storage traffic onto the same network infrastructure.

For FCoE to work, enhancements to the Ethernet protocol are needed to ensure that:

1. Storage traffic is adequately separated from other traffic running on the LAN.
2. No storage packets are dropped, as the Fibre Channel Protocol is notoriously slow at recovering from packet loss.

The IEEE has created multiple working groups, under the umbrella of CEE (Converged Enhanced Ethernet), tasked with producing the standards that implement these enhancements:

1. 802.1Qbb: Priority Flow Control
2. 802.1Qaz: Enhanced Transmission Selection (bandwidth allocation among traffic classes)
3. 802.1Qau: Congestion Notification

These standards are expected to be ratified in 2010. Until then, Arista predicts that interoperability will be a challenge and the market for FCoE will remain small. Arista switches will fully support these standards when they are ratified. In the meantime, Arista supports a subset of these enhancements in pre-standard implementations and is capable of carrying FCoE traffic today.
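To illustrate the idea behind Enhanced Transmission Selection, here is a minimal sketch (not Arista's implementation, and with hypothetical class names and weights) of how per-class bandwidth weights divide up a 10 Gbps link:

```python
# Illustrative sketch of Enhanced Transmission Selection (802.1Qaz):
# each traffic class is assigned a bandwidth weight, and link capacity
# is divided in proportion to those weights. The class names and
# weights below are hypothetical examples, not a recommended config.

LINK_GBPS = 10.0

weights = {"fcoe_storage": 50, "lan_data": 40, "management": 10}

def ets_allocation(weights, link_gbps):
    """Guaranteed bandwidth per traffic class, proportional to its weight."""
    total = sum(weights.values())
    return {cls: link_gbps * w / total for cls, w in weights.items()}

alloc = ets_allocation(weights, LINK_GBPS)
print(alloc)  # {'fcoe_storage': 5.0, 'lan_data': 4.0, 'management': 1.0}
```

Unlike strict priority scheduling, ETS lets a class borrow bandwidth left idle by the others; the weights only set the guaranteed minimum each class receives under congestion.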
iSER

iSER (iSCSI Extensions for RDMA) runs over InfiniBand. While InfiniBand offers a point advantage in terms of its performance/price ratio, it also suffers from a number of disadvantages that make it less desirable in the context of a storage solution:

- Contrary to Ethernet, InfiniBand is an exotic technology that requires specific expertise.
- Contrary to Ethernet, management tools for InfiniBand are limited, which adds installation complexity and in turn increases total cost of ownership.
- Contrary to Ethernet, InfiniBand technology is single-sourced, and investment in single-sourced technology entails significant risks.

For the reasons above, InfiniBand is unlikely to play a role beyond the high-end HPC and academic/research markets.

Network Attached Storage

An increasingly popular method for consolidating storage resources is Network Attached Storage (NAS). A NAS appliance is a server whose purpose is to supply file-based data storage services to other devices on the network. NAS provides remote file-system I/O, where the file request is redirected over a network (see figure).
NAS is recognized for three principal benefits, which in combination lower overall TCO:

1. Storage consolidation
2. Deployment simplicity
3. Ease of management

NAS systems have evolved to support, over a standard Ethernet network, the storage tiering, high performance, and high availability that had previously been available only in SANs. This, combined with its TCO advantages, has made NAS an increasingly adopted solution in the enterprise.

Co-existing NAS and SAN

Although NAS was traditionally considered a dedicated appliance with its own internal storage (e.g., SATA or SAS drives with RAID support), organizations are increasingly choosing to implement a NAS gateway in place of an appliance, often when a SAN already exists (see figure). Furthermore, NAS gateways can be clustered together physically via a high-speed interconnect such as 10 Gigabit Ethernet, providing the ability to scale storage horizontally by adding NAS heads. Logically, the NAS gateways are then interconnected via a clustered file system such as IBRIX Fusion, providing a single global namespace for all storage elements associated with the cluster, which can still be accessed via file-based protocols like CIFS and NFS.

Arista predicts that the demands on NAS solutions for low-latency, high-throughput performance will be ever-increasing, driven by:

- The rise of new Web-based application architectures in the data center
- The increasing use of virtualization tools to consolidate servers, and
- The increasing use of HPC in core mission-critical applications.

Arista's 7100 series of switches directly addresses these issues by providing non-blocking 10 Gbps Ethernet throughput with low latency on all ports, dramatically improving overall network performance and thus improving I/O throughput for Cloud Storage.
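As a rough illustration of the single-global-namespace idea, the sketch below (purely hypothetical, and not how IBRIX Fusion or any particular product works) hashes each file path to one of several NAS heads, so clients address files by path alone while the data is spread across the cluster:

```python
# Hypothetical sketch of a clustered NAS namespace: every file path
# belongs to one global namespace but is placed on one of several
# NAS heads. Real clustered file systems use far more sophisticated
# placement and rebalancing; this only illustrates the concept.
import hashlib

NAS_HEADS = ["nas-head-1", "nas-head-2", "nas-head-3"]  # assumed host names

def place(path, heads=NAS_HEADS):
    """Deterministically map a path in the global namespace to a NAS head."""
    digest = hashlib.sha256(path.encode()).digest()
    return heads[int.from_bytes(digest[:8], "big") % len(heads)]

# Clients see one namespace; the cluster resolves which head serves each file.
for p in ["/projects/a/report.doc", "/home/user/data.csv"]:
    print(p, "->", place(p))
```

Because the mapping is deterministic, any head can compute where a file lives without consulting a central directory, which is one reason such clusters scale horizontally by simply adding heads.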
Conclusion

10 Gbps Ethernet is playing an increasingly significant role in the SAN and NAS markets, for three principal reasons:

1. An order-of-magnitude performance improvement over the previous generation of network connectivity, which has made iSCSI-based SANs and NAS performance leaders versus traditional Fibre Channel implementations.

2. Significant reductions in network TCO, due to the commoditization and ubiquity of Ethernet and IP, as well as the reduction in cabling cost and complexity due to the increased number of servers and storage elements that can be supported per link.

3. The ability to consolidate storage traffic along with server and network traffic onto a single unified fabric in the data center, taking full advantage of the many tools and years of administrative experience associated with managing Ethernet networks in the enterprise environment.

Arista's 7100 family of 10 Gbps Ethernet switches is uniquely suited to address the needs of Cloud Storage, featuring:

1. Non-blocking 10 Gbps Ethernet throughput on every port, with sub-microsecond latency.
2. The lowest price per port on the market.
3. The highest port density per RU on the market (48 10GbE ports).
4. Highly resilient EOS software, featuring self-healing and live-patching capabilities.
5. Highly available hardware, featuring redundant, hot-swappable fans and power supplies.