The Wide Spread Role of 10-Gigabit Ethernet in Storage

This paper provides an overview of SAN and NAS storage solutions, highlights the ubiquitous role of 10 Gigabit Ethernet in these solutions, and illustrates how Arista's 7100 series of switches directly addresses this market. Arista's position is that:

1. 10 Gigabit Ethernet prevails as the mainstream interconnect technology for cloud storage, both for iSCSI-based block storage and for network attached storage (NAS). With their full non-blocking throughput, record density, low latency, and leading TCO, Arista switches are ideal for cloud storage applications.
2. FCoE is aimed at organizations that run a high-end FC SAN and are interested in LAN and SAN convergence. FCoE is at an early stage and requires special extensions to standard Ethernet (such as Priority Flow Control). Arista switches support these extensions and are ideally suited to carry FCoE traffic.
3. InfiniBand, with iSER, is a niche technology for storage, solving tactical problems for those requiring the lowest latency or the highest performance.

Introduction

A wide range of storage solutions exists in the market today, utilizing various approaches and a wide range of technologies. The table below summarizes the options:

Access                   Block-based, file-based, or a combination
Interconnect technology  Ethernet, Fibre Channel, InfiniBand
Interconnect method      Dedicated, shared
Packaging                Component-based, solution-based

Depending on their access method, storage systems are categorized as Storage Area Networks (SAN) or Network Attached Storage (NAS) solutions. In a SAN, storage devices, although remote, appear as locally attached to the client, and access to storage is block-based. In contrast, in a NAS system, clients access files remotely using a network-based file system.
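To make the block-based versus file-based distinction concrete, here is a minimal Python sketch that reads the same kind of data both ways: as raw sectors from a block device (how a SAN LUN appears to the host) and as a named file through a mounted network file system (how NAS presents storage). The device path, mount point, and file name are hypothetical.

```python
import os

SECTOR_SIZE = 512  # bytes per logical block on most disks

def read_blocks(device_path: str, lba: int, count: int) -> bytes:
    """SAN-style access: read `count` raw sectors starting at logical block `lba`."""
    fd = os.open(device_path, os.O_RDONLY)
    try:
        os.lseek(fd, lba * SECTOR_SIZE, os.SEEK_SET)
        return os.read(fd, count * SECTOR_SIZE)
    finally:
        os.close(fd)

def read_file(mount_point: str, name: str) -> bytes:
    """NAS-style access: read a named file through a network file system mount."""
    with open(os.path.join(mount_point, name), "rb") as f:
        return f.read()

# A SAN presents a block device (e.g., an iSCSI LUN shows up as /dev/sdb);
# a NAS presents a file tree (e.g., an NFS or CIFS share mounted at /mnt/nas).
# data = read_blocks("/dev/sdb", lba=2048, count=8)
# data = read_file("/mnt/nas", "reports/q3.bin")
```

The client-side difference is the whole point: with a SAN the client runs its own file system on top of remote blocks, while with NAS the file system lives on the server.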

Storage Area Networks

A Storage Area Network (SAN) is an architecture whereby servers access remote disk blocks across a dedicated interconnect. Most SANs use the SCSI protocol to communicate between the servers and the disks. Various interconnect technologies can be used, each requiring a specific SCSI mapping protocol, as shown in the table below:

Interconnect technology  Mapping protocol
Fibre Channel (FC)       FCP (Fibre Channel Protocol)
TCP/IP over Ethernet     iSCSI
Ethernet                 FCoE
InfiniBand               iSER

Fibre Channel Protocol

Today, the majority of SANs use FCP to map SCSI over a dedicated Fibre Channel network. (See figure below.) Enterprises deploying Fibre Channel therefore operate multiple networks: the LAN, which typically uses Ethernet technology (Ethernet is a basic component of 85% of all networks worldwide and one of the most ubiquitous network protocols in existence), and the dedicated FC network.

iSCSI

One reason for FCP's success lies in the shortcomings the iSCSI protocol suffered in its early deployments. Ethernet technology's own shortcomings in supporting storage applications only made matters worse. In the past few years, these shortcomings have been resolved through a series of improvements:

1. Ethernet technology has featured 10-fold improvements over the past few years:
   a. The advent of 10 Gigabit Ethernet has increased the bandwidth of the Ethernet interconnect by a factor of 10. While One Gigabit Ethernet was at a disadvantage compared to 2 Gbps Fibre Channel, 10 Gigabit Ethernet runs faster than 8 Gbps Fibre Channel. The Arista 7100 switches support non-blocking 10 Gigabit throughput on each and every port.
   b. While Fibre Channel pricing has remained high, 10 Gigabit Ethernet pricing has dropped by a factor of 10. A 10 Gigabit Ethernet port that cost thousands of dollars a few years ago costs only a few hundred dollars today. On a cost-per-gigabit basis, 10 Gigabit Ethernet is now priced at one third to one half the cost of Fibre Channel. The Arista 7100 switch has a list price below $500 per 10 Gigabit Ethernet port.
   c. The density of Ethernet switches has improved by a factor of 10: while a typical 10 Gigabit Ethernet chassis once housed 50 ports in a 10U form factor, Arista's 7148SX switch today offers 48 10 Gigabit Ethernet ports in a 1U form factor, or 2016 10 Gigabit Ethernet ports in a standard 42U rack. These improvements in density facilitate building scalable SAN networks connecting hundreds of servers to hundreds of disk devices.
2. The IETF has resolved iSCSI's shortcomings by adding to the protocol the full error recovery required for storage applications.
3. NIC vendors have developed network interface adapters that are fully optimized for iSCSI. A large portion of the TCP/IP processing can be offloaded to a specialized chip on the adapter itself, significantly reducing CPU utilization during iSCSI transfers.
4. Powerful modern multi-core CPUs can easily handle the heavy TCP/IP processing that occurs during iSCSI transfers.

As a result, iSCSI (see figure below) has become a suitable alternative for SANs, avoiding the need for a dedicated Fibre Channel network and dedicated Fibre Channel staff, hence significantly reducing operating expenses and, in turn, Total Cost of Ownership (TCO). Arista predicts iSCSI will continue gaining ground as the preferred option for SANs.
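The essence of iSCSI is that a SCSI command travels over an ordinary TCP connection (iSCSI's registered TCP port is 3260) instead of a dedicated Fibre Channel link. The Python sketch below conveys that layering with a deliberately simplified, hypothetical request header; real iSCSI PDUs, including the login phase and the error-recovery levels mentioned above, are defined in RFC 3720.

```python
import socket
import struct

def toy_block_read(target_ip: str, lun: int, lba: int, blocks: int) -> bytes:
    """Illustrative only: fetch `blocks` 512-byte sectors over plain TCP."""
    with socket.create_connection((target_ip, 3260)) as sock:
        # Hypothetical fixed-size request: opcode, LUN, starting LBA, block count.
        # 0x28 is the SCSI READ(10) opcode; the framing around it is invented.
        request = struct.pack("!BBQI", 0x28, lun, lba, blocks)
        sock.sendall(request)
        expected = blocks * 512
        data = b""
        while len(data) < expected:
            chunk = sock.recv(expected - len(data))
            if not chunk:
                raise ConnectionError("target closed the connection mid-transfer")
            data += chunk
        return data
```

Because the transport is plain TCP/IP, every improvement listed above (10 Gigabit links, TCP offload engines, faster CPUs) directly benefits iSCSI without any change to the protocol itself.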

FCoE

More recently, FCoE was introduced as an alternative to iSCSI, to address the needs of high-end Fibre Channel SAN customers interested in SAN and LAN convergence. The FCoE protocol is essentially an encapsulation of FCP over Ethernet. FCoE enables enterprise customers accustomed to Fibre Channel to run the Fibre Channel Protocol directly over their LAN Ethernet network, allowing them to consolidate their LAN and storage traffic onto the same network infrastructure. For FCoE to work, enhancements to the Ethernet protocol are needed to ensure:

1. Storage traffic is adequately separated from other traffic running on the LAN.
2. No storage packets are dropped, as the Fibre Channel Protocol is notoriously slow at recovering from packet loss.

The IEEE has created multiple working groups under the umbrella of CEE (Converged Enhanced Ethernet), tasked with producing the standards that implement these enhancements:

1. 802.1Qbb: Priority Flow Control
2. 802.1Qaz: Enhanced Transmission Selection (bandwidth allocation among traffic classes)
3. 802.1Qau: Congestion Notification

These standards are expected to be ratified in 2010. Until then, Arista predicts that interoperability will be a challenge and the market for FCoE will remain small. Arista switches will fully support these standards when they are ratified. In the meantime, Arista supports a subset of these enhancements in pre-standard implementations and is capable of carrying FCoE traffic today.
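To show what "encapsulation of FCP over Ethernet" means at the frame level, the hedged Python sketch below wraps an unmodified Fibre Channel frame in an Ethernet header carrying a VLAN tag (whose priority bits are what 802.1Qbb Priority Flow Control pauses per class) and the registered FCoE EtherType 0x8906. The FCoE version/SOF header and EOF trailer are omitted for brevity, and all addresses are hypothetical.

```python
import struct

ETHERTYPE_8021Q = 0x8100  # VLAN tag; its PCP bits select the PFC priority class
ETHERTYPE_FCOE = 0x8906   # registered EtherType for FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, priority: int,
                     vlan_id: int, fc_frame: bytes) -> bytes:
    """Wrap a raw FC frame in an 802.1Q-tagged Ethernet frame (simplified)."""
    assert len(dst_mac) == 6 and len(src_mac) == 6
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # 802.1Q tag control information
    frame = dst_mac + src_mac
    frame += struct.pack("!HH", ETHERTYPE_8021Q, tci)
    frame += struct.pack("!H", ETHERTYPE_FCOE)
    # Real FCoE inserts a version/SOF header here and an EOF trailer after
    # the FC frame; both are omitted in this sketch.
    return frame + fc_frame
```

The priority field is the hook for requirements 1 and 2 above: storage frames ride a dedicated priority class, and PFC pauses only that class when buffers fill, so FCP never sees a dropped packet while ordinary LAN traffic continues to flow.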

iSER

iSER (iSCSI Extensions for RDMA) runs over InfiniBand. While InfiniBand provides a point advantage in its performance/price ratio, it also suffers from a number of disadvantages that make it less desirable in the context of a storage solution. Contrary to Ethernet, InfiniBand is an exotic technology that requires specific expertise. Contrary to Ethernet, management tools for InfiniBand are limited, which adds installation complexity and in turn increases total cost of ownership. Contrary to Ethernet, InfiniBand technology is single-sourced, and investment in single-sourced technology entails significant risks. For these reasons, InfiniBand is unlikely to play a role beyond the high-end HPC and academic/research markets.

Network Attached Storage

An increasingly popular method for consolidating storage resources is Network Attached Storage (NAS). A NAS appliance is a server whose purpose is to supply file-based data storage services to other devices on the network. NAS performs remote file-system I/O: the file request is redirected over the network (see figure).

NAS is recognized for three principal benefits, which in combination lower overall TCO:

1. Storage consolidation
2. Deployment simplicity
3. Ease of management

NAS systems have evolved to support, over a standard Ethernet network, the storage tiering, high performance, and high availability that had previously been available only in SANs. This, combined with its TCO advantages, has made NAS an increasingly adopted solution in the enterprise.

Co-existing NAS and SAN

Although NAS was traditionally considered a dedicated appliance with its own internal storage (e.g., SATA or SAS drives with RAID support), organizations are increasingly choosing to implement a NAS gateway in place of an appliance, often when a SAN already exists (see figure). Furthermore, NAS gateways can be clustered together physically via a high-speed interconnect such as 10 Gigabit Ethernet, providing the ability to scale storage horizontally by adding NAS heads. Logically, the NAS gateways are then interconnected via a clustered file system, providing a single global namespace for all storage elements associated with the cluster (a sketch of this single-namespace idea appears at the end of this section); the storage can still be accessed via file-based protocols such as CIFS and NFS, or via a clustered file system client such as IBRIX Fusion. Arista predicts that the demands on NAS solutions for low-latency, high-throughput performance will be ever-increasing, driven by:

1. The rise of new Web-based application architectures in the data center
2. The increasing use of virtualization tools to consolidate servers
3. The increasing use of HPC in core mission-critical applications

Arista's 7100 series of switches directly addresses these demands by providing non-blocking 10 Gbps Ethernet throughput with low latency on all ports, dramatically improving overall network performance and, in turn, I/O throughput for cloud storage.
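The "single global namespace" that a clustered file system provides can be pictured as a mapping from each path to the NAS head that owns it, so several heads present themselves as one file server. The Python sketch below uses hashing as one hypothetical placement policy; it illustrates the concept and is not a description of IBRIX Fusion or any other product.

```python
import hashlib

NAS_HEADS = ["nas-head-1", "nas-head-2", "nas-head-3"]  # hypothetical cluster

def owning_head(path: str) -> str:
    """Map a path in the global namespace to the head responsible for it."""
    digest = hashlib.sha256(path.encode()).digest()
    return NAS_HEADS[int.from_bytes(digest[:4], "big") % len(NAS_HEADS)]

# Clients see one namespace; the cluster routes each request internally,
# and adding a head scales capacity and throughput horizontally.
print(owning_head("/projects/q3/report.bin"))
```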

Conclusion

10 Gbps Ethernet plays an increasingly significant role in the SAN and NAS markets for three principal reasons:

1. An order-of-magnitude performance improvement over the previous generation of network connectivity, which has made iSCSI-based SANs and NAS performance leaders versus traditional Fibre Channel implementations.
2. Significant reductions in network TCO, due to the commoditization and ubiquity of Ethernet and IP, as well as the reduction in cabling cost and complexity due to the increased number of servers and storage elements that can be supported per link.
3. The ability to consolidate storage along with server and network traffic onto a single unified fabric in the data center, taking full advantage of the many tools and years of administrative experience associated with managing Ethernet networks in the enterprise.

Arista's 7100 family of 10 Gbps Ethernet switches is uniquely suited to address the needs of cloud storage, featuring:

1. Non-blocking 10 Gbps Ethernet throughput on every port, with sub-microsecond latency.
2. The lowest price per port on the market.
3. The highest port density per RU on the market (48 10GbE ports).
4. Highly resilient EOS software, featuring self-healing and live-patching capabilities.
5. Highly available hardware, featuring redundant, hot-swappable fans and power supplies.