White Paper: Top Ten Reasons to Use Emulex OneConnect iSCSI Adapters




Internet Small Computer System Interface (iSCSI) storage has typically been viewed as a good option for small- and medium-sized businesses or branch-office deployments. However, as shown below, advances in the iSCSI infrastructure have accelerated growth of the iSCSI market and led to wider adoption of iSCSI storage for larger data centers.

[Figure: iSCSI worldwide revenue ($B), 2008-2014. Source: IDC (2010)]

This white paper presents a Top Ten list of key trends that are driving iSCSI as a strategic technology for networked storage and highlights important benefits provided by Emulex OneConnect 10 Gigabit Ethernet (10GbE) iSCSI adapters.

1. Cost Savings

iSCSI has earned a reputation as the low-cost SAN. Specifically, iSCSI combines lower upfront capital expense with ongoing operational savings when compared to Fibre Channel. iSCSI is based on standard Ethernet technology and can be used with existing switches and cables that support an IP-based network. IT staff can also leverage their IP networking expertise to support an iSCSI SAN, reducing training and personnel costs and simplifying management.

2. 10 Gigabit Ethernet

iSCSI was introduced for 1GbE networks, with multipathing and link aggregation enabling support for up to 4Gbps with multiple adapters. At the same time, Fibre Channel storage transitioned from 2Gbps to 4Gbps, and then from 4Gbps to 8Gbps. Not surprisingly, iSCSI was typically used for less demanding applications that did not require the bandwidth that Fibre Channel provided.
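The bandwidth positions described here come down to simple arithmetic over nominal link rates. A minimal sketch (raw link speeds only, ignoring protocol and encoding overhead, so these are upper bounds rather than measured throughput):

```python
# Nominal link speeds in Gbps; overhead is ignored, so these are
# upper bounds on throughput, not benchmark results.
ISCSI_LINKS = {"1GbE": 1, "1GbE x4 (multipath)": 4, "10GbE": 10}
FC_LINKS = {"2G FC": 2, "4G FC": 4, "8G FC": 8}

def fastest(links):
    """Return (name, Gbps) of the fastest link in a dict."""
    return max(links.items(), key=lambda kv: kv[1])

iscsi_name, iscsi_gbps = fastest(ISCSI_LINKS)
fc_name, fc_gbps = fastest(FC_LINKS)
print(f"{iscsi_name}: {iscsi_gbps} Gbps vs {fc_name}: {fc_gbps} Gbps")
# With 10GbE, the nominal iSCSI link rate exceeds 8G Fibre Channel.
assert iscsi_gbps > fc_gbps
```

This is the crossover the next paragraph describes: multipathed 1GbE only matched 4G Fibre Channel, while a single 10GbE link nominally exceeds 8G Fibre Channel.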

The bandwidth comparison between iSCSI and Fibre Channel storage has changed with the introduction of new 10GbE iSCSI adapters and storage arrays. For the first time, iSCSI can run with higher bandwidth than Fibre Channel, and iSCSI bandwidth leadership should continue with future transitions to 40Gbps Ethernet and 16Gbps Fibre Channel.

3. Server Virtualization

Server virtualization has become a mainstream technology for optimizing resources and reducing costs in the data center. A recent survey [1] of 1,602 data centers for midmarket (500 to 999 employees) and enterprise-class (1,000 employees or more) organizations showed that nearly three-quarters (74%) use server virtualization. An additional 19% of organizations are in the evaluation or planning phase.

[Figure: Is your organization currently using x86 server virtualization technology? With production servers: 61%; With test/dev servers: 13%; Evaluating or piloting: 12%; Planning: 7%; No plans: 7%]

The full benefits of server virtualization are only realized with networked storage. This includes key features such as virtual machine (VM) migration to optimize server resources and support for disaster recovery. iSCSI can be used with all of the major server virtualization platforms and is fully supported with all storage-related features. When combined with 10GbE networks and new multi-core servers, data centers have the network and storage bandwidth, and the CPU resources, to reach much higher virtualization ratios.

4. Hardware iSCSI

A dedicated iSCSI adapter provides significant performance improvements relative to a network interface card (NIC) with an iSCSI software initiator. These include reduced CPU usage with full offload and hardware acceleration for iSCSI, TCP Offload Engine (TOE) and TCP/IP processing. iSCSI adapters also use a separate software stack that optimizes performance for operating systems and hypervisors.
In contrast, software initiators work with the same TCP software stack that's used for all network traffic, creating the potential for bottlenecks and contention for server resources. The performance benefits of a hardware iSCSI adapter are particularly critical for optimizing virtual server deployments. To help quantify this benefit, Emulex Labs conducted a series of benchmark tests evaluating the maximum number of VMs that could run concurrently at a constant I/O rate, using the Emulex OCe11102-I iSCSI hardware adapter and a software initiator with a standard NIC. The maximum number of VMs was reached when the I/O throughput dropped below the constant rate of 10,000 I/Os per second (IOPS) for 4KB block sizes and 5,000 IOPS for 8KB block sizes. The Emulex OCe11102-I iSCSI adapter ran 58% more VMs using a 4KB block size and 10,000 IOPS per VM. With an 8KB block size and 5,000 IOPS per VM, the OCe11102-I ran 53% more VMs.

[1] Enterprise Storage Group, The Evolution of Server Virtualization, November 2010
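The per-VM rates in the benchmark above translate directly into aggregate bandwidth. A minimal sketch of that arithmetic (the 25-VM count below is illustrative, not one of Emulex's measured results):

```python
def aggregate_mb_per_s(vms, iops_per_vm, block_kb):
    """Aggregate throughput in MB/s for `vms` machines, each sustaining
    `iops_per_vm` I/Os per second at `block_kb` KB per I/O."""
    return vms * iops_per_vm * block_kb / 1024

# One VM at the benchmark's 4KB / 10,000 IOPS workload:
per_vm = aggregate_mb_per_s(1, 10_000, 4)   # ~39 MB/s per VM
# An illustrative 25 concurrent VMs at that rate:
total = aggregate_mb_per_s(25, 10_000, 4)   # ~977 MB/s

# Roughly 1 GB/s aggregate -- close to the usable rate of a single 10GbE
# port, which is why offloading iSCSI processing matters at scale.
print(f"{per_vm:.1f} MB/s per VM, {total:.0f} MB/s for 25 VMs")
```

At these VM densities the host is pushing near line rate, so CPU cycles spent on TCP and iSCSI processing in a software initiator come directly out of the VM budget.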

[Figure: Maximum concurrent VMs, Emulex OCe11102-I vs. 10GbE NIC, at 4KB blocks / 10,000 IOPS and 8KB blocks / 5,000 IOPS per VM]

5. Boot from SAN

Boot from SAN (BFS) is a key iSCSI feature that allows servers to boot from a remote disk. Although iSCSI BFS is supported with software iSCSI initiators and NICs, it's much easier to deploy and manage with an iSCSI hardware adapter. With a software iSCSI initiator, the NIC passes parameters during the boot process that the host uses to establish a connection with the iSCSI target. The process is somewhat different for each operating system or hypervisor. With an iSCSI adapter, BFS configuration is done in a pre-boot environment using the same technologies that have been perfected over many years with Fibre Channel SANs. Reliability is enhanced when the same connection and settings for the iSCSI target are used before booting, during the boot process and when the operating system or hypervisor is running. Because BFS is managed pre-boot, it works the same way for any operating system or hypervisor.

6. iSCSI over DCB

As a best-efforts networking technology, Ethernet occasionally drops packets, which then require retransmission. Although not a concern for most Ethernet traffic, this can be a significant problem for applications that require consistently high performance. When iSCSI traffic contends with other I/O on the network, there can be packet losses that lead to retransmissions and higher latency. The solution can be found in the Data Center Bridging (DCB) standards, which enable lossless Ethernet and virtually eliminate retransmission of dropped packets. DCB standards were developed by the Institute of Electrical and Electronics Engineers (IEEE) and were originally used to support Fibre Channel over Ethernet (FCoE).
iSCSI solutions are now available based on DCB standards that include the following:

- Data Center Bridging Capability Exchange (DCBX): exchange capabilities and configuration of DCB features between devices (IEEE 802.1ab)
- Priority-based Flow Control (PFC): manage I/O between initiator and target on a multi-protocol Ethernet link, enabling lossless Ethernet (IEEE 802.1Qbb)
- Enhanced Transmission Selection (ETS): allocate bandwidth based on priority groups (IEEE 802.1Qaz)
- iSCSI TLV: separate iSCSI traffic from LAN traffic in an Ethernet flow (IEEE 802.1ab)

In addition to consistent low-latency performance, DCBX supports priority groups and bandwidth allocations that can be assigned in DCB-compliant switches. The switch communicates these settings to server adapters and storage arrays for automatic configuration, reducing the cost and complexity of maintaining large-scale deployments.
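ETS-style bandwidth allocation is easy to reason about with a small sketch. This illustrates only the concept; the group names and percentages are assumptions for the example, not values from this paper or from any switch's defaults:

```python
# Illustrative ETS allocation: the shares assigned to priority groups
# must not exceed 100% of the link; in real ETS, bandwidth unused by an
# idle group can be borrowed by busy groups.
LINK_GBPS = 10

def ets_allocate(groups, link_gbps=LINK_GBPS):
    """Map {group: percent} to {group: Gbps}, validating the total."""
    total = sum(groups.values())
    if total > 100:
        raise ValueError(f"ETS shares sum to {total}%, exceeding the link")
    return {name: link_gbps * pct / 100 for name, pct in groups.items()}

# Hypothetical priority groups sharing a converged 10GbE link:
shares = {"iscsi": 40, "vm-network": 40, "management": 20}
print(ets_allocate(shares))
# {'iscsi': 4.0, 'vm-network': 4.0, 'management': 2.0}
```

In a DCB deployment this policy lives in the switch and is pushed to adapters via DCBX, so a guaranteed storage share follows the server automatically.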

7. Universal Multi-Channel NIC Partitioning

Emulex OneConnect OCe11102 adapters support NIC partitioning with the switch-agnostic Universal Multi-Channel (UMC) capability, which is based on the IEEE 802.1Qbg standard and allows multiple PCI functions to be created on each adapter port. With the OCe11102-I adapter, each port presents one iSCSI function and three NIC functions to the operating system or hypervisor. UMC adds granularity to the bandwidth allocation provided by the ETS capability of DCB. With ETS, all of the iSCSI traffic is included in the same priority group. With UMC, bandwidth can be allocated to specific network functions within the group. A typical use case would be VMware vSphere deployments with NIC and iSCSI functions dedicated to high-demand VMs, cluster support, VM migration and system management.

[Figure: UMC partitioning of one adapter port into four PCI functions: one NIC, iSCSI or FCoE function plus three NIC functions]

8. Wide Area Networks

Because it's based on Ethernet, iSCSI is particularly well suited for several key deployment scenarios that require data to be transmitted over longer distances. These include:

- Remote offices: allows storage resources to be managed at a central location with a full complement of IT staff. Remote servers or desktops can access data using high-speed WAN links.
- Remote backup: in some cases, remote offices may choose to use local, direct-attached storage but will rely on backups done by the data center. iSCSI enables remote backup over TCP/IP networks that are easy to deploy and manage.
- Disaster recovery: every organization should have a plan for disaster recovery. To support best practices, secondary sites should be located a safe distance away from the primary data center to ensure isolation and protection from natural and man-made disasters.
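The ETS and UMC mechanisms described in sections 6 and 7 form a two-level allocation: ETS gives a priority group a share of the link, and UMC subdivides that share among individual PCI functions. A minimal sketch of that relationship (all function names and percentages here are hypothetical, not Emulex defaults):

```python
# Illustrative two-level allocation: ETS assigns the storage priority
# group a slice of the 10GbE link; UMC then splits the group's slice
# among PCI functions. Names and shares below are hypothetical.

def subdivide(group_gbps, function_shares):
    """Split a priority group's bandwidth across UMC functions.
    `function_shares` maps function name -> percent of the group."""
    if sum(function_shares.values()) != 100:
        raise ValueError("UMC shares within a group should total 100%")
    return {f: group_gbps * pct / 100 for f, pct in function_shares.items()}

ets_storage_group_gbps = 4.0   # e.g. a 40% ETS share of a 10GbE link
umc = subdivide(ets_storage_group_gbps,
                {"iscsi-boot": 25, "iscsi-data": 75})
print(umc)   # {'iscsi-boot': 1.0, 'iscsi-data': 3.0}
```

This is the added granularity the section describes: ETS alone sees one iSCSI priority group, while UMC lets an administrator carve that group's bandwidth per function.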

9. Unified Management

The success of iSCSI is grounded in the ubiquitous Ethernet infrastructure, whose management and processes are well understood by IT professionals. As part of the OneConnect iSCSI solution, Emulex is leveraging its expertise in scalable enterprise management and enabling it for iSCSI deployments. Emulex OneCommand Manager provides a scalable, centralized management platform for administration of Emulex OneConnect iSCSI adapters across all major operating system and virtual server platforms. OneCommand Manager can also be used to manage Emulex Fibre Channel HBAs, FCoE CNAs and 10GbE network adapters from the same management console. Traditional iSCSI management tools are native to the OS platform and provide local host management only, which means the IT administrator must log in to every server in order to configure its iSCSI adapter. With OneCommand Manager, IT operations can:

- Manage all adapters throughout the data center from a single console, regardless of OS.
- Easily view iSCSI ports using the OneCommand Manager Graphical User Interface (GUI), enabling storage data paths to be viewed from a server perspective.
- Streamline management by scripting standard management activities using the OneCommand Manager Command Line Interface (CLI).

10. Growing Ecosystem

With iSCSI storage shipments forecast to approach $4B in 2012, the iSCSI ecosystem continues to grow. Nearly every major storage vendor ships iSCSI storage solutions, including Dell, EMC, Fujitsu, Hitachi/HDS, HP, IBM, NetApp and Oracle. Another major sector in this growing ecosystem is the Ethernet switch, where a managed Layer 3 Ethernet switch is sufficient for most iSCSI implementations. Ethernet switch products supporting iSCSI are available from Arista, Cisco, Brocade, Extreme Networks, Force10 Networks, Fulcrum Microsystems, Juniper, Nortel and Voltaire. Emulex iSCSI adapters are designed to be fully compatible with all of these products.

www.emulex.com

World Headquarters: 3333 Susan Street, Costa Mesa, California 92626, +1 714 662 5600
Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547 | Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0)89 97007 177 | Paris, France +33 (0)1 58 58 00 22 | Tokyo, Japan +81 3 5325 3261 | Wokingham, United Kingdom +44 (0)118 977 2929

12-0567 1/12