NetApp Ethernet Storage Evaluation




Performance evaluation sponsored by NetApp, Inc.

Introduction

Ethernet storage is advancing toward a converged storage network that supports the traditional NFS, CIFS and iSCSI storage protocols and adds Fibre Channel over Ethernet (FCoE), all running over a single connection between the host server and the target storage. In August 2009, NetApp introduced general availability of native FCoE/Data Center Bridging (DCB) connectivity with its FAS and V-Series storage systems. Using the same 10GbE unified target adapter (UTA), NetApp also began publicly demonstrating support for FCoE, iSCSI, NFS and CIFS over a single port, with general availability planned for 2010. Demartek evaluated this storage solution for its flexibility and performance in serving all of these storage protocols. NetApp, working closely with its partners, has provided a unified, end-to-end solution that supports Fibre Channel, FCoE, iSCSI, NFS and CIFS at the same time, all from the same storage solution.

Evaluation Summary

The NetApp FAS3170 storage system proved to be a versatile storage system, handling multiple file and block protocols simultaneously, including FCoE, iSCSI, NFS and CIFS on the same 10Gb converged networking connection. The NetApp storage system also serves traditional Fibre Channel data access on separate Fibre Channel connections within the same storage controller. We found that we were able to drive FCoE connections at nearly their maximum rated speed. Managing the FCoE connection in the NetApp storage system was exactly equivalent to managing a traditional Fibre Channel connection. This makes adopting FCoE technology as seamless as possible, and we believe it is encouraging for IT shops considering FCoE as a technology. NetApp is the first storage vendor to provide general availability of an FCoE storage target, and the NetApp storage system is an excellent system for handling combined Fibre Channel, FCoE, iSCSI, NFS and CIFS protocols in a single solution.

What is Unified Storage and Why is it Important?

Modern datacenters deploy multiple network interfaces running at different speeds and carrying different protocols and types of traffic. The two most common networking technologies deployed in commercial datacenters today are Ethernet and Fibre Channel. These two separate networks exist because each meets needs that the other does not, and each requires its own adapters, switches and cables.

Ethernet, commonly run at 1 Gigabit per second, carries a variety of traditional TCP/IP and UDP traffic and is often used for file serving via the NFS and CIFS protocols. Ethernet can also carry block storage traffic by using the iSCSI protocol. Ethernet networks also support 10 Gigabit per second technology, but this higher-speed Ethernet equipment is typically deployed in the core of the Ethernet network and is not yet commonly found in edge devices.

Fibre Channel, commonly run at 4 Gigabits per second, carries block storage traffic using the SCSI protocol within Fibre Channel frames. Fibre Channel networks also support 8 Gigabit per second technology, and this higher-speed Fibre Channel equipment is beginning to be deployed in datacenters, with vendors such as NetApp among its early supporters.

As computing requirements increase, especially with the popularity of server virtualization and multi-core processors, the existing Ethernet and Fibre Channel networks must add connections and ports to keep pace. With the added concerns of electric power consumption and the large number of cables required for each network, CIOs face challenges in scaling the current technologies to meet the expected demands of future datacenters.

As new racks are deployed in existing datacenters, or as new datacenters are built, one way to handle the increased requirements is to use a single network that provides the best of Ethernet and Fibre Channel, supporting all the protocols over a single cable, a single host adapter technology and a single network switch technology. This simplifies the infrastructure by reducing the number of host adapters, cables and equipment required, while continuing to support today's protocols and applications. A new lossless Ethernet known as Data Center Bridging (DCB), sometimes also called Converged Enhanced Ethernet (CEE), running at 10 Gigabits per second, supports a new standard known as Fibre Channel over Ethernet (FCoE). FCoE and DCB provide the lossless transport that storage protocols such as Fibre Channel require, while maintaining compatibility with all the existing Ethernet-based protocols.
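To make the consolidation argument concrete, a small back-of-the-envelope sketch follows; the server and adapter counts are our own illustrative assumptions, not figures from this evaluation.

```python
# Illustrative only: assumed per-server counts for a pre-convergence design
# (dual GbE NICs plus dual FC HBAs) versus a converged design (dual-port CNAs).
SERVERS = 20                              # hypothetical rack of servers

nics_per_server, hbas_per_server = 2, 2   # separate Ethernet and FC fabrics
cnas_per_server = 2                       # converged: two CNA ports per server

before = SERVERS * (nics_per_server + hbas_per_server)  # adapter ports/cables
after = SERVERS * cnas_per_server
print(f"ports/cables: {before} -> {after} ({before - after} fewer)")
```

With these assumed counts, convergence halves the adapter ports and cables in the rack, and the corresponding switch ports shrink in proportion.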

Overview of the FCoE Technology

Fibre Channel over Ethernet (FCoE) is a standard that has been approved by the T11 Technical Committee and works with Data Center Bridging. DCB, a new form of Ethernet, includes enhancements that make it a viable transport for storage traffic and storage fabrics without requiring TCP/IP. These enhancements are defined in the following IEEE specifications:

802.1Qbb, Priority-based Flow Control (PFC):
- Provides the ability to pause a flow based on its priority
- Allows lossless FCoE traffic without affecting classical Ethernet traffic
- Establishes priority groups using 802.1Q tags

802.1Qaz, Enhanced Transmission Selection (ETS):
- Allows bandwidth allocation based on priority groups
- Allows strict priority for low-bandwidth, low-latency traffic

802.1Qau, Congestion Notification (CN):
- Allows throttling of traffic at the edge of the network when congestion occurs within the network

FCoE is designed to use the same operational model as native Fibre Channel technology. Services such as discovery, world-wide name (WWN) addressing, zoning and LUN masking all operate the same way in FCoE as they do in native Fibre Channel. FCoE hosted on 10 Gbps Enhanced Ethernet extends the reach of Fibre Channel (FC) storage networks, allowing FC storage networks to connect virtually every datacenter server to a centralized pool of storage. Using the FCoE protocol, FC traffic can now be mapped directly onto Enhanced Ethernet. FCoE allows storage and network traffic to be converged onto one set of cables, switches and adapters, reducing cable clutter, power consumption and heat generation. Storage management using an FCoE interface has the same look and feel as storage management with traditional FC interfaces.
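The following toy model illustrates the kind of bandwidth allocation ETS performs; the group names, shares and the allocation function are our own simplified assumptions for illustration, not a real switch API or the configuration of the switch used in this evaluation.

```python
# Toy model of Enhanced Transmission Selection (802.1Qaz). Each priority
# group gets a guaranteed minimum share of the link; unused bandwidth is
# redistributed to groups offering more traffic than their guarantee.
LINK_GBPS = 10.0
SHARES = {"FCoE": 0.50, "LAN": 0.50}   # guaranteed minimum shares (assumed)

def ets_allocation(offered: dict) -> dict:
    # Grant each group up to its guaranteed share of the link.
    granted = {g: min(offered[g], LINK_GBPS * s) for g, s in SHARES.items()}
    spare = LINK_GBPS - sum(granted.values())
    # Work-conserving: hand spare capacity to groups that want more.
    for g in granted:
        extra = min(spare, offered[g] - granted[g])
        granted[g] += extra
        spare -= extra
    return granted

# Example: FCoE offers 4 Gbps while LAN traffic tries to use 9 Gbps;
# LAN borrows FCoE's unused headroom but FCoE keeps its guarantee.
print(ets_allocation({"FCoE": 4.0, "LAN": 9.0}))   # {'FCoE': 4.0, 'LAN': 6.0}
```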

Evaluation Environment

The evaluation was conducted at the NetApp facility in Research Triangle Park, North Carolina and at the Demartek lab in Arvada, Colorado. For this evaluation, Demartek deployed host servers running Windows Server 2008, 10GbE converged network adapters (CNAs), 10GbE DCB-capable switches with FCoE support, and a NetApp FAS storage system configured with the 10GbE unified target adapter.

The components in the NetApp lab included:

NetApp FAS3170 storage system
- One NetApp FAS3170 controller
- Two disk shelves, each containing 14 144GB FC 10K RPM disk drives
- One 10GbE unified target adapter (UTA)

Cisco Nexus 5010 10GbE FCoE Enhanced Ethernet switch

IBM x3550 server
- One Intel Xeon E5345 processor (2.33 GHz), 4 total cores, 4 logical processors
- 8 GB RAM
- Onboard IBM ServeRAID 8k-l SAS RAID controller
- One 73GB 10K RPM SAS disk for boot
- One QLogic QLE8152 dual-port 10Gb CNA configured as a host adapter
- Windows Server 2008 Enterprise x64 edition with Service Pack 2, with Microsoft MPIO enabled

[Figure 1: NetApp Ethernet Storage Configuration]

Test Process

One objective of the tests was to run multiple protocols concurrently between the host server and the NetApp FAS3170 storage system. Another objective was to measure the performance of the unified network while running a mix of traffic types. While the focus of this paper is on FCoE, we took the opportunity to configure multiple protocols (FCoE, iSCSI, CIFS and NFS) to see how easy it is to configure and manage a mixed environment. It proved to be simple and straightforward using the single 10Gb unified target adapter (UTA), and would be familiar to anybody who has configured similar NetApp storage systems. For FCoE traffic, the UTA appeared to the NetApp system as a traditional Fibre Channel adapter and was managed exactly the same way as a traditional FC adapter. The LUNs assigned to the UTA ports were managed as traditional FC LUNs.

A series of tests was run using IOmeter, an open-source I/O workload generator that can perform read and write operations of any block size with any number of simultaneous outstanding operations, without regard to the underlying interface technology. IOmeter tests were run over various types of connections, with selected data shown in the following graphs. The number of sectors read from and written to each of the LUNs was limited so that the I/O would fit into the cache of the FAS3170 storage system. This was done so that the speed of the connection, rather than the speed of the disk drives in the storage system, would be tested.

All the Ethernet and FCoE traffic (using all the protocols) went through the QLogic CNA in the host server, through the Cisco Nexus switch, and through the 10GbE UTA in the NetApp FAS3170 storage system. The server and storage were connected to four 10GbE ports on the Cisco Nexus switch. The systems were connected with OM-3 multi-mode 50 micron fiber optic cables with SFP+ connectors, certified for 10Gb line rate.

Several LUNs and volumes were created on the NetApp FAS3170 storage system: four LUNs for the Fibre Channel protocol, two LUNs for the iSCSI protocol, and two volumes for NFS and CIFS access.
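A minimal sketch of the sizing arithmetic behind the cache-limiting step described above follows; the cache size, sector size and LUN count are assumed values for illustration, since the report does not state the exact figures used.

```python
# Limit each LUN's IOmeter test area (in sectors) so the combined working
# set fits in controller cache. The 8 GiB usable-cache figure is an assumed
# example; consult the actual FAS3170 configuration for the real value.
SECTOR_BYTES = 512
cache_bytes = 8 * 2**30        # assumed usable controller cache
luns_under_test = 4            # assumed number of LUNs exercised at once

max_sectors_per_lun = cache_bytes // luns_under_test // SECTOR_BYTES
print(f"limit each LUN's test area to {max_sectors_per_lun:,} sectors")
```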

Test Results

The results shown below focus on the FCoE and mixed FCoE/iSCSI interface performance. As a reminder, the IOmeter tests were configured to read and write primarily to the cache of the NetApp FAS3170 storage system; as a result, the sequential and random I/O results are very similar. The graphs below show the results of IOmeter tests performing random and sequential reads and writes to LUNs in two relatively simple configurations:

1. FCoE only: one FCoE (FC) LUN
2. Mixed FCoE and iSCSI: one FCoE LUN and one iSCSI LUN

All tests used LUNs formatted with the NTFS file system so that they would be equivalent to regular file system access.

We noted the following characteristics of the FCoE traffic to the NetApp FAS3170 storage system:

- The NetApp FAS3170 treated the FCoE traffic the same way that it treats traditional Fibre Channel traffic. The FCoE ports were regarded as FCP ports.
- We achieved a maximum of 1,099 MBps across one host port while running 100% FCoE traffic to one LUN. This is 92% of the theoretical line rate of a 10Gb connection.
- The host CPU utilization was best (lower and more efficient) when 100% FCoE workloads were performed, especially for the larger block sizes. This is not surprising, because FCoE is handled the same way as traditional Fibre Channel traffic, which is largely offloaded onto the adapter.

We noted the following characteristics of the iSCSI traffic to the NetApp FAS3170 storage system:

- The iSCSI traffic was sensitive to various Ethernet settings in the Cisco Nexus switch. Under most conditions where we ran FCoE and iSCSI traffic concurrently, the FCoE traffic appeared to have priority over Ethernet (iSCSI) traffic. It is our understanding that the Cisco Nexus switch always reserves some bandwidth for FCoE traffic, even under large Ethernet (including iSCSI) traffic loads.
- iSCSI performance is also sensitive to the features of the adapter. There are numerous settings in most server-class Ethernet adapters, and we did not attempt to optimize or test every possible combination of settings; the only setting we changed was to enable jumbo frames. If different adapters were used in the host server, we would expect different iSCSI performance results.
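As a rough cross-check of the 92% figure, the sketch below estimates the usable FCoE payload rate on a 10GbE link from per-frame framing overheads; the byte counts are our own approximations, not values taken from the report.

```python
# Back-of-the-envelope estimate of the usable FCoE data rate on 10GbE.
# Overhead byte counts below are approximations supplied for illustration.
PREAMBLE_IFG = 20      # Ethernet preamble (8) + inter-frame gap (12)
ETH_OVERHEAD = 18      # Ethernet header (14) + FCS (4)
FCOE_OVERHEAD = 18     # FCoE header (~14) + EOF/padding (~4)
FC_OVERHEAD = 36       # FC SOF, frame header, CRC, EOF
DATA_PER_FRAME = 2048  # SCSI data bytes carried per full-size FC frame

wire = PREAMBLE_IFG + ETH_OVERHEAD + FCOE_OVERHEAD + FC_OVERHEAD + DATA_PER_FRAME
rate = (10_000 / 8) * (DATA_PER_FRAME / wire)   # raw 1250 MB/s x efficiency
print(f"usable ~ {rate:.0f} MBps; 1099 MBps is {1099 / rate:.0%} of that")
```

Under these assumptions the usable rate works out to roughly 1,200 MBps, which is consistent with the observed 1,099 MBps being about 92% of the theoretical line rate.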

IOmeter Queue Depth Description

For each block size tested, we ran a set of incrementing queue depths, beginning at queue depth 4 and ending at queue depth 48, incrementing by 4. The queue depth indicates the number of simultaneous I/Os issued. This resulted in twelve data points for each block size, which appear as a curve in the graphs below; in some cases the curve is flat because the throughput had already reached its maximum.

It should be noted that in some cases the mixed FCoE/iSCSI throughput was higher than the pure FCoE throughput. This is due, in part, to having two connections, one for FCoE and one for iSCSI. However, refer back to the comments above regarding iSCSI traffic characteristics.
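The sweep just described can be summarized in a short sketch; run_workload is a hypothetical placeholder for executing one IOmeter access specification (it is not part of IOmeter's actual interface), and the block-size list is assumed.

```python
# Sketch of the sweep the report describes: for each block size, queue depth
# steps from 4 to 48 in increments of 4, yielding twelve data points per
# block size (one throughput curve per block size in the graphs).
BLOCK_SIZES = ["4K", "8K", "16K", "32K", "64K", "128K", "256K"]  # assumed set

def run_workload(block_size: str, queue_depth: int) -> dict:
    """Placeholder for one IOmeter access-spec run; returns dummy metrics."""
    return {"block": block_size, "qd": queue_depth,
            "iops": None, "mbps": None, "cpu_pct": None}

results = {}
for bs in BLOCK_SIZES:
    curve = []
    for qd in range(4, 49, 4):        # 4, 8, ..., 48 -> twelve points
        curve.append(run_workload(bs, qd))
    results[bs] = curve               # one curve per block size
```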

IOps (I/Os per second)

[Charts: IOps for random read, random write, sequential read and sequential write, comparing FCoE_LUN1 with FCoEiSCSI_LUN2 across queue depths.]

MBps (Megabytes per second)

At block sizes of 64K and above, we observed nearly full line rate for FCoE traffic, especially for sequential writes.

[Charts: MBps for random read, random write, sequential read and sequential write, comparing FCoE_LUN1 with FCoEiSCSI_LUN2 across queue depths.]

Percent CPU Utilization

For host CPU utilization, lower is better and more efficient.

[Charts: % CPU utilization for random read, random write, sequential read and sequential write, comparing FCoE_LUN1 with FCoEiSCSI_LUN2 across queue depths.]

Conclusion

NetApp is the first storage vendor to support traditional Fibre Channel, FCoE, iSCSI, NFS and CIFS in a single storage controller and high-availability system. Storage administrators, datacenter managers and CIOs can now deploy a storage product that supports all of today's needs, is compatible with today's technology, and will support the needs of future converged networks built on shared infrastructure architectures.

NetApp is a registered trademark of NetApp, Inc. Demartek is a registered trademark of Demartek, LLC. All other trademarks are the property of their respective owners.