
iSCSI Test Results
Steven Hill, January 2010

Test Methodology

For this test series we traveled to the Emulex test lab at their corporate headquarters in Costa Mesa, California, to do hands-on testing of three key CNA products, along with some additional products to compare iSCSI, FCoE and 10GbE TCP/IP performance. This Test Report addresses the iSCSI testing.

To ensure objectivity, we coordinated these tests in advance with the test group at Emulex and went to their location to verify their test beds (see attached diagrams). We also ran additional, random tests to audit the accuracy of their results and establish repeatable baseline performance. We are pleased to confirm that not only was the testing methodology sound and impartial, but our support team at Emulex had gone to great lengths to ensure that all products under test were portrayed as fairly as possible. Substantial pre-test evaluations were done with each card to determine its highest possible performance, and only the best repeatable statistics for each device are used in this comparison. We also ensured that the products tested were running the most current firmware and were set to factory defaults, neither tuned nor optimized, so that the results in this report would not be skewed for any product.

Picture 1: Tester and Test Report Author Steven Hill

IT Brand Pulse
13 San Vincente
Rancho Santa Margarita, California 92688
949.713.2313
www.itbrandpulse.com

Steven Hill has been testing adapters and switches for Network Computing Magazine since 2003.

10GbE NICs Running iSCSI

All Ethernet-based devices already have native support for the iSCSI protocol through software such as Microsoft's iSCSI Initiator, which means that all CNAs are iSCSI-ready. Basic iSCSI access is one thing, but if you have server-based applications running on iSCSI storage at 10Gb speeds, it makes more sense than ever to look for an adapter capable of hardware-level iSCSI acceleration.

Diagram 1: Microsoft iSCSI Initiator (source: Microsoft.com)

An advantage of using a network adapter in the server is that network adapters are a standard component in all computers, and the Microsoft iSCSI Software Initiator is a free download.
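This software-initiator path is the baseline every adapter in this report shares. As a concrete illustration (not part of the original test procedure), the sketch below shows how a Windows host running the Microsoft iSCSI Software Initiator might be pointed at a target from a script, using the iscsicli.exe tool that ships with Windows. The portal address and target IQN are hypothetical placeholders, and the commands require an elevated prompt on a host with the initiator service running.

```python
# Minimal sketch: driving the Microsoft iSCSI Software Initiator from a
# script via the iscsicli.exe command-line tool included with Windows.
# The portal address and target IQN below are hypothetical placeholders.
import subprocess

PORTAL = "192.168.10.50"                      # hypothetical iSCSI target portal
TARGET = "iqn.2010-01.com.example:ramdisk0"   # hypothetical target IQN

def run(args):
    """Run a command and return its stdout, raising on a non-zero exit."""
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

# Register the target portal so the initiator can discover targets on it.
run(["iscsicli", "QAddTargetPortal", PORTAL])

# List the targets the initiator has discovered.
print(run(["iscsicli", "ListTargets"]))

# Log in to the target; its LUNs then appear to Windows as local disks,
# which is what lets a tool like Iometer exercise them directly.
run(["iscsicli", "QLoginTarget", TARGET])
```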

iSCSI Players

The environment required to test full iSCSI performance in a CNA included a 10Gb Ethernet switch; 10Gb CNAs from Chelsio, Brocade, Emulex, Intel and QLogic; and solid state disks from Third I/O Inc.

Table 1: iSCSI Products Tested

Chelsio N310: The N310E is a single-port, low-profile 10GbE NIC based on the Terminator 3 ASIC. Also available in a dual-port configuration, this PCIe x8 card comes with either SFP+ or 10GBASE-CX connectivity. It is primarily marketed as a 10GbE server adapter with iSCSI acceleration capabilities.

Brocade 1020: Currently offers only software-level iSCSI running over the NIC device in addition to its FCoE capabilities.

Emulex OCe10102-I: Identical from a hardware perspective to the OCe10102-F, the OCe10102-I is the iSCSI acceleration-enabled version of the 10000 series. Under Emulex's current modular strategy, you have the option to purchase the card optimized for either FCoE or iSCSI acceleration, but not both at the same time. This doesn't mean that the FCoE version can't do iSCSI using a software initiator; it just means that both protocols will not be accelerated concurrently.

Intel X520: The X520 is Intel's flagship 10GbE adapter and is a low-profile, single-ASIC, PCIe x8 Gen2 card. This adapter is also available in single- and dual-port SFP+ configurations, supporting either fiber or copper connectivity, and it does not support hardware-based iSCSI acceleration.

QLogic QLE8152: Currently offers only software-level iSCSI support running over the NIC device in addition to its FCoE capabilities.

iSCSI Test Setup

To deliver the 400K+ IOPS that a full-speed 10GbE iSCSI port could potentially require, we needed an iSCSI target that would not restrict that capability. Since nothing currently on the market could provide that level of performance, we built an iSCSI target based on the Emulex iSCSI initiator running on multiple servers in target mode. This emulated the behavior of a top-performing iSCSI disk array by providing storage LUNs that use system RAM for disk space while operating externally over the industry-standard iSCSI protocol. For performance ratings, we used Iometer running locally on the initiator system.

Diagram 2: iSCSI Test Bench for Iometer
- 10Gb Ethernet switch
- Initiator: Windows 2008 x86, Nehalem CPU (8 cores), running Iometer 7.27, connected via 10Gb Ethernet
- iSCSI target servers: internal RAM disk, 4x 10GbE target ports, 8 x 1GB raw LUNs
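To put the 400K+ IOPS figure in context, the back-of-the-envelope arithmetic below shows the I/O rate needed to saturate a 10Gb/s link at various block sizes. This is a simplified calculation that ignores Ethernet, IP, TCP and iSCSI framing overhead, so real achievable rates are somewhat lower than these ideals.

```python
# Simplified arithmetic: IOPS required to fill a 10Gb/s link at a given
# block size. Ignores Ethernet/IP/TCP/iSCSI header overhead, so real
# wire-rate numbers are somewhat lower than these ideal figures.
LINK_BITS_PER_SEC = 10_000_000_000  # 10GbE line rate

def line_rate_iops(block_bytes: int) -> float:
    """I/Os per second needed to move block_bytes payloads at full line rate."""
    link_bytes_per_sec = LINK_BITS_PER_SEC / 8
    return link_bytes_per_sec / block_bytes

for kib in (1, 2, 4, 8, 64):
    print(f"{kib:>3} KiB blocks: {line_rate_iops(kib * 1024):>10,.0f} IOPS")

# 4 KiB blocks (the Exchange Information Store size cited below) already
# demand ~305,000 IOPS, and smaller blocks push the requirement well past
# 400K, which is why the RAM-backed target described above was necessary.
```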

iSCSI Performance Testing

iSCSI storage is no different from any other storage system in this respect: we loaded the server and tested performance at various data block sizes with each product. Here's what we found.

iSCSI IOPS Performance

The IOPS capabilities of the Emulex card exceed those of the others by a substantial margin in small transfers. Even at the 4K block size (used by the Microsoft Exchange Information Store), its IOPS performance continued to outdistance the other products tested.

Chart 1: iSCSI IOPS Performance

iSCSI Read Throughput Performance

This is where the differential between products starts to show up, with Emulex leading the way from the start. Interestingly, the cards from both Intel and QLogic were unable to perform at line rate, most likely due to the limitations of the software-based iSCSI Initiator and their lack of iSCSI acceleration capabilities.

Chart 2: iSCSI Read Throughput Performance
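The test matrix itself is simple to express. The sketch below is only an outline of the sweep described above: run_iometer_test() is a hypothetical stand-in for the actual Iometer runs (Iometer is normally driven through its GUI or configuration files, not a function like this), and the exact block-size list is an assumption, since the report only calls out the 4K and 8K points explicitly.

```python
# Sketch of the test matrix described above: sweep block sizes and access
# patterns for each adapter. run_iometer_test() is a hypothetical stand-in
# for the real Iometer runs, and the block-size list is an assumption.
from itertools import product

ADAPTERS = ["Chelsio N310", "Brocade 1020", "Emulex OCe10102-I",
            "Intel X520", "QLogic QLE8152"]
BLOCK_SIZES_KB = [0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256]  # assumed sweep
PATTERNS = ["100% read", "100% write", "mixed read/write"]

def run_iometer_test(adapter: str, block_kb: float, pattern: str) -> dict:
    """Hypothetical stand-in: a real harness would return the measured
    IOPS, throughput and CPU utilization for one Iometer pass."""
    return {"iops": None, "mbps": None, "cpu_pct": None}

results = {}
for adapter, block_kb, pattern in product(ADAPTERS, BLOCK_SIZES_KB, PATTERNS):
    results[(adapter, block_kb, pattern)] = run_iometer_test(
        adapter, block_kb, pattern)
```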

iSCSI Write Throughput Performance

The difference is much less noticeable on the write side, where all the products performed very well. Notice that all were able to reach full line rate at about the 8K block size.

Chart 3: iSCSI Write Throughput Performance

iSCSI Mixed Read/Write Throughput Performance

As with the read tests, this is where the differential between products shows up, with Emulex leading the way from the start. Again, the cards from both Intel and QLogic were unable to perform at line rate, most likely due to the limitations of the software-based iSCSI Initiator and their lack of iSCSI acceleration capabilities.

Chart 4: iSCSI Mixed Read/Write Throughput Performance

iSCSI Performance Conclusions

Once again Emulex led the field in the iSCSI tests, where only Emulex and Chelsio offer any hardware-based iSCSI acceleration capabilities. Brocade's card did surprisingly well considering that it was using the same iSCSI initiator as everyone else, while both Intel and QLogic showed performance limitations due to their software-only iSCSI approach.

CPU Efficiency (IOPS)

This measurement is the ratio of IOPS to average CPU utilization during transfers at various block sizes. Products that feature iSCSI offload engines perform well on this test. As noted, Emulex leads the tests with the highest CPU efficiency number. This indicates that the host server is not being burdened with iSCSI protocol processing and will have more CPU available for applications, a critical requirement in virtualized environments. Both iSCSI read and write operations show Emulex leading in CPU efficiency.

Chart 5: CPU Efficiency, iSCSI Read
Chart 6: CPU Efficiency, iSCSI Write
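As a worked example of the metric (with made-up numbers, not measurements from this report), the efficiency calculation reduces to a simple division, and it makes clear why an offload engine scores well: the same IOPS delivered with less host CPU yields a higher ratio.

```python
# The CPU-efficiency metric defined above: IOPS divided by average CPU
# utilization (in percent) over the run. The sample figures below are
# illustrative placeholders, not measurements from this report.
def cpu_efficiency(iops: float, avg_cpu_util_pct: float) -> float:
    """IOPS delivered per percentage point of host CPU consumed."""
    return iops / avg_cpu_util_pct

# Two hypothetical adapters delivering the same IOPS: the one with an
# iSCSI offload engine burns far less host CPU, so its efficiency is higher.
print(cpu_efficiency(300_000, 15.0))  # offload engine:    20,000 IOPS per CPU%
print(cpu_efficiency(300_000, 60.0))  # software initiator: 5,000 IOPS per CPU%
```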

User Management Considerations

Although we do this all the time, we still try to explore the same out-of-box experience as the typical user. There's a substantial variance in the level of networking expertise to be found in most IT shops, and not every company wants or needs to dig deeply into the minutiae of network performance tweaking. Simplicity is good and reducing management costs is a top priority, which is why we test devices using factory defaults; aside from the expected addressing and protocol setup, installing these cards shouldn't be an exercise in failure and frustration.

Chelsio and Brocade offered only the basics when it came to driver installation and configuration, expecting users to drop to the Windows Control Panel or Device Manager for basic options, while Intel provided the standard PROSet utility included with all of its NICs. QLogic provided a slightly updated version of the SANsurfer management tool used for its other Fibre Channel and iSCSI Host Bus Adapters (HBAs), which now supports the additional configuration options for FCoE and provides reporting, as well as an agent to support discovery and management by third-party SAN applications.

But of all the tools we tested, Emulex offered by far the richest and cleanest single-pane-of-glass management environment. The Emulex OneCommand Manager utility provides automated discovery, configuration, reporting and agentless performance analysis of any Emulex device in the network, a level of functionality usually reserved for expensive third-party SAN management tools. A major feature of Emulex's management tool is the way it clearly associates each logical FCoE, iSCSI or NIC port with its actual physical port in order to eliminate configuration errors. Even more interesting is that, unlike the other management tools, only OneCommand Manager offers a GUI-based NIC teaming tool that supports other NIC brands as well as Emulex's own.

Chart 7: iSCSI Management, Competitive Comparison

About the Author: Steven Hill, Contributing Analyst, IT Brand Pulse

As the Technology Editor of Storage and Servers for Network Computing Magazine, Steven Hill was responsible for coverage of emerging technologies for the modern datacenter, and he personally tested, analyzed and reported on some of the newest enterprise-level hardware and software offerings available today. Prior to Network Computing, his 35-year career provided production and problem-solving experience in small-business as well as Fortune 500 corporate environments. Steven now serves as an independent IT consultant, writer, analyst and speaker on numerous enterprise IT topics. He currently operates out of his secret test facility in the deep woods of northeastern Wisconsin, along with his Hound Dog/Network Administrator Tucker and Sheltie Mix/Security Officer Mia. Steven, Tucker and Mia can be contacted at: shillpub@gmail.com

COPYRIGHT NOTICE

This IT Brand Pulse research document was published as part of an IT Brand Pulse unified networking and continuous brand intelligence service, providing written research, analyst interactions and telebriefings. Visit www.itbrandpulse.com to learn more about IT Brand Pulse research and brand development services. Please contact IT Brand Pulse at 949-300-8917 or frank.berry@itbrandpulse.com for information about redistribution rights. Copyright 2010 IT Brand Pulse. All rights reserved.