Accelerating Network Attached Storage with iSCSI



ESG Lab Review

EMC MPFSi: Accelerating Network Attached Storage with iSCSI

A Product Review by ESG Lab
May 2006

Authors: Tony Asaro and Brian Garrett

Copyright 2006, Enterprise Strategy Group, Inc. All Rights Reserved

Table of Contents

Introduction
ESG Lab Review
Transparent to Users and Applications
The MPFSi Performance Advantage
MPFSi Performance Scalability
ESG Lab Review Highlights
ESG Lab's View
Appendix

ESG Validation Reviews

The goal of ESG Lab reports is to educate customers about specific storage-related products, including storage systems, backup-to-disk solutions, storage management applications, backup/recovery software, storage virtualization platforms, and more. ESG Lab reports are not meant to replace the evaluation process that end-user customers should conduct; they are designed to provide insight into what is compelling about various products and how they can solve customer problems. ESG Lab provides a third-party expert perspective based on ESG Lab analysis and interviews with customers using these products in production environments.

All trademark names are property of their respective companies. Information contained in this publication has been obtained from sources The Enterprise Strategy Group (ESG) considers to be reliable but is not warranted by ESG. This publication may contain opinions of ESG, which are subject to change from time to time. This publication is copyrighted by The Enterprise Strategy Group, Inc. Any reproduction or redistribution of this publication, in whole or in part, whether in hard-copy format, electronically, or otherwise to persons not authorized to receive it, without the express consent of The Enterprise Strategy Group, Inc., is in violation of U.S. copyright law and will be subject to an action for civil damages and, if applicable, criminal prosecution. Should you have any questions, please contact ESG Client Relations at (508) 482-0188.

Introduction

Network attached storage (NAS) systems understand files and metadata; SAN storage systems only understand block data, which has no meaning to users or applications. Users can share, recover, move, and access data more easily with NAS storage. However, because NAS is file aware, it operates at a higher layer and therefore introduces more latency into the read/write process. Because SAN storage operates at a lower layer, it is blazingly fast.

For years the question has been: what to use, SAN or NAS? For many companies, the answer is both. While it is not a hard and fast rule, many companies use SAN for their database and e-mail applications and NAS for file storage. However, with the introduction of the EMC Multipath File System over iSCSI (MPFSi), companies can now use SAN and NAS not as discrete functions but as a truly unified solution. MPFSi combines the performance benefits of the iSCSI SAN protocol with the intelligence of NAS using a single Ethernet network.

EMC invented the MPFS protocol over six years ago and named the product HighRoad, which was developed for EMC customers with extreme performance requirements. HighRoad used the Fibre Channel SAN protocol to accelerate NAS data transfers. Due to the additional cost and complexity of running Fibre Channel connections to NAS clients, early adoption was limited to a select few customers who required more performance from a single NAS system than was available at the time.

Since then, the iSCSI protocol has emerged as an alternative to Fibre Channel for block-based storage area networking. EMC was heavily involved in the development of the iSCSI specification and has added iSCSI support to its entire line of SAN and NAS storage systems. Over the past two years, the adoption of iSCSI in production systems at companies of all sizes and industries has grown dramatically. ESG believes that iSCSI is entering the early mainstream phase of market adoption.

EMC announced MPFSi in January 2006. MPFSi uses the iSCSI protocol instead of Fibre Channel to accelerate NAS access. While the combination of FC SAN and NAS requires two networks, iSCSI and NAS can share a single network infrastructure. ESG therefore believes that MPFSi will ultimately address a much greater market, providing SAN-level performance with NAS-level intelligence over a unified storage network. Further, EMC has made the MPFSi client software available to the open-source community to encourage the building of applications that use the technology, and EMC is promoting industry standardization of the MPFSi protocol in the form of an extension to the NFSv4 specification called parallel NFS.

MPFSi is innovative technology that combines the intelligence and ease of use of NAS with the speed of a SAN, with the following benefits:

- Faster than traditional NFS
- Runs over existing Ethernet infrastructure
- Transparent to users and applications
- Windows and Unix client support
- Stateless clients, which enable mixed NFS and MPFSi access
- Existing Celerra NAS systems can be upgraded to use MPFSi
- Celerra blades support more clients, users, and applications than with NFS

ESG Lab reviewed an MPFSi solution at an EMC facility in Southborough, Massachusetts. The primary goal of the testing was validation of a performance boost of two to four times compared to NFS, with excellent performance scalability.

ESG Lab Review

Before we examine the results of the ESG Lab testing, let's take a quick look at how MPFSi accelerates network attached storage with iSCSI. As shown in Figure One, a Windows or UNIX client accesses an EMC MPFSi solution over standard Ethernet wiring and switches. The moving parts of an MPFSi solution are:

1. MPFSi agent software installed on the client
2. A Celerra blade, which maintains the file system and handles the NAS protocol
3. A Connectrix MDS storage switch with an IP Storage Services module that handles iSCSI data transfer
4. A CLARiiON or Symmetrix storage system

NAS protocol handling and file system awareness are managed by the Celerra. iSCSI data transfers are handled by an IP Storage Services module in a Connectrix MDS storage switch. IP Storage Services modules, available in four- and eight-port packages, move data to and from the clients using the iSCSI protocol, and to storage systems using the Fibre Channel protocol.

Blades and a storage system are standard items included in every Celerra NAS solution. Celerra file systems are upgraded to MPFSi by adding agent software and one or more IP Storage Services modules. For most customers, the most significant cost associated with MPFSi will be the IP Storage Services module.

Figure One: MPFSi Accelerates Network Attached Storage with iSCSI
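
The split between the metadata path and the data path is the heart of the design. The following is a conceptual sketch, in Python, of that split; it is not EMC's implementation, and every class, method, and value in it is hypothetical, but it shows why the NAS protocol overhead is paid once per file while the bulk of the bytes move as iSCSI block transfers.

```python
# Conceptual sketch of the MPFSi access pattern described above. This is not
# EMC's implementation; all class names, methods, and values are hypothetical.
# The point: NAS protocol overhead is paid once per file (the block layout
# request), after which data moves as plain block transfers over iSCSI.

class NasMetadataServer:
    """Stands in for the Celerra blade: owns the file system and block maps."""
    def open_file(self, path):
        # One NAS protocol round trip returns where the file's blocks live.
        return {"path": path, "blocks": [(0, 0), (0, 1), (0, 2)]}  # (lun, offset)

class IscsiDataPath:
    """Stands in for the iSCSI initiator and IP Storage Services module."""
    def read_block(self, lun, offset, size=1 << 20):
        return b"\x00" * size  # placeholder for a 1 MB block transfer

def mpfsi_read(path, meta=NasMetadataServer(), data=IscsiDataPath()):
    layout = meta.open_file(path)                 # metadata: once per file
    chunks = [data.read_block(lun, off)           # data: block by block over iSCSI
              for lun, off in layout["blocks"]]
    return b"".join(chunks)

print(len(mpfsi_read("/example/file.dat")) // (1 << 20), "MB read")
```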

Transparent to Users and Applications

ESG Lab testing began on a pre-wired and pre-configured test bed, as shown in Figure Two (configuration details are documented in the Appendix). Sixteen Dell servers running Red Hat Linux were connected over Ethernet to a single Celerra NSX X-Blade, two Connectrix MDS IP Storage Services modules, and three storage systems (a CLARiiON CX700 and a pair of CX500s). MPFSi agent and iSCSI initiator software was installed on each of the Linux clients so that file systems could be accessed using either the standard NFS protocol or the EMC MPFSi protocol.

Switching back and forth between NFS and MPFSi was easily performed many times during testing. An option on the mount command (-mpfs) was used to turn on the MPFSi protocol; a remount of the file system without the option switched the access method back to standard NFS. Operating system utilities and applications worked seamlessly and transparently with MPFSi compared to NFS. The only noticeable difference was the speed of MPFSi, which will be examined next.

Figure Two: ESG Lab Test Configuration

Why This Matters

Users and applications can benefit from the performance boost of MPFSi without making any changes. Leveraging existing Ethernet infrastructure, administrators can manage which applications and users are configured for MPFSi. Mixed-mode access, with some clients using MPFSi and some using traditional NFS, is supported so that administrators can get comfortable with the technology on a test client or two. Later, when the administrator is ready, a remount of an existing production file system is all that is needed to realize the performance benefits of MPFSi.
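
For readers who want to picture the switch, the following is a minimal sketch of the remount step on a Linux client, driven from Python. It assumes the "-mpfs" mount option described above and a hypothetical export name and mount point; the exact option spelling and mount syntax on a real Celerra MPFSi client may differ.

```python
# Minimal sketch of flipping a client between NFS and MPFSi by remounting.
# Assumes the "-mpfs" option mentioned in the review; the export name, mount
# point, and exact mount syntax are hypothetical and may differ in practice.
import subprocess

EXPORT = "celerra:/export/fs1"   # hypothetical NFS export
MOUNT_POINT = "/mnt/fs1"

def remount(use_mpfs: bool) -> None:
    subprocess.run(["umount", MOUNT_POINT], check=False)   # ignore if not mounted
    cmd = ["mount"]
    if use_mpfs:
        cmd.append("-mpfs")                                 # turn on MPFSi access
    cmd += [EXPORT, MOUNT_POINT]
    subprocess.run(cmd, check=True)

# Example (requires root and the MPFSi client software installed):
# remount(use_mpfs=True)    # accelerated MPFSi access
# remount(use_mpfs=False)   # back to standard NFS
```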

The MPFSi Performance Advantage

NAS is fundamentally slower than iSCSI due to the chatty nature of high-level NAS protocol handling. iSCSI operates over the same Ethernet infrastructure and is a fast and efficient data transfer protocol. The MPFSi protocol combines iSCSI for fast data transfers with NAS for file system and protocol awareness to make a NAS system run faster.

The following diagram shows how this works for a typical file access. The NAS system is burdened by metadata protocol overhead for each data transfer. In comparison, the MPFSi protocol incurs protocol overhead only at the beginning of each file access; after that initial handshake, data transfers run quickly and efficiently over iSCSI. Because MPFSi can transfer more file data in a shorter period of time, it has a performance advantage over traditional NAS access.

Figure Three: The MPFSi Performance Advantage

The performance advantage of MPFSi was measured using the industry-standard Unix dd utility, which was used to time how long it took to read files that had been previously created with a test utility. Sixteen Linux clients were used to read 80 MB files in one-megabyte blocks. As shown in the following diagram, the aggregate throughput rate of MPFSi was 300% faster than NFS.
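
As a rough illustration of the measurement, the sketch below times the same kind of read the dd test performed: an 80 MB file read in 1 MB blocks, with throughput derived from elapsed time. The file path is hypothetical and this is not the ESG Lab test script; it simply mirrors the described procedure.

```python
# Rough equivalent of the dd-style read test described above: read a file in
# 1 MB blocks and report MB/s. The file path is hypothetical; this is not the
# ESG Lab test script, just the same basic procedure.
import time

BLOCK_SIZE = 1 << 20             # 1 MB reads, as in the test description

def read_throughput(path: str) -> float:
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1 << 20)) / elapsed   # MB/s

# Run once with the file system mounted as NFS and once as MPFSi, e.g.:
# print(read_throughput("/mnt/fs1/testfile_80mb.dat"))
```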

Figure Four: The MPFSi Bandwidth Boost

The results above compare the aggregate throughput capabilities of MPFSi to NFS and indicate that many users, applications, and processes can do more in parallel with MPFSi than with NFS. Another way of looking at the same performance results is shown below. This is how a user perceives the performance advantage of MPFSi: opening, saving, and copying files on a shared network attached drive is three times faster with MPFSi.

Figure Five: MPFSi Moves More Data in Less Time

MPFSi is not only faster than NAS, it is more efficient. NAS protocol handshakes between NAS clients and NAS servers consume a lot of CPU cycles. Switching to the MPFSi protocol reduces the burden on both the NAS clients and the EMC NAS server; as a result, more clients can share the same Celerra NAS system. ESG Lab reviewed the results of EMC testing, which indicate that MPFSi delivers a dramatic reduction in CPU utilization, as shown in the following tables.

Table One: MPFSi Reduces NAS Server CPU Utilization

File Access Pattern    NFS CPU Utilization    MPFSi CPU Utilization
Reads                  40%                    3.0%
Writes                 58%                    1.2%

Table Two: MPFSi Reduces NAS Client CPU Utilization

File Access Pattern    NFS CPU Utilization    MPFSi CPU Utilization
Reads                  14%                    2.0%
Writes                 25%                    6.3%

Why This Matters

MPFSi combines the performance benefits and efficiency of the block-based iSCSI protocol with the simplicity of NAS using a single Ethernet network. ESG observed significant CPU efficiency gains, which can be used to connect more clients to a single Celerra data mover. An MPFSi performance boost of 300% compared to NAS was measured. The bottom line: with MPFSi you can do more, with fewer data movers, and you can do it faster.
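
One way to read those tables is as headroom per data mover. The short calculation below (an interpretation of the published percentages, not an additional ESG measurement) turns each row into a "times less CPU" ratio: for example, server-side reads drop from 40% to 3%, roughly a 13x reduction in CPU spent per unit of client workload.

```python
# Back-of-envelope reading of Tables One and Two; an interpretation of the
# published percentages, not an additional ESG measurement.
server = {"reads": (40.0, 3.0), "writes": (58.0, 1.2)}   # % CPU: (NFS, MPFSi)
client = {"reads": (14.0, 2.0), "writes": (25.0, 6.3)}

for side, table in (("NAS server", server), ("NAS client", client)):
    for pattern, (nfs, mpfs) in table.items():
        print(f"{side}, {pattern}: ~{nfs / mpfs:.1f}x less CPU with MPFSi")
```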

MPFSi Performance Scalability

ESG Lab performed a series of tests with the goal of determining the performance scalability of MPFSi. The EMC fstest utility was run from 16 Linux clients connected to a single Celerra NSX blade and three CLARiiON storage systems, as shown previously in Figure Two. The utility was used to create files of various sizes, which were then exercised using a variety of access patterns. The performance scalability of reads and writes, over large and small files, using random and sequential data access patterns was reviewed. In all cases, performance scaled in a near-linear fashion as the number of clients and Ethernet interfaces was increased.

The following diagram shows the performance scalability of large files accessed sequentially. ESG Lab observed that near wire-speed throughput for large sequential reads and writes over a single Gigabit Ethernet interface scaled well as the number of interfaces using the MPFSi protocol was increased to 16.

Figure Six: MPFSi Performance Scalability

Why This Matters

Companies that rely on NAS storage systems shared by a large number of users and applications need predictable performance scalability. The performance scalability limits of legacy NAS systems have forced some companies to deploy many NAS systems to meet this need, which increases complexity and cost. ESG Lab verified that the modular architecture of MPFSi delivers predictable performance scalability from a single system. The number of blades, Connectrix IP Storage Services modules, storage systems, and hard drives can be varied to meet a wide range of performance and capacity requirements.
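
To put "near wire speed, scaling to 16 interfaces" in concrete terms, the sketch below walks the ideal linear case. The per-interface number is an assumption about usable Gigabit Ethernet payload bandwidth (roughly 110-120 MB/s), not a figure published in the report; the measured curves in Figure Six are what actually matter.

```python
# Ideal linear scaling across GigE interfaces. The per-interface throughput is
# an assumed usable GigE payload rate (~115 MB/s), not a number from the report.
GIGE_USABLE_MBPS = 115

for interfaces in (1, 2, 4, 8, 16):
    ideal = interfaces * GIGE_USABLE_MBPS
    print(f"{interfaces:2d} interfaces -> ~{ideal:,} MB/s if scaling stays linear")
```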

ESG Lab Review Highlights

- ESG Lab verified that MPFSi is transparent to users and applications. A mount operation was all that was needed to switch between traditional NAS access and MPFSi, and NAS and MPFSi were used to access the same file system.
- ESG Lab measured a 300 percent MPFSi performance advantage compared to NFS.
- Near wire-speed throughput for large sequential reads and writes over a single GigE interface scaled well as the number of interfaces using the MPFSi protocol was increased to 16.
- MPFSi significantly reduced CPU utilization compared to NFS. EMC Celerra NSX CPU utilization for random reads was reduced from 40% to 3%, and NFS client utilization was reduced from 14% to 2%.

ESG Lab's View

There is brilliance and innovation in MPFSi. MPFSi leverages the ease of use, and more importantly the intelligence, of NAS with the speed of a SAN, all over the same Ethernet infrastructure. Offloading NAS data transfer handling from an EMC Celerra blade to iSCSI is fast, efficient, and frees up CPU cycles on the Celerra. ESG Lab measured an MPFSi performance boost of 300% compared to NFS when accessing the same file system. Freed from the burden of managing data transfers, Celerra data movers can handle more clients, users, applications, and processes while delivering near wire-speed throughput with excellent performance scalability.

Early adopters with extreme NAS performance requirements will be the first to adopt MPFSi, and those users can pave the way for broader market adoption. Industry standardization and the open-source availability of the MPFSi client will also be important, and EMC is pushing this heavily. ESG believes that the standardization of MPFSi, along with the inevitable commoditization and consolidation of the iSCSI processing currently provided by Connectrix MDS IP Storage Services modules, has the potential to drive NAS usage into high-performance environments that today can only be served by SANs.

MPFSi has the potential to change the way we do storage networking. ESG believes that the world of storage will not be NAS, and it will not be SAN; it will be an amalgamation of both, leveraging the strengths of each.

Appendix

Test Configuration

Clients: 16 Dell 2850s, dual 3 GHz processors, 4 GB RAM
Client operating system: Red Hat Enterprise Server, version 3
Data movers: One Celerra NSX, DART version 5.5
Connectrix MDS IP Storage Modules: Two 8-port modules, 16 GigE ports total
Storage systems: 1 CLARiiON CX700, 2 CX500s, FLARE 19
Hard drives: 120 FC drives, 10K RPM, 300 GB