The Evolution of Solid State Storage in Enterprise Servers


Introduction

Solid state drives (SSDs) and PCI Express (PCIe) flash memory adapters are growing in popularity in enterprise, service provider and cloud datacenters owing to their ability to cost-effectively improve application-level performance. A PCIe flash adapter is a solid state storage device that plugs directly into a PCIe slot of an individual server, placing fast, persistent storage near the server processors to accelerate application-level performance. By placing storage closer to the server's CPU, PCIe flash adapters dramatically reduce latency in storage transactions compared to traditional hard disk drive (HDD) storage, but the configuration lacks standardization and critical storage device attributes such as external serviceability with hot-pluggability. To overcome these limitations, various organizations are developing PCIe storage standards that extend PCIe onto the server storage mid-plane to provide external serviceability. These new PCIe storage standards take full advantage of flash memory's low latency, and provide an evolutionary path for its use in enterprise servers.

This whitepaper introduces the new standards in the context of flash memory's evolutionary integration into the enterprise. The material is organized into five sections followed by a brief summary. The first section provides some background by describing the role flash memory plays in the overall server-storage hierarchy. Sections two and three cover how flash memory is deployed today in SSDs and PCIe flash adapters, respectively. Sections four and five highlight the emerging PCIe storage standards, as well as how and when devices based on these standards are likely to be commercially available.

The Need for Speed

Many applications benefit considerably from the use of solid state storage owing to the enormous latency gap that exists between the server's main memory and its direct-attached HDDs. Flash storage enables database applications, for example, to experience a 4 to 10 times improvement in performance. The reason, as shown in Figure 1, is that access to main memory takes about 100 nanoseconds, while input/output (I/O) to traditional rotating storage is on the order of 10 milliseconds or more. This access latency difference, approximately five orders of magnitude, has a profound adverse impact on application-level performance and response times.

Figure 1: NAND flash memory fills the gap in latency between a server's main memory and fast-spinning HDDs. (Latency penalty for leaving the memory hierarchy: processor L1 cache ~1 ns; L2 cache ~10 ns; main memory ~100 ns; Tier 1 storage ~10,000,000 ns (10 ms); Tier 2 storage ~20,000,000 ns (20 ms); near-line storage >20,000,000 ns (>20 ms).)
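As a rough illustration of the gap Figure 1 describes, the short Python sketch below works through the paper's nominal figures (the Tier 0 flash value is taken from the next section). It is an illustrative calculation, not a measurement.

```python
# Back-of-the-envelope sketch of the latency hierarchy in Figure 1.
# The values are the nominal figures quoted in this paper (the Tier 0
# flash number comes from the following section), not measurements.
import math

LATENCY_NS = {
    "L1 cache":            1,
    "L2 cache":            10,
    "Main memory":         100,
    "Flash (Tier 0)":      50_000,      # 50 us, low end of the 50 us to several-hundred-us range
    "Fast HDD (Tier 1)":   10_000_000,  # ~10 ms
    "Near-line (Tier 2+)": 20_000_000,  # ~20 ms or more
}

def orders_of_magnitude(fast_ns: float, slow_ns: float) -> float:
    """How many powers of ten separate two access latencies."""
    return math.log10(slow_ns / fast_ns)

# The gap the paper highlights: main memory vs. a fast-spinning HDD.
print(orders_of_magnitude(LATENCY_NS["Main memory"], LATENCY_NS["Fast HDD (Tier 1)"]))  # 5.0

# How much of that gap a flash Tier 0 closes.
print(orders_of_magnitude(LATENCY_NS["Main memory"], LATENCY_NS["Flash (Tier 0)"]))     # ~2.7
```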

Latency to external storage area networks (SANs) and network-attached storage (NAS) is even higher owing to the intervening network infrastructure (e.g., Fibre Channel or Ethernet). Flash memory provides a new high-performance storage tier that fills the gap between a server's dynamic random access memory (DRAM) and Tier 1 storage consisting of the fastest-spinning HDDs. This new Tier 0 of solid state storage, with latencies from 50 to several hundred microseconds, delivers dramatic gains in application-level performance, while continuing to leverage rotating media's cost-per-gigabyte advantage in all lower tiers.

Because the need for speed is so pressing in many of today's applications, IT managers could not wait for new flash-optimized storage standards to be finalized and become commercially available. For this reason, SSDs supporting the existing SAS and SATA standards, as well as proprietary PCIe-based flash adapters, are already being deployed in datacenters. Because these existing solid state storage solutions utilize very different configurations, each is addressed separately in the next two sections.

SAS and SATA SSDs

The norm today for direct-attached storage (DAS) is a rack-mount server with an externally accessible chassis having multiple 9W storage bays capable of accepting a mix of SAS and SATA drives operating at up to 6 gigabits per second (Gb/s). As shown in Figure 2, the storage mid-plane typically interfaces with the server motherboard via a PCIe-based host redundant array of independent disks (RAID) adapter that has an embedded RAID-on-Chip (ROC) controller. While originally designed for HDDs, this configuration is ideal for SSDs that utilize 2.5-inch and 3.5-inch HDD form factors.

Figure 2: SAS and SATA SSDs are supported today in standard storage bays with a ROC controller on the server's PCIe bus.

Support for SAS and SATA HDDs and SSDs in various RAID configurations provides a number of benefits in DAS configurations. One such benefit is the ability to mix high-performance SAS and SATA SSDs with low-cost SATA drives in tiers of storage directly on the server. The fastest Tier 0 can utilize SAS SSDs, while the slowest tier utilizes SATA HDDs (or external SAN or NAS). In some configurations, firmware on the RAID adapter can transparently cache application data onto SSDs.

Because the bays are externally accessible and hot-pluggable, the configuration of disks can be changed as needed to improve performance by adding more SSDs, to expand capacity in any tier, or to replace defective drives and restore full RAID-level data protection. Because the arrangement is fully standardized, any bay can support any SAS or SATA drive. Device connectivity is easily scaled via an in-server SAS expander, or via SAS connections to external drive enclosures (commonly called JBODs, for "Just a Bunch of Disks").

The main advantage of deploying flash in HDD form factors using established SAS and SATA protocols is that it significantly accelerates application performance while leveraging mature standards and the existing infrastructure (both hardware and software). For this reason, this configuration will remain popular well into the future in all but the most demanding latency-sensitive applications. And enhancements continue to be made, including RAID adapters getting faster with PCIe version 3.0, and 12 Gb/s SAS SSDs poised for broad deployment beginning in 2013. Even with continual advances and enhancements, though, SAS and SATA cannot capitalize fully on flash memory's performance potential.
The most obvious constraints are the limited power (9W) and channel width (1 or 2 lanes) available in a storage bay that was initially designed to accommodate rotating magnetic media, not flash. These constraints limit the performance possible with the amount of flash that can be deployed in a typical HDD form factor, and are the driving force behind the emergence of PCIe flash adapters.

PCIe Flash Adapters

Instead of plugging into a storage bay, a flash adapter plugs directly into a PCIe bus slot on the server's motherboard, giving it direct access to the CPU and main memory. The result is a latency as low as 50 microseconds for (buffered) I/O operations to solid state storage.
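To put the bay constraints and the slot widths discussed below on a common footing, the following back-of-the-envelope Python comparison lists nominal interface ceilings. It uses the per-lane figures quoted in this paper (1 GB/s per PCIe 3.0 lane, 1.5 GB/s per 12 Gb/s SAS lane) and raw SATA line rates, ignoring protocol overhead; the Express Bay and SATA Express entries anticipate configurations introduced later in the paper.

```python
# Nominal throughput ceilings for the interfaces discussed in this paper.
# Per-lane figures follow the paper's own numbers (1 GB/s per PCIe 3.0 lane,
# 1.5 GB/s per 12 Gb/s SAS lane); raw line rates, so real devices see less.

def gb_per_s_from_gbps(gbps: float) -> float:
    """Convert a raw gigabit-per-second line rate to gigabytes per second."""
    return gbps / 8.0

INTERFACES_GB_S = {
    "SATA drive bay (1 lane @ 6 Gb/s)":   1 * gb_per_s_from_gbps(6),   # 0.75 GB/s
    "SAS drive bay (2 lanes @ 12 Gb/s)":  2 * gb_per_s_from_gbps(12),  # 3.0  GB/s
    "SATA Express (2 lanes PCIe 3.0)":    2 * 1.0,                     # 2.0  GB/s
    "Express Bay (x4 PCIe 3.0)":          4 * 1.0,                     # 4.0  GB/s
    "PCIe flash adapter (x8 PCIe 3.0)":   8 * 1.0,                     # 8.0  GB/s
}

for name, gb_s in INTERFACES_GB_S.items():
    print(f"{name:36s} ~{gb_s:.2f} GB/s")
```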

Flash Cache Acceleration Cards

Caching content to memory in a server is a proven technique for reducing latency, and thereby improving application-level performance. But because the amount of memory possible in a server (measured in gigabytes) is only a small fraction of the capacity of even a single disk drive (measured in terabytes), achieving performance gains from this traditional form of caching is becoming difficult. Flash memory breaks through the cache size limitation imposed by DRAM to again make caching a highly effective and cost-effective means of accelerating application-level performance. Flash memory is also non-volatile, giving it another important advantage over DRAM caches. For these reasons, PCIe-based flash cache adapters, such as the LSI Nytro XD application acceleration solution, have already become a popular solution for enhancing performance.

Solid state memory typically delivers the highest performance gains when the flash cache is placed directly in the server on the PCIe bus. Embedded or host-based intelligent caching software is used to place hot data (the most frequently accessed data) in the low-latency, high-performance flash storage. Even though flash memory has a higher latency than DRAM, PCIe flash cache cards deliver superior performance for two reasons. The first is the significantly higher capacity of flash memory, which dramatically increases the hit rate of the cache. Indeed, with some flash cards now supporting multiple terabytes of solid state storage, there is often sufficient capacity to store entire databases or other datasets as hot data. The second reason involves the location of the flash cache: directly in the server on the PCIe bus. With no external connections and no intervening network to a SAN or NAS (which is also subject to frequent congestion and deep queues), the hot data is accessible in a flash (pun intended) in a deterministic manner under all circumstances.

PCIe Flash Adapters

Because there are no standards yet for PCIe storage devices, flash adapter vendors must supply a device driver to interface with the host's file system. (In some cases vendor-specific drivers are bundled with popular server operating systems.)

Figure 3: PCIe flash adapters overcome the limitations imposed by legacy storage protocols, but must be plugged directly into the server's PCIe bus.

Unlike storage bays that provide 1 or 2 lanes, server PCIe slots are typically 4 or 8 lanes wide. An 8-lane (x8) PCIe version 3.0 slot, for example, is capable of providing a throughput of 8 GB/s (8 lanes at 1 GB/s each). By contrast, a SAS storage bay can scale to 3 GB/s (2 lanes at 12 Gb/s, or 1.5 GB/s, each). The higher bandwidth increases I/O operations per second (IOPS), which reduces the transaction latency experienced by some applications.

Another significant advantage of a PCIe slot is the higher power available, which enables larger flash arrays, as well as more parallel read/write operations to the array(s). The PCIe bus supports up to 25W per slot, and if even more is needed, a separate connection can be made to the server's power supply, similar to the way high-end PCIe graphics cards are configured in workstations. For half-height, half-length (HHHL) cards today, 25W is normally sufficient. Ultra-high-capacity full-height cards often require additional power. A PCIe flash adapter can be utilized either as flash cache or as a primary storage solid state drive.
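The sidebar's point about hit rate can be made concrete with a minimal model. The Python sketch below is illustrative only (not a benchmark of any product); it assumes a ~50 microsecond flash cache hit and a ~10 millisecond miss to rotating media, the nominal figures used earlier in this paper.

```python
# Illustrative cache model (not a benchmark): average read latency as a
# function of cache hit rate, using the nominal figures from this paper
# (~50 us for a PCIe flash cache hit, ~10 ms for a miss to rotating media).

FLASH_HIT_US = 50       # PCIe flash cache hit
HDD_MISS_US  = 10_000   # miss serviced by a fast-spinning HDD (~10 ms)

def avg_read_latency_us(hit_rate: float) -> float:
    """Expected read latency for a given cache hit rate in [0.0, 1.0]."""
    return hit_rate * FLASH_HIT_US + (1.0 - hit_rate) * HDD_MISS_US

for hit_rate in (0.50, 0.90, 0.99):
    print(f"hit rate {hit_rate:.0%}: ~{avg_read_latency_us(hit_rate):,.0f} us average")

# hit rate 50%: ~5,025 us
# hit rate 90%: ~1,045 us
# hit rate 99%: ~150 us
```

At a 99 percent hit rate the average is dominated by the flash tier, which is why multi-terabyte cache capacities matter as much as raw device latency.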
The more common configuration today is flash cache to accelerate I/O to DAS, SAN or NAS rotating media (see the sidebar on Flash Cache Acceleration Cards). Adapters used as an SSD are often available with advanced capabilities, such as host-based RAID for data protection, but the PCIe bus is not an ideal platform for primary storage owing to its lack of external serviceability and hot-pluggability.

Although the use of PCIe flash adapters can dramatically improve application performance, PCIe was not designed to accommodate storage devices directly. PCIe adapters are not externally serviceable, are not hot-pluggable, and are difficult to manage as part of an enterprise storage infrastructure. And the proprietary nature of PCIe flash adapters is an impediment to a robust, interoperable multi-party device ecosystem. Overcoming these limitations requires a new industry-standard PCIe storage solution.

Express Bay

Support for the PCIe interface on an externally accessible storage mid-plane is just now emerging, based on the new Express Bay standard with the new SFF-8639 connector. The Express Bay provides four dedicated PCIe lanes and up to 25W of power to accommodate ultra-high-performance, high-capacity enterprise PCIe SSDs (eSSDs) in a 2.5-inch or 3.5-inch disk drive form factor. As a superset of today's standard disk drive bay, the Express Bay maintains backwards compatibility with existing SAS and SATA devices.

The Express Bay standard is being created by the SSD Form Factor Working Group (www.ssdformfactor.org) in cooperation with the SFF Committee, the SCSI Trade Association, the PCI Special Interest Group and the Serial ATA International Organization. A copy of the Enterprise SSD Form Factor 1.0 Specification is available at www.ssdformfactor.org/docs/SSD_Form_Factor_Version1_00.pdf.

Enterprise SSDs for the Express Bay will initially use vendor-specific protocols enabled by vendor-supplied host drivers. Enterprise SSDs compliant with the new NVM Express (NVMe) flash-optimized host interface protocol will emerge in 2013. NVMe is being defined by the NVMe Work Group (www.nvmexpress.org) for use in PCIe devices targeting both clients (PCs, ultrabooks, etc.) and servers. By 2014, standard NVMe host drivers should be available in all major operating systems, eliminating the need for vendor-specific drivers (except when a vendor supplies a driver to enable unique capabilities). Also in 2014, eSSDs compliant with the new SCSI Express (SCSIe) host interface protocol are expected to make their debut. SCSIe SSDs will be optimized for enterprise applications, and should fit seamlessly under existing enterprise storage applications based on the SCSI architecture and command set. SCSIe is being defined by the SCSI Trade Association (www.scsita.org) and the InterNational Committee for Information Technology Standards (INCITS) Technical Committee T10 for SCSI Storage Interfaces (www.t10.org).

As shown in Figure 4, most mid-planes supporting the new Express Bays will interface with the server via two separate PCIe-based cards: a PCIe switch to support high-performance eSSDs, and a RAID adapter to support legacy SAS and SATA devices. Direct support for PCIe (through the PCIe switch) makes it possible to put flash cache acceleration solutions in the Express Bay, and this configuration is expected to become preferable to the flash adapters now being plugged directly into the server's PCIe bus. Nevertheless, PCIe flash adapters may continue to be used in ultra-high-performance or ultra-high-capacity applications that justify utilizing the wider x8 PCIe bus slots and/or the additional power available only within the server.

Figure 4: The new Express Bay fully supports the low latency of flash memory with the high performance of PCIe, while maintaining backwards compatibility with existing SAS and SATA HDDs and SSDs.

Because it is more expensive to provision an Express Bay than a standard drive bay, server vendors are likely to limit deployment of Express Bays until market demand for eSSDs increases. Early server configurations may support perhaps two or four Express Bays, with the remainder being standard bays. Server vendors may also offer some models with a high number of (or nothing but) Express Bays to target ultra-high-performance and ultra-high-capacity applications, especially those that require little or no rotating media storage.
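A simplified way to summarize the compatibility story of Figure 4 (and of the SATA Express discussion that follows) is as a small lookup table of which device types each bay accepts. The Python sketch below reflects a reading of this paper's text, not any vendor's implementation, and assumes the mid-plane and future RAID-controller support the paper describes.

```python
# Simplified compatibility map for the bay types described here and in
# Figures 4 and 5. This reflects a reading of the paper's text, not any
# vendor's implementation; SATA Express support in a standard bay assumes
# the future RAID-controller support the paper describes.

BAY_ACCEPTS = {
    # Express Bay: x4 PCIe via SFF-8639, plus full SAS/SATA backwards compatibility
    "Express Bay":        {"PCIe x4 eSSD", "SATA Express SSD", "SAS drive", "SATA drive"},
    # Standard bay: SAS/SATA, plus SATAe multiplexed onto the same lanes
    "Standard drive bay": {"SATA Express SSD", "SAS drive", "SATA drive"},
}

def can_host(bay: str, device: str) -> bool:
    """Return True if the given bay type can accept the given device type."""
    return device in BAY_ACCEPTS.get(bay, set())

print(can_host("Express Bay", "PCIe x4 eSSD"))             # True
print(can_host("Standard drive bay", "PCIe x4 eSSD"))      # False
print(can_host("Standard drive bay", "SATA Express SSD"))  # True
```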
SATA Express

The discussion thus far has focused on servers, but PCIe flash storage is also expected to become common in client devices beginning in 2013 with the advent of the new SATA Express (SATAe) standard. Like SATA before it, SATAe devices are expected to be adopted in the enterprise owing to the low cost that inevitably results from the economics of high-volume, client-focused technologies. The SATAe series of standards includes a flash-only M.2 form factor (previously called the next-generation form factor, or NGFF) for ultrabooks and netbooks, and a 2.5-inch disk-drive-compatible form factor for laptop and desktop PCs. SATAe standards are being developed by the Serial ATA International Organization (www.sata-io.org).

Initial SATAe devices will use the current AHCI protocol in order to leverage industry-standard SATA host drivers, but will quickly move to NVMe once standard NVMe drivers become incorporated into major operating systems.

The SATAe 2.5-inch form factor is expected to play a significant role in enterprise storage. It is designed to plug into either an Express Bay or a standard drive bay. In both cases, the PCIe signals are multiplexed atop the existing SAS/SATA lanes. As depicted in Figure 5, this enables either bay to accommodate a SATAe SSD, or a SAS or SATA drive. (Of course, the Express Bay can additionally accommodate x4 eSSDs, as previously discussed.) The configuration implies future RAID controller support for SATAe drives to supplement existing support for SAS and SATA drives. Note that although SATAe SSDs will outperform SATA SSDs, they will lag 12 Gb/s SAS SSD performance (two lanes of 12 Gb/s are faster than two lanes of 8 Gb/s PCIe 3.0).

Figure 5: Although designed for client PCs, new SATAe drives will be supported in a standard bay by multiplexing the PCIe protocols atop existing SAS/SATA lanes.

The SATAe M.2 form factor will also be adopted in the enterprise in situations where a client-class PCIe SSD is warranted, but the flexibility and/or external serviceability of a storage form factor is not required.

Summary

Flash memory, with its ability to bridge the large gap in I/O latency between main memory and HDDs, has exposed some limitations in existing storage standards. These standards have served the industry well, and SAS and SATA HDDs and SSDs will continue to be deployed in enterprise and cloud applications well into the foreseeable future. Indeed, the new standards being developed all accommodate today's existing and proven standards, making the integration of solid state storage seamless and evolutionary, not disruptive or revolutionary.

To take full advantage of flash memory's ultra-low latency, proprietary solutions that leverage the high performance of the PCIe bus have emerged in advance of the new storage standards. But while PCIe delivers the performance needed, it was never intended to be a storage architecture. In effect, the new storage standards extend the PCIe bus onto the server's externally accessible mid-plane, which was designed as a storage architecture.

Yogi Berra famously observed, "It's tough to make predictions, especially about the future." But because the new standards all preserve backwards compatibility, there is no need to predict a winner among them. In fact, all are likely to coexist, perhaps in perpetuity, because each is focused on specific and different needs in client and server storage. Fortunately, the Express Bay supports both new and legacy standards, as well as proprietary solutions, all concurrently. It is this freedom of choice, down to the level of an individual bay, that eliminates the need for the industry to choose only one as the standard.

For more information and sales office locations, please visit the LSI web sites at: lsi.com and lsi.com/contacts

North American Headquarters, San Jose, CA: +1.800.372.2447 (within U.S.), +1.408.954.3108 (outside U.S.)
LSI Europe Ltd., European Headquarters, United Kingdom: [+44] 1344.413200
LSI KK Headquarters, Tokyo, Japan: [+81] 3.5463.7165

LSI, the LSI & Design logo, Storage.Networking.Accelerated. and Nytro are trademarks or registered trademarks of LSI Corporation in the United States and/or other countries. All other brand or product names may be trademarks or registered trademarks of their respective companies. LSI Corporation reserves the right to make changes to any products and services herein at any time without notice. LSI does not assume any responsibility or liability arising out of the application or use of any product or service described herein, except as expressly agreed to in writing by LSI; nor does the purchase, lease, or use of a product or service from LSI convey a license under any patent rights, copyrights, trademark rights, or any other of the intellectual property rights of LSI or of third parties. Copyright 2013 by LSI Corporation. All rights reserved.