White Paper June 2013 12Gb/s SAS: Key Considerations For Your Next Storage Generation

Table of Contents
Introduction
SAS Foundations
SAS Advances
SAS vs. SATA
SAS vs. PCI Express
Next-Gen SAS Today and Tomorrow

Introduction

Enterprise storage continues to demand higher performance to meet rising workload levels and higher reliability to push down total cost of ownership. At the same time, IT departments remain budget-constrained. Storage tiering provides an efficient approach to meeting these three, occasionally conflicting, goals. While buyers often discuss factors such as media technology (solid state drive vs. hard drive) and cost when planning their tiering strategy, interfaces tend to get less attention. The battle between SATA, PCI Express, and SAS is already raging in data centers around the world, but which is appropriate for your organization? This paper explores the upcoming 12Gb/s SAS interface and compares it against the alternative interfaces vying for supremacy in storage today.

SAS Foundations

Since the SCSI standard first materialized in 1978, it has defined a variety of interfaces, commands, and protocols that govern how storage peripherals connect to a host computer. Over time, SCSI outgrew the limitations of its parallel interfaces while the command set continued to deepen. Many of SCSI's basic components, such as initiators, targets, and device IDs, carried forward into the serial interface era. The International Committee for Information Technology Standards, which dates back to 1960, maintains stewardship over multiple storage interfaces via its technical committees. These include T10 (SCSI and SAS), T11 (Fibre Channel), and T13 (AT Attachment).

The original SAS-1 specification was published in November 2001, a time when parallel SCSI designs still dominated enterprise storage. SCSI Ultra-320 (320 MB/s) wouldn't arrive until mid-2002, but vendors already knew that parallel SCSI was near the end of its run. Signal skew and other problems made further acceleration of parallel SCSI with usable cable lengths nearly impossible. Serial Attached SCSI (SAS) solved this problem, trading parallel signals for much higher serial data rates. SAS-1 retained the SCSI command set but reached speeds of 1.5Gb/s, soon followed by SAS-1.1 at 3Gb/s (300 MB/s). When LSI delivered the first SAS controller silicon in 2004, and SAS-1.1 target devices followed in 2005, parallel SCSI was effectively dead.

By 2008, SAS-2 devices (6Gb/s, or 600 MB/s) were reaching mainstream volumes in the enterprise marketplace. SAS-2 matches the interface speed promised by third-generation SATA, but, as we'll see, there are many key differences between the two, not the least of which is that SAS has a future roadmap and SATA does not. In 2013, SAS-3 (12Gb/s, or 1.2GB/s) devices will begin shipping.

Not surprisingly, the technical minutiae surrounding SAS are deep and diverse. However, IT generalists would do well to understand a handful of key points regarding SAS architecture (a short topology-enumeration sketch follows this list):

Initiators and targets. As with SATA, the SAS bus uses a point-to-point architecture wherein the device on one end is known as the initiator and the other as the target. While SAS devices can connect to controllers via a dedicated link, most implementations involve multiple devices connecting to the controller via one or more expanders.

Expanders. Whereas parallel SCSI topped out at 16 devices per channel, SAS can leverage a hub-like approach, allowing one end device to communicate with up to 65,535 other end devices through a single SAS PHY. Whether the SAS controller has the compute horsepower to accommodate that many devices is a different issue.

SAS domains and port identifiers. A SAS domain establishes a set of SAS devices able to intercommunicate by way of a service delivery network. Each device registers itself within the domain using a globally unique identification code (functionally similar to a network address) associated with its port.

SATA compatibility. SAS ports are physically and electrically similar to SATA ports. While the two technologies differ in their command sets (SCSI and ATA, respectively), flow control mechanisms, and other respects, they are compatible in several ways, including electrically. Thus, if a SAS controller supports the ATA command set, it can control both SAS and SATA drives. SATA Tunneling Protocol (STP) effectively allows two end devices to speak the full SATA protocol within a SAS connection. This enables some compelling tiering opportunities in situations where multiple LUNs or arrays are involved.

Connector range. Adopters can choose between internal and external SAS cables that vary in the number of devices supported as well as in the connector's physical dimensions. Mini-SAS cables are generally preferred in SAS-only environments, although they are not SATA-compatible.
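To make the initiator, expander, and end-device relationships above concrete, the following is a minimal Python sketch (not from this paper) that lists the SAS objects a Linux host exposes through sysfs. It assumes the kernel's SAS transport class has populated /sys/class/sas_device and /sys/class/sas_phy; the globally unique identifier each device registers with its port appears as the sas_address attribute. Paths and naming may vary by kernel and HBA driver.

# Minimal sketch: enumerate the SAS domain visible to a Linux host by reading
# the sysfs objects created by the kernel's SAS transport class. Paths assume
# a typical Linux layout; adjust for your distribution and controller driver.
import os

SAS_DEVICE_CLASS = "/sys/class/sas_device"   # end devices and expanders
SAS_PHY_CLASS = "/sys/class/sas_phy"         # individual PHYs

def read_attr(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

def list_sas_devices():
    if not os.path.isdir(SAS_DEVICE_CLASS):
        print("No SAS transport objects found (is a SAS HBA present?)")
        return
    for name in sorted(os.listdir(SAS_DEVICE_CLASS)):
        # Expanders appear as 'expander-H:N', end devices as 'end_device-H:N'.
        kind = "expander" if name.startswith("expander") else "end device"
        addr = read_attr(os.path.join(SAS_DEVICE_CLASS, name, "sas_address"))
        print(f"{name:<24} {kind:<10} SAS address {addr}")

def list_phys():
    if not os.path.isdir(SAS_PHY_CLASS):
        return
    for name in sorted(os.listdir(SAS_PHY_CLASS)):
        rate = read_attr(os.path.join(SAS_PHY_CLASS, name, "negotiated_linkrate"))
        print(f"{name:<24} negotiated link rate {rate}")

if __name__ == "__main__":
    list_sas_devices()
    list_phys()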
In addition, as noted above, SAS preserves the SCSI command set. This point cannot be overstated, because the vast majority of modern enterprise installations are built on storage solutions that use SCSI. By keeping this protocol consistent across successive generations of storage solutions, a large measure of consistency and compatibility carries forward, helping safeguard IT from a range of risks and issues.

SAS Advances

With fourth-generation SAS (SAS-3), we will again see a doubling of interface bandwidth, now reaching 12Gb/s. At such speeds, performance above 1 million IOPS becomes possible, albeit under very particular and largely impractical circumstances. The key point is that SAS-3 will provide ample headroom for storage devices to advance in speed.

In addition, 12Gb/s SAS-3 addresses signal quality through transmitter training. This gives one of the receiver device's key interconnects, its PHY, the ability to modify the settings of the transmitter device's PHY. Essentially, the receiver analyzes the attached transmitter's data signal and uses coded commands and messages to adjust the attached transmitter's settings. This is necessary to establish an error-free link between the two, which is no small challenge at 12Gb/s data rates.

Businesses stand to reap several rewards from 12Gb/s SAS, including the following (a worked throughput calculation follows this list):

Higher throughput. Today's fastest 6Gb/s SSDs consistently show sequential data benchmarks in the 500 to 550 MB/s range but no higher. This is because, after accounting for overhead, that is the maximum throughput the 6Gb/s interface allows. Without a 12Gb/s interface to break open this bottleneck, businesses will find their ability to improve drive performance crippled. SAS-3 will effectively double the maximum throughput of SAS-2.

No compromise on distance. Unlike top-end parallel SCSI interfaces, which curtailed maximum cable lengths, SAS-3 integrates additional signal conditioning measures, such as transmitter training, to enable this speed doubling over the same run lengths as SAS-2. Specifically, this means 10 meters for passive copper, 10 to 25 meters for active copper, and 100 meters for optical.

Superior value in SANs. For several years, Fibre Channel has been the interconnect technology of choice in storage area networks, largely owing to its excellent speed and use of the robust SCSI command set. At 12Gb/s, SAS-3 surpasses conventional 8Gb/s Fibre Channel on performance and falls only slightly shy of 16Gb/s Fibre Channel, which became available in 2011. With its switched fabric architecture, though, Fibre Channel remains prohibitively expensive for many would-be SAN adopters. SAS-3 offers competitive performance against 16Gb/s Fibre Channel, with the same robustness and reliability, but at significantly lower cost.

Investment protection. Many businesses already have sizable deployments of SAS-1 and SAS-2 storage devices, including controllers, RAID enclosures, and other equipment. A SAS-3 drive can drop into any existing SAS infrastructure and function seamlessly as a SAS-2 device. When the organization is ready to upgrade its storage infrastructure to the faster interface, it is free to do so and leverage the higher-end capabilities of its SAS-3 drives at that time. Meanwhile, the business can begin preparing for that future infrastructure with compatible drives today.
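The "Higher throughput" point above comes down to simple link arithmetic. The short Python sketch below (an illustration, not vendor data) estimates usable bandwidth from the raw line rate: SAS uses 8b/10b encoding, so a 6Gb/s link carries at most 600 MB/s of payload before protocol framing, which is why well-tuned 6Gb/s SSDs plateau around 550 MB/s, and doubling the line rate to 12Gb/s roughly doubles that ceiling. The framing-overhead percentage is an assumed, illustrative figure.

# Back-of-the-envelope link budget for SAS line rates (illustrative only).
# SAS uses 8b/10b encoding, so each byte of payload costs 10 bits on the wire.
# The protocol_overhead factor is a rough assumed allowance for frame headers,
# primitives, and acknowledgements, not a measured value.

def usable_mb_per_s(line_rate_gbps, protocol_overhead=0.08):
    raw_mb_per_s = line_rate_gbps * 1000 / 10        # 8b/10b: 10 bits per byte
    return raw_mb_per_s * (1 - protocol_overhead)

for rate in (3, 6, 12):
    print(f"{rate:>2} Gb/s SAS: ~{usable_mb_per_s(rate):.0f} MB/s usable "
          f"(raw {rate * 100} MB/s)")

# Typical output:
#  3 Gb/s SAS: ~276 MB/s usable (raw 300 MB/s)
#  6 Gb/s SAS: ~552 MB/s usable (raw 600 MB/s)
# 12 Gb/s SAS: ~1104 MB/s usable (raw 1200 MB/s)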

SAS vs. SATA

When examining the differences between SATA and SAS hard drives, many people gravitate toward the higher RPM rates and MTBF/AFR specs that SAS drives often carry. With solid state drives, there are likewise a few bullet points that most people focus on when trying to make a quick assessment of device quality. In either case, though, the most significant differences between the two technologies lie beneath the spec sheet's surface.

First and foremost, while the two interfaces may be plug-compatible, SATA lacks the SCSI command set. Historically, SCSI was the far more powerful interconnect, but ATA now supports such features as command queuing and direct memory access, both of which help ATA storage function more effectively and efficiently within the host system. SCSI remains the safer protocol for enterprises, though, as it supports more reliability features than ATA to better protect data integrity. Additionally, while ATA supports queuing to a depth of 32, most SAS drives extend queue depth to 128. (The protocol's limit is 65,536.) This gives SAS drives more commands to choose from so that they can execute them in the most efficient order possible, which in turn yields higher throughput. (A short sketch for checking a drive's reported queue depth follows this section.)

Storage fabrics require the ability to utilize resources in a mesh-like fashion, with drives accepting and processing requests from multiple sources. Because SATA lacks support for multiple initiators, the technology can't accommodate this. In contrast, SAS topologies can have multiple ports able to interface with both internal and external targets. Wide SAS ports can use up to eight physical links, and different initiator ports can connect to different domains. Many IT groups use this feature for redundancy and failover.

As noted earlier, SATA does not currently have a roadmap beyond 6Gb/s, whereas SAS is already mapped to 24Gb/s and may go considerably farther. No doubt, this will be a key factor in SATA's expected market share erosion. As the main interface choice for consumer and mainstream storage, SATA is not expected to vanish overnight (Figure 1). However, it seems reasonable to expect that SATA will continue to migrate away from the mid-range of the market and settle increasingly into the low end as higher-performing alternatives proliferate. While experts expect SATA to show negative growth going forward, PCI Express and SAS will see double-digit compounded annual growth for at least the next five years.

Figure 1: Enterprise SSD TAM forecast, by interface.
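As a practical illustration of the queue-depth difference discussed above, the sketch below reads the queue depth the Linux SCSI layer reports for each disk: SATA drives typically show 31 or 32 (the NCQ limit), while SAS drives commonly report larger values. It uses the standard /sys/block/<disk>/device/queue_depth attribute and should be treated as a rough check of what the kernel negotiated, not a definitive capability probe.

# Minimal sketch: report the queue depth Linux exposes for each SCSI/SATA/SAS
# disk via sysfs. The value is what the kernel negotiated, which may be lower
# than the device's protocol limit.
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "?"

for dev in sorted(glob.glob("/sys/block/sd*")):
    name = os.path.basename(dev)
    vendor = read(os.path.join(dev, "device", "vendor"))
    model = read(os.path.join(dev, "device", "model"))
    depth = read(os.path.join(dev, "device", "queue_depth"))
    print(f"{name}: {vendor} {model} queue_depth={depth}")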

SAS vs. PCI Express

In terms of performance and functionality, SAS and PCI Express (PCIe) have quite a bit in common. PCIe is the newest entrant into SSD storage, and products such as Intel's SSD 910 provide a representative glance at where the category now stands. The 910 joins three PCBs into a single PCIe 2.0 x8 device, yielding up to 800GB of capacity and up to 1.5GB/s of sequential writes in Intel's high performance mode, which ships disabled by default because it exceeds conventional PCI Express slot power specifications and requires special cooling.

PCI Express has some notable advantages. Perhaps topping the list, PCIe does away with the need for host bus adapters (HBAs). Putting storage right on the bus gives drives direct access to system memory and removes some overhead in the process. Depending on the configuration and application, this may or may not be an advantage. Many SAS controllers integrate onboard storage processors, and without these, the heavy lifting of RAID computation falls back on the CPU. Given that many servers never utilize more than 20% of their compute capacity, letting the host shoulder this load can make sense. In storage-centric applications, though, offloading computation to a storage processor can be critical. This is one reason why storage solutions continue to prefer SAS while consumer platforms, and some server platforms, show increasing interest in PCIe.

PCI Express is a physical interconnect and accompanying protocol, but it is not natively a storage protocol. Other protocols are needed to handle storage tasks across this bus. The two leading protocols for enterprise environments are NVM Express (NVMe) and SCSI over PCIe (SOP, also called SCSI Express). Unfortunately, the two approaches to PCIe storage are incompatible.

Of the two rival technologies, NVMe is ready now and may serve to popularize the PCI Express bus as a storage solution. The NVM Express specification is governed by the NVMe Workgroup, which comprises over 80 industry vendors, yet NVMe is not an industry standard and the NVMe interface remains largely untried in the enterprise world. In the other corner, SOP is based on the SCSI command set and is backed by T10 and the SCSI Trade Association. Enterprises around the world have long histories of qualifying their drivers and storage solutions for SCSI. All of that work vanishes under a new command set. That's not to say that NVMe won't work. Undoubtedly, it will work at least most of the time. The question is whether, in its opening years of availability, it will work all of the time, especially with the many custom enterprise applications and solutions in use. Companies concerned about this might do well to wait slightly longer for SOP.

That said, SOP isn't perfect yet, either. For example, SOP drives may pull requests from queues in a different order than the host placed them. While this can accelerate performance, in practice it can lead to data integrity errors unless the host is careful, and the behavior still needs to be proven out for those who want to carry forward with SCSI. Those wanting stable dual-port PCIe SSDs today will only find the feature in proprietary designs. Neither SOP nor NVMe has perfected its dual-port capabilities, although NVMe 1.1 has improved dual-port handling and SOP's dual-port implementation is now well defined. Additionally, PCIe in general still has some potential security concerns.
While direct memory access has some performance benefits, it also opens a possible security hole that HBA-tethered SAS drives don't share. Having multiple drives exposed to system memory can also present a larger drain on system resources than a single storage adapter. Furthermore, SAS drives working through HBAs leverage qualified drivers, not fledgling code from new PCIe drives. For that matter, the only driver code a system sees with SAS is the HBA's, regardless of the number, make, or model of drives attached to it.
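One way to see the driver-surface point in practice: every PCIe NVMe SSD appears to the operating system under its own controller driver instance, while any number of SAS or SATA drives behind an HBA sit under that single HBA driver. The hedged Python sketch below groups a Linux host's block devices by the controller driver that owns them, using only standard sysfs links; the traversal reflects common sysfs layouts and may need adjustment on other kernels.

# Hedged sketch: group block devices by the controller driver that owns them.
# NVMe SSDs each bind directly to the 'nvme' driver; SAS/SATA disks behind an
# HBA all resolve to that single HBA driver (e.g. mpt2sas, ahci).
import os
from collections import defaultdict

def controller_driver(block_name):
    path = os.path.realpath(os.path.join("/sys/block", block_name))
    drivers = []
    while path.startswith("/sys/devices"):
        link = os.path.join(path, "driver")
        if os.path.islink(link):
            drivers.append(os.path.basename(os.readlink(link)))
        path = os.path.dirname(path)
    # The last driver collected is the one nearest the PCI root, i.e. the
    # controller (HBA or NVMe) rather than the per-disk 'sd' driver.
    return drivers[-1] if drivers else "unknown"

by_driver = defaultdict(list)
for name in sorted(os.listdir("/sys/block")):
    by_driver[controller_driver(name)].append(name)

for driver, disks in sorted(by_driver.items()):
    print(f"{driver}: {', '.join(disks)}")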

Next-Gen SAS Today and Tomorrow

For a glimpse of where the SAS market is heading, look to HGST's forthcoming Ultrastar SSD800M 12Gb/s SAS SSD generation. The drive supports simultaneous dual-port transfers of up to 2400 MB/s (Instantaneous Burst Rate), with two discrete yet simultaneous data streams, one on each port. Multi-hour sustained throughput tests with Ultrastar SSD800M reveal the following benchmark results:

                          6Gb, Dual Port, 11W           12Gb, Dual Port, 11W
64KB Transfer Length      MH       MM       MR          MH       MM       MR
Seq Read (MB/sec)         1000     1000     1000        1200     1200     1200
Seq Write (MB/sec)        700      700      700         750      700      700

                          6Gb, Single Port, 11W         12Gb, Single Port, 11W
4KB QD64                  MH       MM       MR          MH       MM       MR
100% Read (IOPS)          110,000  110,000  110,000     145,000  145,000  145,000
100% Write (IOPS)         73,000   68,000   20,000      100,000  70,000   20,000
70/30 R/W (IOPS)          94,000   92,000   36,000      120,000  110,000  36,000

The MH, MM, and MR columns correspond to the high endurance, mainstream endurance, and read intensive models discussed below. Note that these numbers were achieved with Ultrastar SSD800M drives using MLC NAND rather than the historically faster but cost-prohibitive SLC alternative. Also important are the very close performance levels of the high endurance (HE), mainstream endurance (ME), and read intensive (RI) Ultrastar SSD800M models, rated at 25, 10, and 2 drive writes per day, respectively. These endurance levels correspond with capacity points of 800GB, 400GB, and 200GB. Buyers will be able to purchase according to endurance needs and/or capacity without sacrificing throughput (a worked endurance example follows this section). Expect 12Gb/s volumes to ramp in the second half of 2013 and the Ultrastar SSD800M to ship in volume with them. As quantities reach mainstream levels, HGST expects little to no price premium for 12Gb/s SAS compared to its 6Gb/s predecessor.

Also look for HGST to support the new SCSI Express Bay technology. This flexible architecture features backplanes able to accommodate SATA, SATA Express, 12Gb/s SAS, Multilink SAS, and PCIe SSDs, including NVMe, SOP, and even proprietary drives, via the SFF-8639 connector. Enterprise users still unsure about which way to jump on NVMe, SAS, and/or SOP should keep a close watch on SCSI Express Bay (Figure 2), as it will accommodate all three approaches and perhaps provide a smoother path to SOP in the future when that technology gains share. SCSI Express Bay provides 25W per device, just like a PCI Express slot, so 12Gb/s SAS drives will have plenty of power overhead with which to drive more NAND channels.

Figure 2: SCSI Express Bay ports.
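Drive-writes-per-day ratings translate into lifetime write budgets with simple arithmetic. The sketch below works the numbers for the three endurance classes named above, pairing them with the 800GB, 400GB, and 200GB capacity points mentioned in the text; the five-year service life is an assumed figure for illustration, not a specification from this paper.

# Illustrative endurance arithmetic: total data that may be written over the
# drive's service life = DWPD x capacity x days. The 5-year service life is an
# assumed figure for this example, not a specification from the paper.
ASSUMED_SERVICE_YEARS = 5

models = [
    # (model class, drive writes per day, capacity in GB)
    ("High endurance (HE)",       25, 800),
    ("Mainstream endurance (ME)", 10, 400),
    ("Read intensive (RI)",        2, 200),
]

for name, dwpd, capacity_gb in models:
    total_tb = dwpd * capacity_gb * 365 * ASSUMED_SERVICE_YEARS / 1000
    print(f"{name}: {dwpd} DWPD x {capacity_gb}GB "
          f"~= {total_tb:,.0f} TB written over {ASSUMED_SERVICE_YEARS} years")

# Typical output:
# High endurance (HE): 25 DWPD x 800GB ~= 36,500 TB written over 5 years
# Mainstream endurance (ME): 10 DWPD x 400GB ~= 7,300 TB written over 5 years
# Read intensive (RI): 2 DWPD x 200GB ~= 730 TB written over 5 years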

Whichever way the winds blow, to 12Gb/s, 24Gb/s, and beyond, HGST will be there. The company's legacy in storage reaches all the way back into the 1950s, when IBM invented and built the first disk drive. These six decades of experience prove themselves invaluable over and over to enterprise users needing support in designing and adapting storage solutions capable of scaling to their present and future needs. The road from older storage technologies to the cutting-edge solutions of 2013 and beyond is pocked with complexity and pitfalls for the unwary. HGST will help guide enterprise users through the transition with solutions that maximize return on investment and keep storage systems scaling with ever greater performance.

HGST trademarks are intended and authorized for use only in countries and jurisdictions in which HGST has obtained the rights to use, market and advertise the brand. Contact HGST for additional information. HGST shall not be liable to third parties for unauthorized use of this document or unauthorized use of its trademarks. References in this publication to HGST's products, programs, or services do not imply that HGST intends to make these available in all countries in which it operates. Product specifications provided are sample specifications and do not constitute a warranty. Information is true as of the date of publication and is subject to change. Actual specifications for unique part numbers may vary. Please visit the Support section of our website, www.hgst.com/support, for additional information on product specifications. Photographs may show design models.

© 2013 HGST, Inc. All rights reserved. HGST, Inc., 3403 Yerba Buena Road, San Jose, CA 95135 USA. Produced in the United States 7/13. Ultrastar is a trademark of HGST, Inc. WP12G13EN-01 www.hgst.com