SSD Performance Tips: Avoid The Write Cliff



An Inexpensive and Highly Effective Method to Keep SSD Performance at 100% Through Content Locality Caching

Introduction

Solid state disk (SSD) based on flash memory has emerged as a popular storage medium. Because SSD uses semiconductor chips, it provides great advantages over mechanical disk in random read speed, power consumption, size, and shock resistance. However, SSD has limitations in write performance and durability because of the physical properties of flash. These limitations are exacerbated when the SSD hits the so-called write cliff, the point at which SSD write performance slows down significantly. SSD hardware vendors attempt to delay the write cliff through overprovisioning, a costly solution. This paper discusses a simple, inexpensive, and non-disruptive solution that greatly minimizes the SSD write cliff effect.

Contents
What is the SSD Write Cliff? Why Does it Occur?
Overprovisioning: the SSD Hardware Vendors' Method to Postpone the Write Cliff
Optimize Data for SSD and Avoid the Write Cliff Without Overspending on Unused SSD
VeloBit Helps Avoid the Write Cliff While Improving Performance by 10x
Summary

What is the SSD Write Cliff? Why Does it Occur?

The SSD write cliff is the effect where SSD write performance drops off after all the free SSD (flash) cells have been written to once and the device cannot provide enough free blocks to keep up with write requests. The performance drop can be dramatic, often a 6-10x reduction in write speed. The less flash capacity a system has, the sooner this effect occurs and the more significant the drop-off.

Let's explore the design of flash-based SSD so that we better understand what's causing these limitations. A typical NAND flash memory chip consists of a number of blocks, each of which contains a number of pages (e.g., a block with 64 or 128 pages of 4KB or 8KB each). In NAND flash, blocks are the smallest erasable units whereas pages are the smallest programmable (i.e., writable) units. When a write operation is performed, the SSD first needs to find a free page for the write. If there is no free page available, an erase operation is necessary to make free pages. A read operation usually takes a few to tens of microseconds, whereas a write takes tens to hundreds of microseconds and an erase operation takes 1.5 to 3 milliseconds.

After an SSD is filled with data, unused ("stale") pages must be erased before new data can be written. Since a block is the smallest erasable unit in an SSD, in order to erase the stale pages the SSD controller must erase all pages in the block (stale or active). Therefore, before a block is erased, all pages with active data must be re-written into another block with free cells. So, writing a single page to SSD can result in multiple corresponding re-write operations. Moreover, the re-writes can trigger additional erase operations. This write amplification is a major cause of SSD wear, and is the primary cause of the SSD write cliff.

[Figure: SSD Write Cliff — write throughput (KB/sec) falls from 100% to roughly 12% as total GBs written grows]
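The cost of this relocate-then-erase cycle is easy to quantify in the worst case. As an illustrative calculation (assuming the 64-page block geometry mentioned above, and a block that is otherwise completely full of active data):

```python
# Worst-case write amplification for updating one page on NAND flash:
# rewriting ONE page that lives in a full 64-page block forces the
# controller to relocate the other 63 live pages and erase the block.
PAGES_PER_BLOCK = 64

host_pages_written = 1                          # what the application asked for
live_pages_relocated = PAGES_PER_BLOCK - 1      # copied out before the erase
nand_pages_programmed = host_pages_written + live_pages_relocated

write_amplification = nand_pages_programmed / host_pages_written
print(write_amplification)  # 64.0
```

Real controllers rarely hit this worst case because they pick victim blocks with mostly stale pages, but the calculation shows why erase granularity, not page granularity, governs sustained write cost.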

Overprovisioning: the SSD Hardware Vendors' Method to Postpone the Write Cliff

Garbage collection is a background process that runs when there are spare SSD controller cycles. Garbage collection makes use of quiet time to reclaim previously used SSD blocks and free pages for new writes. During garbage collection, the SSD controller proactively scans the contents of SSD blocks to identify stale pages, re-writes any active pages found in the same block, and then erases the block. Unfortunately, in the 24x7x365 enterprise data center there is rarely a quiet time for an SSD to perform its housekeeping duties without having to process critical data requests at the same time. When garbage collection cannot keep pace with new writes, the SSD hits the write cliff.

SSD hardware vendors delay the write cliff effect through overprovisioning. The basic principle of overprovisioning is to put more capacity (NAND) on the SSD than the drive actually reports; some high-end SSD drives overprovision by 25% or more. Even though there may be 400GB of physical capacity on the SSD, the SSD tells the operating system that it has only 320GB. The SSD controller uses the hidden capacity to scatter and stage data during housekeeping functions like garbage collection. Because there is extra (overprovisioned) capacity, it is less likely that the SSD will run out of free pages for new writes, and the likelihood of hitting the write cliff is reduced.

Overprovisioning enables SSD vendors to delay the write cliff effect, but this benefit comes with added expense for unused capacity. And overprovisioning does not guarantee that the SSD will never experience a write cliff when it is used in a 24/7 production environment.
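The interaction between garbage collection and spare capacity can be sketched with a toy log-structured flash translation layer. This is a deliberate simplification (real controllers use far more elaborate mapping and victim-selection policies, and must copy live pages out before erasing, whereas this toy model buffers them in memory), but it shows the core effect: less overprovisioning means victim blocks hold more live pages, so each erase drags more relocations along with it.

```python
import random

PAGES_PER_BLOCK = 8   # tiny geometry to keep the demo fast

def simulate(num_blocks, logical_pages, n_writes, seed=0):
    """Toy log-structured FTL with greedy garbage collection.

    Returns (write_amplification, erase_count). The gap between
    num_blocks * PAGES_PER_BLOCK and logical_pages acts as the
    overprovisioned spare area.
    """
    rng = random.Random(seed)
    STALE, FREE = "stale", "free"
    blocks = [[FREE] * PAGES_PER_BLOCK for _ in range(num_blocks)]
    where = {}                       # logical page -> (block, slot) of live copy
    free_blocks = list(range(num_blocks))
    active, cursor = free_blocks.pop(), 0
    nand_writes, erases = 0, 0

    def gc():
        nonlocal erases
        # Greedy victim: the sealed block with the most stale pages.
        sealed = [b for b in range(num_blocks)
                  if b != active and b not in free_blocks]
        victim = max(sealed, key=lambda b: blocks[b].count(STALE))
        live = [e for e in blocks[victim] if e not in (STALE, FREE)]
        blocks[victim] = [FREE] * PAGES_PER_BLOCK   # erase the whole block
        free_blocks.append(victim)
        erases += 1
        for lp in live:              # relocations cost extra NAND writes
            program(lp)

    def program(lp):
        nonlocal active, cursor, nand_writes
        while cursor == PAGES_PER_BLOCK:            # active block is sealed
            if free_blocks:
                active, cursor = free_blocks.pop(), 0
            else:
                gc()                                # frees a block, relocating live data
        blocks[active][cursor] = lp
        where[lp] = (active, cursor)
        cursor += 1
        nand_writes += 1

    for _ in range(n_writes):
        lp = rng.randrange(logical_pages)
        if lp in where:                             # overwrite: old copy goes stale
            b, s = where[lp]
            blocks[b][s] = STALE
        program(lp)
    return nand_writes / n_writes, erases

# ~10% spare area vs ~25% spare area (64 blocks x 8 pages = 512 physical pages).
wa_low_op, _ = simulate(num_blocks=64, logical_pages=460, n_writes=20000)
wa_high_op, _ = simulate(num_blocks=64, logical_pages=384, n_writes=20000)
print(round(wa_low_op, 2), round(wa_high_op, 2))   # more spare -> lower amplification
```

Running the same uniform-random write workload against both configurations shows write amplification well above 1.0 in each case, with the thinly provisioned device paying noticeably more NAND writes per host write.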

Optimize Data for SSD and Avoid the Write Cliff Without Overspending on Unused SSD

If the system were able to separate static from dynamic data (i.e., separate data that doesn't change from data that changes often), write amplification would be reduced and garbage collection would be simplified. Furthermore, if data were optimized to maximize reads and minimize random writes to SSD, both performance and reliability would be improved. This is where content locality caching helps.

Content locality caching optimizes the performance of solid state disk (SSD) and overcomes two of the main limitations of SSD: write speed and durability. The content locality caching algorithm stages data blocks based on the popularity of their contents. There are two types of data blocks stored in cache. Popular blocks are designated as reference blocks. Blocks that are sufficiently similar to reference blocks are defined as associated blocks, and are stored in a compressed form: a pointer to a reference block plus a delta that describes the differences between the two blocks.

When the application requests data and the requested data block is in cache, either a reference block is delivered directly or an associated block is reconstructed at line speed. Reconstructing an associated block from its compressed form consumes CPU cycles. However, since CPU speed is much faster than storage disk I/O speed, associated blocks are reconstructed and delivered much faster than if they were fetched from disk.
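The reference-plus-delta idea can be sketched as follows. The paper does not describe the actual delta encoding used; this illustration assumes a 4KB block size and uses a generic scheme (XOR against the reference block, then zlib compression of the mostly-zero result) purely to show why near-duplicate blocks shrink to tiny deltas.

```python
import zlib

BLOCK = 4096  # assumed block size for illustration

def make_delta(reference: bytes, associated: bytes) -> bytes:
    """Encode an associated block as a compressed XOR-delta vs. its reference."""
    assert len(reference) == len(associated) == BLOCK
    xored = bytes(a ^ b for a, b in zip(reference, associated))
    return zlib.compress(xored, 9)   # mostly zero bytes -> compresses very well

def reconstruct(reference: bytes, delta: bytes) -> bytes:
    """Rebuild the associated block from the reference and its delta."""
    xored = zlib.decompress(delta)
    return bytes(a ^ b for a, b in zip(reference, xored))

# Two blocks that differ only in a small region yield a tiny delta.
ref = bytes(range(256)) * 16          # 4 KB reference block
assoc = bytearray(ref)
assoc[100:116] = b"x" * 16            # 16 changed bytes
delta = make_delta(ref, bytes(assoc))

assert reconstruct(ref, delta) == bytes(assoc)
print(len(delta))                     # far smaller than the 4096-byte block
```

Only the small delta needs to be written to SSD; reconstruction on a read is pure CPU work, which is the trade the section above describes.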

Content locality caching optimizes data for SSD in three major ways that reduce the likelihood of a write cliff:

Separates data into read-only vs. actively changing data blocks. The content locality caching algorithm effectively separates data into read-only reference blocks and actively changing associated blocks. Since read-only data is stored in separate blocks from actively changing data, fewer active pages have to be re-written during garbage collection. This means that the garbage collection process can complete all housekeeping tasks faster and keep up with new incoming writes.

Converts SSD into a (primarily) read cache. Reference blocks are written once but read many times (as they are used to describe associated blocks). Associated blocks have a smaller footprint since they are stored in compressed form. Because a large portion of the SSD is used for read-only operations, an SSD used with content locality caching is faster than an SSD used for random reads and writes, and is less likely to hit the write cliff.

Simplifies data staging and maintenance operations on SSD. Compressed associated blocks are easier to scatter and write into existing open page slots. As a result, the SSD controller needs to perform fewer data re-organization tasks and fewer erase operations, and the SSD is less likely to experience the write cliff.

As a result, both SSD performance and the predictability of SSD performance are improved. Content locality caching can be used as a cost-effective complement or alternative to overprovisioning for eliminating the write cliff and improving SSD performance and reliability.
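The benefit of the first mechanism can be shown with a back-of-the-envelope calculation. The numbers here are assumptions for illustration (a 64-page block and a workload where half the data is static), and the segregated case is idealized, but they capture why hot/cold separation eases garbage collection:

```python
# Why separating static from dynamic data helps garbage collection:
# when hot (frequently overwritten) and cold (static) pages share a block,
# every victim block still holds live cold pages that must be relocated.
PAGES_PER_BLOCK = 64
cold_fraction = 0.5    # assumption: half the cached data never changes

# Mixed placement: a victim block's hot pages may all be stale, but its
# cold pages are live and must be copied out before every erase.
relocations_mixed = int(PAGES_PER_BLOCK * cold_fraction)

# Segregated placement (idealized): hot-only blocks go fully stale and
# erase cleanly, while cold-only blocks are never overwritten and so
# almost never need erasing at all.
relocations_segregated = 0

print(relocations_mixed, relocations_segregated)  # 32 0
```

Fewer forced relocations per erase is exactly the reduction in write amplification that keeps garbage collection ahead of incoming writes.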

VeloBit Helps Avoid the Write Cliff While Improving Performance by 10x

VeloBit is the first solution that leverages content locality caching to improve performance. VeloBit is plug-and-play caching software that uses SSD to create a transparent acceleration layer that eliminates I/O bottlenecks and improves performance by 10x. VeloBit employs a number of other proprietary features, such as flash drive optimization, horizontal architecture, and automatic configuration, to enhance performance while reducing cost and minimizing disruption and complexity. The software installs seamlessly in under 10 minutes and automatically tunes for the fastest application speed. VeloBit deploys and operates without disruption to applications, storage, or data.

[Figure: VeloBit Increases Performance by 10x — relative performance of SATA HDD, high-end PCIe SSD, VeloBit + high-end PCIe SSD, and VeloBit + commodity SSD]

Applications that can benefit from VeloBit include:
Financial analysis, research, simulation, or modeling applications
Databases, such as SQL Server, MySQL, Oracle, PostgreSQL, Cassandra, etc.
Enterprise applications such as enterprise search, Apache Solr (Lucene), etc.
E-commerce and social networking applications

VeloBit can also be a benefit when reducing storage latency or increasing storage IOPS will reduce the amount of hardware required to support a given workload. Specific examples include:
Distributed applications such as Hadoop

Summary

The SSD write cliff is the effect where SSD write performance drops by 6-10x after the SSD is full and the device cannot provide enough free blocks to keep up with write requests. SSD hardware vendors delay the write cliff effect through overprovisioning: reserving up to 25% or more of overall drive capacity for internal housekeeping operations. Overprovisioning delays the write cliff effect, but this benefit comes with added expense for unused capacity. Plus, overprovisioning does not guarantee that the SSD will not experience a write cliff when it is used in a 24/7 production environment.

Content locality caching offers a simple, inexpensive means to reduce the likelihood of the SSD write cliff. The content locality caching algorithm structures data into separate read-only blocks and actively changing blocks, converts most of the SSD into a read-only cache, and simplifies data staging and maintenance operations. As a result, the SSD runs faster, does not hit the write cliff, and the user experiences 10x acceleration in storage IOPS and application speed.

VeloBit is the industry's first solution that leverages content locality caching to improve performance while reducing cost and minimizing disruption. VeloBit is a software-only solution that installs seamlessly, configures automatically, and is transparent to applications and primary storage.

© 2011, VeloBit, Inc. All rights reserved. Information described herein is furnished for informational use only and is subject to change without notice. The only warranties for VeloBit products and services are set forth in the express warranty statements accompanying such products and services, and nothing herein should be construed as constituting an additional warranty. VeloBit, the VeloBit Logo, and all VeloBit product names and logos are trademarks or registered trademarks of VeloBit, Inc.