Is Solid State Storage Ready for Enterprise & Cloud Computing Systems? Are SSDs Ready for Enterprise Storage Systems?




Transcription:

Is Solid State Storage Ready for Enterprise & Cloud Computing Systems? Are SSDs Ready for Enterprise Storage Systems? Anil Vasudeva, President & Chief Analyst, IMEX Research. 2007-11 IMEX Research, All Rights Reserved; copying prohibited, please write to imex@imexresearch.com for authorization. Anil Vasudeva, President & Chief Analyst, imex@imexresearch.com, 408-268-0800

Agenda
(1) Enterprise Needs for Storage: data center infrastructure & the IT roadmap to clouds; storage usage in data centers & in cloud mega-centers; applications mapped by key workload & storage metrics; drivers for flash & SSDs
(2) State of SSDs: a new SCM filling price/performance gaps; advantage SSD vs. HDDs; choosing SLC vs. MLC SSDs for the enterprise
(3) SSD Challenges & Solutions: paranoia vs. reality - wear-out, life, retention, pricing; testing standards, ecosystem (flash, controllers, systems), $8.5B market
(4) SSD Market Segments by Interface: host-based vs. storage add-ons; PCIe vs. SAS/SATA/FC
(5) New Intelligent Controllers: meeting enterprise requirements - always-on 24x7, fast I/O (IOPS, BW), 5-year life, failure rates, MTBF, power
(6) Best Practices - best usage of SSDs in the enterprise: using storage-tiering SW to improve query response time with SSDs; which apps benefit most from SSDs (OLTP/BI/HPC/Web 2.0/data streaming)
(7) Key Take-Aways

Data Centers & Cloud Infrastructure
[Diagram: end-to-end topology linking public cloud centers (IaaS, PaaS, SaaS, vertical clouds) and the enterprise virtualized data center / on-premise cloud across the Internet core and optical edge (ISPs, cellular/wireless, cable/DSL home networks, VPNs) to supplier/partner sites, remote/branch offices and Web 2.0 social networks (Facebook, Twitter, YouTube). Within the data center: Tier-1 edge servers (caching, proxy, firewall, SSL, IDS, DNS, load balancing, web servers), Tier-2 application servers (HA, file/print, ERP, SCM, CRM) with directory, security, policy, management and middleware platforms, and Tier-3 database servers, middleware and data management attached to FC/IP SANs via Layer 2 and Layer 4-7 switches, 10GbE and FC storage.]
Source: IMEX Research - Cloud Infrastructure Report 2009-11

IT Industry's Journey - Roadmap (SIVAC)
- Standardization: standard IT infrastructure with volume economics - HW/system SW (servers, storage, networking devices, system software: OS, middleware & data-management SW)
- Virtualization: pools resources; provisions, optimizes and monitors; shuffles resources to optimize delivery of various business services
- Integration/Consolidation: integrate physical infrastructure/blades to meet CAPSIMS - Cost, Availability, Performance, Scalability, Inter-operability, Manageability & Security
- Automation: automatically maintains application SLAs (self-configuration, self-healing, self-accounting/chargeback, etc.)
- Cloudization: on-premises > private clouds > public clouds; DC to cloud-aware infrastructure & apps; cascade migration to service providers/public clouds
Source: IMEX Research - Cloud Infrastructure Report 2009-11

Market Segments by Applications
[Chart: application segments positioned by IOPS (10 to 1,000K, for a required response time in ms) vs. bandwidth (1 to 500 MB/sec). Transaction processing / OLTP / eCommerce (RAID 1, 5, 6) sits toward the IOPS-intensive end; data warehousing / business intelligence / OLAP (RAID 0, 3) needs both high IOPS and bandwidth; scientific computing / HPC and imaging / audio-video / Web 2.0 sit toward the bandwidth-intensive end. IOPS for a required response time = #Channels × (1/Latency).]
Source: IMEX Research - Cloud Infrastructure Report 2009-11

Data Storage Usage in Corporate Data Centers
[Chart: I/O access frequency vs. percent of corporate data. A small slice of corporate data (roughly 1-2%) - logs, journals, temp tables and hot tables held in SSD/cache - absorbs the large majority of I/O accesses (on the order of 95%); the next ~10% of data - tables, indices and hot data on disk arrays - accounts for most of the remainder; the bulk of corporate data (backup, archived and offsite-vaulted data on tape libraries) sees little I/O.]
Source: IMEX Research - Cloud Infrastructure Report 2009-11

Data Storage Usage in a Cloud Mega-Datacenter
[Chart: I/O access frequency vs. percent of corporate data, as in the corporate data-center view but with cloud tiers: SSD for logs, journals, temp tables and hot tables; FCoE/SAS arrays for tables, indices and hot data; and SATA-based cloud storage for backup, archived and offsite-vaulted data.]
Source: IMEX Research - Cloud Infrastructure Report 2009-11

Data Storage Usage Patterns
[Chart: data access vs. age of data, from 1 day through 1 week, 1 month, 2-6 months, 1 year and 2 years. Roughly 80% of IOPS land on the youngest data while roughly 80% of capacity (TB) holds older, rarely accessed data; SSDs suit the young, hot end where performance and data protection dominate, while scale, cost and data reduction dominate as data ages and storage grows.]
Source: IMEX Research - Cloud Infrastructure Report 2009-11

NAND Flash Enabling New Markets - Consumer to Enterprise
[Chart: NAND $/GB falling as component density (Gb) rises from 2002 to 2012, enabling 64-300 GB SSDs to move from consumer devices into the enterprise.]
Source: Samsung & IMEX Research

Enterprise SSD Trends - Prices
[Chart: $/GB price erosion from 1990 to 2015, spanning roughly 100K down to 0.01 $/GB. Source: IBM Journal of R&D]
Price erosion trends: driven by an explosion in the use of cost-sensitive handheld mobile devices, MLC NAND has seen explosive growth. On the enterprise side, clustered low-cost servers used in environments ranging from databases to BI to HPC, along with demand from cloud service providers, are driving overall growth of 107% CAGR in computing-SSD gigabytes shipped; SSD units are forecast to grow at an 86% CAGR during the 2010-14 time frame.
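As a rough illustration of what those growth rates imply, a minimal sketch of the compound-growth arithmetic (the 86% and 107% CAGR figures come from the slide; the arithmetic below is only an illustration):

```python
def cagr_multiplier(cagr: float, years: int) -> float:
    """Total growth multiplier implied by a compound annual growth rate."""
    return (1.0 + cagr) ** years

# Unit shipments at 86% CAGR over the 4 years from 2010 to 2014
print(cagr_multiplier(0.86, 4))   # ~11.97 -> roughly a 12x increase
# Shipped capacity (GB) at 107% CAGR over the same period
print(cagr_multiplier(1.07, 4))   # ~18.4x
```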

SSD Filling Price/Performance Gaps in Storage
[Chart: price ($/GB) vs. performance (I/O access latency), positioning tape, HDD, SATA SSD, PCIe SSD, NAND/NOR/SCM, DRAM/SDRAM and CPU.]
DRAM is getting faster (to feed faster CPUs) and larger (to feed multi-core processors and the many VMs created by virtualization). HDDs are becoming cheaper, not faster. SSDs are segmenting into PCIe SSD cache as a back end to DRAM, and SATA SSD as a front end to HDD.
Source: IMEX Research SSD Industry Report 2011

SCM: A New Storage Class Memory
SCM (Storage Class Memory) is solid-state memory filling the gap between DRAM and HDDs; the marketplace is segmenting SCMs into SATA- and PCIe-based SSDs. Key metrics required of storage class memories:
- Device: capacity (GB); cost ($/GB)
- Performance: latency (random/block R/W access, ms); bandwidth (R/W, GB/sec)
- Data reliability: BER (better than 1 in 10^17); write endurance (number of writes before death); data retention (years); MTBF (millions of hours)
- Power & density: power consumption (Watts); volumetric density (TB/cu.in.); power on/off time (sec)
- Environment resistance: shock/vibration (g-force); temperature/voltage extremes, 4-corner (°C, V); radiation (Rad)

Advantage: Enterprise SSDs vs. HDDs
[Chart: IOPS/GB of SSDs vs. HDDs (SSDs deliver on the order of hundreds of IOPS/GB vs. single digits for HDDs) and unit shipments (millions) of SSDs vs. HDDs, 2009-2014. Note: 2U storage rack; 2.5" HDD max capacity = 400 GB / 24 HDDs, de-stroked to 20%; 2.5" SSD max capacity = 800 GB / 36 SSDs.]
Source: IMEX Research SSD Industry Report 2011

Advantage: Enterprise SSDs vs. HDDs

HDD              Parameter                  SSD                Improvement (SSD vs. HDD)
-                Concurrent Access          -                  900%
ms-class         Data Access Time           -                  <1% (of HDD access time)
-                IOPS                       -                  475%
-                Read Speed                 -                  500%
1.0              MTBF (Million Hrs)*        2.1                110%
<5%              Failure Rate (AFR%)**      <=3%               40%
10^-14           UBER**                     10^-16             16%
11.4 GB/W        Power Efficiency           570 GB/W           5,000%
43.1 IOPS/W      Performance/Power          42,850 IOPS/W      100,000%
6.8 Watts        Idling Power               0.5 Watts          93%
10.1 Watts       Load Power                 0.9 Watts          91%
1.0 GB/in3       Storage Density            16 GB/in3          1,600%
4.2 IOPS/in3     Performance Density        1,250 IOPS/in3     30,000%
-                Shock/Vibration/Noise      -                  800% / 1,600% / 30 dB less
-                Weight                     -                  50%
-                Maintenance/Op. Time#      -                  50%

# Reduced: booting up, virus scan, defrag, RAID build, patching, data restoration. ** JEDEC manufacturer-required specs.
Source: IMEX Research SSD Industry Report 2011

SSD Challenges & Solutions: Industry-Standard Testing
JEDEC standard - manufacturer must meet requirements:

Class        Active Usage (Power On)    Retention (Power Off)    Failures (FFR)    UBER
Client       8 hrs/day (40°C)           1 yr (40°C)              <=3%              <10^-15
Enterprise   24 hrs/day (40°C)          3 mo (40°C)              <=3%              <10^-16

JEDEC standard - specify endurance, verify the spec via EVT:
- Rigorous verification of the spec using EVT (Endurance Verification Test); JEDEC supplies the workload, and data is continuously read and verified.
- The endurance spec is the maximum TB written to the SSD over which the device meets spec; the SSD must meet <=3% functional failures and UBER <1 in 10^16.
- EVT requires high/low-temperature stressing and represents a lifetime's worth of stress testing, so it can be trusted: accelerated (high-temperature) and unaccelerated room-temperature retention tests.
- The manufacturer provides a gauge informing the user of the percentage of endurance life used up.
JEDEC standards testing and verification of endurance, failures and UBER under accelerated life testing assures the use of SSDs in enterprises.
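The endurance rating (maximum TB written, TBW) translates directly into how hard a drive can be written over its service life. A minimal sketch of that arithmetic (the capacity and TBW values are illustrative assumptions; the 5-year life follows the deck's enterprise assumption):

```python
def drive_writes_per_day(tbw_tb: float, capacity_tb: float, years: float = 5.0) -> float:
    """Full-drive writes per day sustainable within a TBW endurance rating."""
    days = years * 365
    return tbw_tb / (capacity_tb * days)

# Hypothetical 0.8 TB enterprise SSD rated for 2,000 TB written (2 PB) over 5 years
print(drive_writes_per_day(2000, 0.8))   # ~1.37 full-drive writes per day
```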

SSD Drivers & Challenges: MLC vs. SLC
Drivers:
- Raw media: no moving parts; predictable wear-out; post-infant-mortality catastrophic device failures are rare.
- Media performance: performance is excellent (vs. HDDs); high performance per watt (IOPS/Watt); low pin count with a shared command/data bus gives good balance.
- Controller: transparently converts NAND flash memory into a storage device; manages the high bit error rate; improves endurance to sustain a 5-year life cycle.
Challenges:
- Reliability: the higher density of MLC increases the bit error rate, and the bit error rate increases further with wear; program- and read-disturb prevention; partial-page programming; data retention is poor at high temperature and wear.
- Media performance: NAND is not really a random-access device; it is block oriented, with slow effective write and erase/transfer/program latency and imbalanced R/W access speeds; NAND performance changes with wear; some controllers do read/erase/modify/write, others use inefficient garbage collection.
- Controller design: interconnect; number of NAND flash chips (die); number of buses (real/pipelined); data protection (internal/external RAID, DIF, ECC); write-mitigation techniques; effective block (LBA/sector) size and write amplification; garbage collection (GC) efficiency; buffer capacity & management; metadata processing.
Source: IMEX Research SSD Industry Report 2011

MLC vs. SLC SSDs - Price Erosion
[Chart: relative price erosion ($/GB) of SLC vs. MLC NAND, 2004 through 2013e.]

SSD Challenges & Solutions: Endurance/Wear-Out
Reason for the endurance limitation in SSDs - anatomy of a P/E cycle (a round trip through the tunnel oxide):
Fundamentally, a NAND flash memory cell is a MOS transistor with a floating gate that can permanently store charge. Programming puts electrons onto the floating gate; erase takes them off. One program/erase (P/E) cycle is one round trip by the electrons through the cell's tunnel oxide. These back-and-forth trips gradually damage the tunnel oxide over hundreds of thousands of trips (P/E cycles), resulting in limited endurance (wear-out by P/E cycles) in SSDs. [Figure: floating-gate memory-cell cross-sections showing the tunnel oxide in the programmed and erased states.]
JEDEC standards testing and verification of endurance, failures and UBER under accelerated life testing assures the use of SSDs in enterprises.

SSD Challenges & Solutions: Endurance (Wear-Out)
Challenge - bad block management: [Chart: % of blocks failing vs. P/E cycles (1K to 1,000K), showing the range of best/worst in the industry.] The ability to erase slows down after a number of P/E cycles. If a NAND memory block fails to erase, the controller is notified and another block from the spare pool is used instead. There is no loss of data, so a failed NAND block does not pose a problem; eventually, however, devices run out of spares. The point where the percentage of failing blocks exceeds the number of spares is the most basic endurance limit.
Solution - over-provisioning: endurance can vary depending on workload and should match the usage needs of the system to minimize cost. Example: an SSD used as a cache for 10 HDDs; 2 PB of write endurance will support this (1.1 TB of writes/day for 5 years). Over-provisioning by increasing spare blocks decreases user capacity but allows the SSD to complete random writes more efficiently, improving random-write endurance and performance. Methods to implement it include setting the max LBA to limit visible drive capacity, creating smaller RAID logical drives, or creating smaller partitions.
Source: Intel IDF 2010 & IMEX Research SSD Industry Report 2011
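A minimal sketch of the over-provisioning arithmetic implied above (the raw- and visible-capacity figures are illustrative assumptions; the 1.1 TB/day and 5-year figures come from the slide):

```python
def overprovision_pct(raw_gb: float, visible_gb: float) -> float:
    """Spare (over-provisioned) capacity as a percentage of visible capacity."""
    return (raw_gb - visible_gb) / visible_gb * 100

# Hypothetical drive: 512 GB of raw NAND exposed as 400 GB (e.g. by capping the max LBA)
print(overprovision_pct(512, 400))   # 28.0 -> 28% over-provisioning

# Lifetime writes implied by the slide's 1.1 TB/day over a 5-year service life
print(1.1 * 365 * 5)                 # ~2007.5 TB, i.e. roughly 2 PB
```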

SSD Challenges & Solutions: Endurance (UBER)
Challenge - uncorrectable bit error management: [Chart: raw bit error rate (10^-9 to 10^-5) vs. P/E cycles, showing the range of best/worst in the industry.] A small fraction of written bits gets flipped (similar to HDDs); this is the flash media's raw bit error rate (RBER), which starts at roughly 1 in 10^8 (one error per 100 million bits read) and gradually increases with P/E cycles.
Solution - ECC: ECC is used to correct/reduce the RBER. Any error rate beyond the ECC's correction capability is the uncorrectable bit error rate (UBER); once the UBER domain is reached, user data can become corrupted, so UBER must be kept low. The JEDEC spec is 1 error in 10^16 bits read; using modern ECC-based controllers, vendors are providing specs of 1 in 10^17 UBER. The point where UBER reaches the spec is another endurance limit.
Source: Intel IDF 2010 & IMEX Research SSD Industry Report 2011
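A hedged sketch of how ECC turns a high RBER into a very low UBER, assuming a simple independent (binomial) bit-error model; the 4,096-bit sector, 55-bit correction capability (echoing the 55b/512B BCH ECC mentioned later in this deck) and end-of-life RBER of 1e-5 are illustrative assumptions:

```python
from math import comb

def uncorrectable_ber(rber: float, codeword_bits: int, correctable_bits: int) -> float:
    """Approximate UBER: probability of more errors in a codeword than the ECC can
    correct, expressed per bit read (independent-error binomial model)."""
    n, t, p = codeword_bits, correctable_bits, rber
    # First term just past the correction limit: C(n, t+1) * p^(t+1) * (1-p)^(n-t-1)
    term = comb(n, t + 1) * p ** (t + 1) * (1 - p) ** (n - t - 1)
    total, k = 0.0, t + 1
    while term > 0 and k < n:
        total += term
        # Ratio of successive binomial terms avoids recomputing huge factorials
        term *= (n - k) / (k + 1) * p / (1 - p)
        k += 1
    return total / n

# 512-byte sector (4,096 data bits), ECC correcting 55 bits, RBER = 1e-5
print(uncorrectable_ber(1e-5, 4096, 55))   # vanishingly small, far below 1e-16
```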

SSD Challenges & Solutions: Data Retention
Challenge - data retention: [Chart: raw bit error rate vs. retention time (hours) at various wear levels.] After P/E cycles, RBER increases with time; ECC corrects bit flips, but only to a certain extent.
Solution - data-retention firmware: balance SSD data retention against endurance; lower data retention allows higher endurance, and powered-on firmware can allow higher retention. The industry therefore lives with a required UBER and a required retention time, which in turn determine the safe number of P/E cycles the device may be exercised to before reaching that UBER and retention time. This is yet another endurance limit, set by retention.
Source: Intel IDF 2010 & IMEX Research SSD Industry Report 2011

SSD Challenges & Solutions: Functional Failure Defects
Challenge - electronic component defects: [Chart: role of defects in SSD reliability (failure rate over time); wafer process defects 61%, design-related & test 10%, EOS/ESD 10%, process errors 9%, handling 5%, assembly & test 5%.] All ICs have defects that cause failures; in flash, early-life failures are caused by such defects. Defects can cause functional failures, not just data loss. Most NAND defect failures are triggered by P/E cycling, as high program/erase voltages cause latent defects to short. The point where the percentage failing from defects reaches unacceptable limits is another boundary for endurance.
Solution - burn-in and error-avoidance algorithms:
- Vigorous SSD burn-in and testing to remove infant mortality.
- Tune NAND T_read to improve read disturbs and T_PROG to reduce program disturbs.
- SSD error-avoidance algorithms: ECC ASICs; wear leveling to avoid hot spots; an efficient write amplification factor (WAF), where WAF = data written to NAND / data written by the host to the SSD. WAF depends on (a) the SSD firmware algorithms built into the SSD, (b) the amount of over-provisioning, and (c) the application workload.
Source: Intel IDF 2010 & IMEX Research SSD Industry Report 2011
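A minimal sketch of the write-amplification bookkeeping defined above (the counter values are illustrative; real drives expose equivalents through vendor-specific SMART-style attributes):

```python
def write_amplification_factor(nand_bytes_written: float, host_bytes_written: float) -> float:
    """WAF = data written to NAND / data written by the host to the SSD."""
    return nand_bytes_written / host_bytes_written

# Illustrative counters: the host wrote 1.0 TB, but garbage collection and metadata
# updates caused 1.8 TB of actual NAND program traffic
waf = write_amplification_factor(1.8e12, 1.0e12)
print(waf)                         # 1.8

# The endurance seen by the host shrinks by the same factor
nand_endurance_tb = 3000           # hypothetical raw NAND TBW budget
print(nand_endurance_tb / waf)     # ~1,667 TB of host writes before wear-out
```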

SSD Challenges & Solutions: Industry-Standard Testing
JEDEC requirements - specify endurance, verify the spec via EVT:

Class        Active Usage (Power On)    Retention (Power Off)    FFR      UBER
Client       8 hrs/day (40°C)           1 yr (40°C)              <=3%     <10^-15
Enterprise   24 hrs/day (40°C)          3 mo (40°C)              <=3%     <10^-16

- Rigorous verification of the spec using EVT (Endurance Verification Test); the endurance spec is the maximum TB written to the SSD over which the device meets spec.
- JEDEC supplies the workload; data is continuously read and verified.
- A lifetime's worth of stress testing, so it can be trusted: accelerated high-temperature and unaccelerated room-temperature retention tests.
- The manufacturer provides a gauge informing the user of the percentage of endurance life used up.

SSD Challenges & Solutions: Goals & Best Practices
[Chart: % of drives failing (AFR%) vs. lifetime (TBW), showing the range of extensive HDD tests done by industry against the JEDEC SSD standard of <=3%.]
NAND flash will have finite endurance limits due to limitations imposed by uncorrectable bit error rates, functional failures, and data retention time. Goal - embody technologies to improve life (years of use):
- Push terabytes written (the endurance limit) beyond the product life required of SSD products.
- Push the defect rate/AFR down through burn-ins, error-avoidance algorithms and best practices, so that total defect and wear-out issues combined are <=3%.
- Target data errors to be <1 in 10^16 for enterprise SSDs, for both TBW and retention specs.
With advanced error avoidance (ECC, wear leveling, etc.) and capacity over-provisioning techniques, controllers are successfully creating endurance for over a 5-year product life cycle in enterprise SSDs.
Source: Intel IDF 2010 & IMEX Research SSD Industry Report 2011

SSD Challenges & Solutions: Goals & Best Practices
[Chart: % of drives failing (AFR%) vs. years of use, contrasting a "paranoia curve" with real-world results and the JEDEC SSD standard of <=3%.]
Concerned about SSD adoption in your enterprise? Be aware of the tools and best practices below, and you should be OK.
Best practices:
- Leverage error-avoidance algorithms and verification-testing best practices to keep the total functional failure rate <=3% (defects and wear-out issues combined).
- In practice, endurance ratings are likely to be significantly higher than typical use, so data errors and failures will be even lower.
- Capacity over-provisioning provides large increases in random performance and endurance.
- Select SSDs based on confirmed EVT ratings; use MLC within the requirements of its endurance limits.
Using best-of-breed controllers to achieve <=3% AFR, together with JEDEC endurance verification testing, should allow enterprise-capable SSDs.
Source: Intel IDF 2010 & IMEX Research SSD Industry Report 2011

SCM - New Storage Class Memory
[Chart: price ($/GB) vs. performance (I/O access latency), positioning tape, HDD, SATA SSD, PCIe SSD, NAND/NOR/SCM, DRAM/SDRAM and CPU.]
HDDs are becoming cheaper, not faster. DRAM is getting faster (to feed faster CPUs) and larger (to feed multi-core processors and the many VMs created by virtualization). SSDs are segmenting into PCIe SSD cache as a back end to DRAM, and SATA SSD as a front end to HDD.
Source: IMEX Research SSD Industry Report 2011

WW Enterprise SSD Market Opportunity
[Chart: WW enterprise SSD 5-year cumulative market opportunity (2010-14), $B, plotted against 2010-14 CAGR % by interface - SATA, SAS, PCIe and FC; total market size $8.6B (5-year cumulative).]
Source: IMEX Research SSD Industry Report 2011

PCIe-Based SSD Storage
Target market: servers, with the SSD as back-end storage to DRAM as the front end. Up to 36 PCIe lanes available; 3/6 GB/s performance (PCIe Gen2/Gen3 x8); latency in microseconds; lower cost (by eliminating the HBA).
[Diagram: CPU cores and DRAM connected through the northbridge to PCIe switches fanning out to arrays of PCIe SSDs.]
PCIe SSDs' attributes of high IOPS, high bandwidth, low latency and lower cost are a good match for caching.

Hybrid SSD Storage
Hybrid storage (SAS or SATA SSD + HDD); target market: external storage systems. It combines the best features of SSDs - outstanding read performance (latency, IOPS) and throughput (MB/s) - with the extremely low cost of HDDs, giving rise to a new class of storage: hybrid storage devices. The SSD acts as a front end to the HDD, with the controller emulating the SSD as an HDD; adaptive memory sends high-IOPS requirements to the SSD while capacity-hungry applications are sent to the HDD. A simple add-on to SATA HDD storage; SAS 6 Gb/s versions have been announced by multiple vendors.

Hybrid SSD Storage - Performance & TCO
[Charts: SAN TCO using HDD-only vs. hybrid storage - the cost components (FC HDDs, SATA HDDs, SSDs, rack space, power/cooling) shrink sharply when an FC-HDD-only SAN is replaced by an HDD/SSD hybrid; SAN performance improvements using SSD - roughly 475% IOPS improvement and roughly 800% $/IOPS improvement for the SSD + SATA-HDD configuration vs. FC-HDD only.]
Source: IMEX Research SSD Industry Report 2011

New Intelligent Controllers: SSD Storage Architecture
[Diagram: host interface connector feeding an interface controller, flash controller and RAID controller driving multiple channels of flash/SSD arrays, with DRAM cache and DRAM controller, encryption engine, power management, big capacitor and voltage regulator on the PCB/chassis.]
1. Interface controller: signaling management; interprets write/read/status commands; native command queuing; moves data to and from the host.
2. Flash controller: signaling management; formats and interprets write/read/status commands for the flash arrays and moves data; defect mapping/bad-block management, wear leveling, physical-to-logical translation, ECC.
3. RAID controller: RAID type and read/write/parity manipulation.
4. Channels: multiple channels to increase speed between the NAND flash arrays and the flash controller.
5. DRAM: increases performance using a fast DRAM cache buffer.
6. Power-failure protection: using a big capacitor.
7. Power management: power/performance balancing, sleep-mode management.
8. Encryption: security-scheme implementation and manipulation.

New Intelligent Controllers: Managing NAND Media in NextGen SSDs
Leveraging the long history of managing HDDs' imperfect media and high error rates: characterizing the quality and capabilities of the media, and allocating data based on media quality.
- HDD media: raw error rates around 10^-4, brought to ~10^-16 through adaptive signal processing for media read/write/erase, advanced bit detection and error correction codes, and defect management.
- Flash media: raw error rates around 10^-4, brought to ~10^-17 through adaptive signal conditioning for flash media, automatic bit detection and error correction codes, and defect management.
Endurance for a long life cycle: reliability through RAID of flash elements; adaptive digital signal processing technology that dynamically adjusts the read/write characteristics of each chip and tunes those adjustments over the life of the media; deploying enhanced error correction codes (ECC - PRML).
Source: IMEX Research SSD Industry Report 2011

New Intelligent Controllers: Meeting Enterprise Requirements
Enterprise requirements: always-on 24x7 operation (reliability and performance supersede cost), fast I/O performance for business-critical applications, and 5-year life-cycle endurance for mission-critical applications. State-of-the-art controllers and firmware make this possible through robust ECC, internal RAID, wear leveling (to reduce hot spots), spare capacity, write-amplification avoidance, garbage-collection efficiency, wear-out prediction management, etc.
[Diagram (source: SandForce): 2nd-generation intelligent controller with SATA3 6 Gb/s interface (32 NCQ; PHY/Link/Transport layers), optimized-write block management/wear leveling, AES 256/128 encryption, garbage collection, TCG compliance, read/disturb management, 55b/512B BCH ECC, RAID without standard parity overhead, RS232/GPIO/I2C/JTAG interfaces, and a NAND flash interface (Toggle, ONFI-2) supporting SLC/MLC/eMLC over 8 channels/16 byte-lanes, 3x/2x-nm flash, up to 512 GB.]
New-generation controllers allow SSDs to meet enterprise-class availability and performance over a 5-year life, with scalability, auto-configuration and automatic data tiering.

New Intelligent Controllers: Managing Endurance in NextGen SSDs
To overcome NAND's endurance shortfalls due to the limited write/erase cycles per block, intelligent controllers manage NAND SSDs using:
- ECC techniques: correct and guard against bit failures, just as in HDDs.
- Wear-leveling algorithms: write data so that it is evenly distributed over all available cells, preventing any block of cells from being overused and failing early (see the sketch after this list).
- Over-provisioned capacity: extra spare raw blocks are designed in as headroom to replace blocks that get overused or go bad, and to give the wear-leveling algorithms enough room to enhance the reliability of the device over its life cycle. A typical SSD of a given specified capacity actually contains 20-25% extra raw capacity to meet these criteria.
With advanced error avoidance (ECC, wear leveling, etc.) and capacity over-provisioning techniques, controllers are successfully creating endurance for over a 5-year product life cycle in enterprise SSDs.
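A toy sketch of the wear-leveling idea: always steer the next write to the least-worn free block. Real controllers also weigh static-data migration, garbage collection and bad-block state; everything below is purely illustrative.

```python
import heapq

class WearLeveler:
    """Toy allocator that always hands out the least-erased free block."""
    def __init__(self, num_blocks: int):
        # Min-heap of (erase_count, block_id); the least-worn block sits on top
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)

    def allocate(self) -> tuple[int, int]:
        erases, block = heapq.heappop(self.free)
        return block, erases

    def release(self, block: int, erases: int) -> None:
        # After erase and reuse, the block returns with its wear count bumped
        heapq.heappush(self.free, (erases + 1, block))

wl = WearLeveler(4)
for _ in range(10):                  # 10 write/erase cycles spread across 4 blocks
    blk, erases = wl.allocate()
    wl.release(blk, erases)
print(sorted(wl.free))               # erase counts end up within 1 of each other
```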

New Intelligent Controllers: Performance in NextGen SSDs
Factors impacting performance that must be managed:
- Hardware: CPU, interface, chipset
- System software: OS, application, drivers, caches, SSD-specific commands (TRIM, Purge)
- Device: flash generation, parallelism, caching strategy, wear leveling, garbage collection, warranty strategy
- Write history: TBW, spares
- Workload: random vs. sequential, R/W mix, queues, threads
- Pre-conditioning: random, sequential, amount
- Performance state: burst/fresh-out-of-box (FOB) vs. steady state after x P/E cycles
By using new-generation controllers, the performance of MLC SSDs is starting to match that of some SLC SSDs, with additional gains from interleaved memory banks, caching and other techniques.

Apps Best Suited for SSDs: OLTP - Improving Query Response Time
[Chart, conceptual only (not to scale): query response time (ms) vs. IOPS (or number of concurrent users) for four configurations - 14 HDDs ($$), 112 short-stroked HDDs ($$$$$$$$), a hybrid of 36 HDDs + SSDs ($$$$), and 12 SSDs ($$$).]
The cost-effective way to improve query response time for a given number of users, or to service an increased number of users at a given response time, is with SSDs or a hybrid (SSD + HDD) approach, particularly for database and online transaction processing applications.
Source: IMEX Research SSD Industry Report 2011
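A back-of-the-envelope sketch of why the drive counts above diverge so sharply; the per-device IOPS figures below are generic illustrative assumptions, not numbers from the slide:

```python
import math

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """Devices required to hit a target IOPS, ignoring RAID and controller overheads."""
    return math.ceil(target_iops / iops_per_drive)

target = 30_000                                  # concurrent OLTP users ~ IOPS target
print(drives_needed(target, 180))     # ~167 15K-rpm HDDs at ~180 IOPS each
print(drives_needed(target, 350))     # ~86 short-stroked HDDs at ~350 IOPS each
print(drives_needed(target, 30_000))  # 1 enterprise SSD at tens of thousands of IOPS
```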

AutoSmart Storage-Tiering SW: Workload I/O Monitoring & Smart Migration
[Diagram: storage-tiered virtualization at the LBA/sub-LUN level - a logical volume mapped onto physical SSD arrays for hot data and HDD arrays for cold data.]
LBA monitoring and tiered placement: every workload has a unique I/O access signature. Historical performance data for a LUN can identify performance skews and hot-data regions by LBA. Smart tiering identifies hot LBA regions and non-disruptively migrates hot data from HDD to SSD. Typically 4-8% of data becomes a migration candidate, and moving it to SSDs can reduce response time by roughly 65% at peak loads.
Source: IBM & IMEX Research SSD Industry Report 2011
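A minimal sketch of the LBA-region heat accounting described above; the region size, monitoring window and top-fraction threshold are illustrative assumptions, and real tiering software adds hysteresis, migration scheduling and capacity limits:

```python
from collections import Counter

REGION_SIZE = 1 << 21          # assume 2 MiB extents as the tiering granularity

def hot_regions(io_trace, top_fraction=0.05):
    """Return the most-accessed LBA regions: the candidates to migrate to SSD."""
    heat = Counter(lba // REGION_SIZE for lba, _ in io_trace)
    n = max(1, int(len(heat) * top_fraction))
    return [region for region, _ in heat.most_common(n)]

# io_trace: (lba, is_write) pairs gathered over a monitoring window
trace = [(8_400_000, False), (8_400_512, True), (8_400_128, False), (95_000_000, False)]
print(hot_regions(trace))      # the region containing the repeatedly accessed LBAs
```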

AutoSmart Storage-Tiering SW: Enhancing Database Throughput
DB throughput optimization: every workload has a unique I/O access signature, and historical behavior identifies hot database objects to be smartly placed in the right tier. Results: scalable throughput improvement of up to ~300%, and substantial improvement in I/O-bound transaction response times of 45-75%. Using automated reallocation of hot-spot data to SSDs (typically 5-10% of total data), significant performance gains are achieved: response-time reductions of around 70+% or IOPS increases of ~200% for I/O-intensive loads. Verticals benefitting from online transactions: airline reservations, investment banking, Wall Street stock transactions, financial institutions, hedge funds, etc., plus low-latency HPC clustered systems.
Source: IBM & IMEX Research SSD Industry Report 2011

Applications Best Suited for SSDs
[Chart: applications most benefitting from SSD use - DB/OLTP, e-mail/collaboration, HPC, BI/DW, ERP/SCM/CRM, Web 2.0 and office apps - with benefit percentages ranging from 20% to 43%.]
Apps and the impact of SSD usage:
- Databases: the key elements are commit files - logs, redo, undo, tempdb.
- Structured data: structured data access is an excellent fit for SSD (exception: large, growing table spaces).
- Unstructured data: unstructured data access is a poor fit for SSD (exception: small, non-growing, tagged files).
- OS images: boot-from-flash, page-to-DRAM.
Typical cases - impact on applications:
- Financials/ATM transactions: improvements of 22% in batch window, 50% in application response time, 50% in application I/O rate.
- Messaging applications: cost savings from consolidating 200+ FC HDDs into only 16 SSDs.
Source: IMEX Research SSD Industry Report 2011

Apps Best Suited for SSDs: DB in Memory for Data Warehouse/BI
[Charts: scale-up / scale-in / scale-out; large-database size growth (TB) by market segment, OLTP vs. DW/BI, 2009-2014; and storage usage (TB, including VTL) vs. database capacity for database sizes of 1-2 TB, 2-5 TB, 5-10 TB and >10 TB.]
Source: IMEX Research Cloud Infrastructure Report 2009-11

Apps Best Suited for SSDs: Collaboration Using Virtual Desktops - Mitigating Boot Storms
A boot storm is created by simultaneous logins at the start of the office day. Over-provisioning SAN capacity just for that short morning burst is expensive, with the extra capacity sitting almost idle the rest of the day. Three potential solutions, with pros and cons:
1. New DAS storage (popular with desktop SW vendors). Pros: lower cost. Cons: additional cost for dedicated storage; existing SAN storage goes to waste; data protection and management challenges.
2. Virtual desktop images on host SSD. Pros: ideal for a read-intensive application; instant-on/immediate boot response; images stored with high efficiency. Cons: uses the most expensive storage; the high speed is needed for just an hour; not a simple shoe-in with existing storage.
3. SAN with SSD accelerator (solid-state cache to accelerate the SAN). Pros: possibly the best way to solve the problem; a small SSD optimized for the image store; no change to existing data protection. Cons: not feasible without an existing SAN; SSD-in-SAN integration is still a challenge.
A perfect balance of access and storage is achieved by integrating SATA HDDs with SSDs and using automatic tiering solutions.
Source: IMEX Research SSD Industry Report 2011

Apps Best Suited for SSDs: HPC/Web 2.0
[Chart: SSD-friendly workload profiles by segment - smart mobile devices (instant-on boot-ups; rugged, low power), commercial visualization (rendering of textures & polygons; very read intensive, small-block I/O), bioinformatics & diagnostics, decision support / business intelligence (data warehousing; random I/O, high OLTP), and entertainment VoD/YouTube (most-accessed videos; very read intensive) - with bandwidth/latency requirements ranging from about 1 GB/s up to about 10 GB/s at millisecond latencies.]
Data: IMEX Research & Panasas

Key Takeaways
- Optimize infrastructure to meet the needs of applications/SLAs. Solid state storage is creating a paradigm shift in the storage industry; leverage the opportunity to optimize your computing infrastructure with SSD adoption, after due diligence in the selection of vendors/products, industry testing and interoperability.
- Enterprise SSD market segments - PCIe vs. SAS/SATA: the 5-year cumulative market of $8.6B segments by revenue into 36% PCIe, 33% SAS, 24% SATA and 6% FC-based SSDs.
- Understand the drivers and challenges of SSDs for enterprise use: intelligent controllers are key to the adoption and success of SSDs, mitigating endurance, wear-out and life issues.
- Optimize transactions for query response time vs. number of users: improve query response time for a given number of users (IOPS), or serve more users (IOPS) at a given query response time.
- Select automated storage-tiering software.