Software Defined Storage Based on Direct Attached Storage Slava Kuznetsov Microsoft


Storage Spaces in Windows Server 2012 R2 with Shared SAS
[diagram: cluster nodes attached to shared SAS JBODs]

Design Goals for Windows Server 2016
- Lower cost
- Easy to maintain
- Better scale-out
- Support a wider range of vendors, devices, and technologies
- High IOPS, low latency, small CPU overhead

Storage Spaces Direct in Windows Server 2016 with Directly Attached Storage
[diagram: cluster nodes, each with its own directly attached storage]

Storage Bus Architecture
[diagram: two symmetric nodes, Node 1 and Node 2, each running the full stack]
- Application Layer: Application; Cluster Shared Volumes File System (CSVFS), e.g. C:\clusterstorage\volume1
- Access Layer: File System; Virtual Disks
- Virtualization Layer: SpacePort miniport exposing Virtual Disks
- Storage Bus Layer: ClusBflt target and ClusPort virtual HBA on each node, exchanging block traffic over SMB; exposes SBL disks
- Physical Layer: Physical devices (HDD or SSD) attached to each node

Storage Devices
Shared:
- SAS HDDs and SSDs with Persistent Reservations (PR) support
Storage Spaces Direct:
- SAS HDDs and SSDs; PRs not required
- Lower-cost SATA HDDs and SSDs
- NVMe SSDs for high performance and low latency
- Forward-looking: NVDIMM support

Connectivity
Shared:
- Requires SAS cabling and shared JBODs; may require SAS switches
Storage Spaces Direct:
- Uses the network; shared JBODs not required
- Utilizes SMB3 and SMB Direct; TCP or RDMA interconnect for low latency and low CPU usage
- Supports multi-rack storage; not constrained by SAS cabling distance limitations

Storage Bus Bandwidth Management
Shared:
- Not supported
Storage Spaces Direct:
- Predictable performance
- Better performance

Storage Bus Bandwidth Management: Predictable Performance
- IOs may have different priorities
- Application IOs have to coexist with system IOs
- Nodes should have equal access to storage devices
- SAS and SATA HDDs are optimized for different usage patterns

Storage Bus Bandwidth Management
- Each IO's cost is estimated as a sum: seek cost + setup cost + data transfer cost (see the sketch below)
- Supports fair priority scheduling; a higher-priority bucket leaves 5%..20% of bandwidth to the next-priority buckets
- Application and system IOs are throttled to a 50% / 50% split
- IOs from different nodes are throttled to ensure fair access from all nodes
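To make the cost model concrete, here is a minimal sketch in C of a seek + setup + transfer estimate, assuming byte offsets and a simple "sequential means no seek" rule. Every constant and name below is a hypothetical placeholder; the talk does not give the scheduler's actual parameters.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative cost model: cost = seek + setup + transfer.
 * All constants are hypothetical placeholders, chosen only to show
 * the shape of the calculation. Offsets are in bytes. */
#define SEEK_COST_US   8000U   /* average HDD seek time, microseconds */
#define SETUP_COST_US    50U   /* per-command setup overhead */
#define BYTES_PER_US    150U   /* ~150 MB/s sequential transfer rate */

typedef struct {
    uint64_t offset;   /* starting byte offset on the device */
    uint32_t length;   /* bytes to transfer */
} Io;

/* head_pos: byte offset where the previous IO finished. An IO that
 * starts exactly there is sequential and pays no seek cost. */
uint64_t io_cost_us(const Io *io, uint64_t head_pos)
{
    uint64_t seek = (io->offset == head_pos) ? 0 : SEEK_COST_US;
    uint64_t transfer = io->length / BYTES_PER_US;
    return seek + SETUP_COST_US + transfer;
}

int main(void)
{
    Io seq = { .offset = 65536, .length = 64 * 1024 };
    Io rnd = { .offset = 10 * 1024 * 1024, .length = 8 * 1024 };
    printf("sequential 64K: %llu us\n", (unsigned long long)io_cost_us(&seq, 65536));
    printf("random 8K:      %llu us\n", (unsigned long long)io_cost_us(&rnd, 65536));
    return 0;
}
```

Even with these toy constants the sequential IO comes out more than an order of magnitude cheaper, which is what lets a cost-based scheduler budget bandwidth fairly across priorities and nodes.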

Storage Bus Bandwidth Management: Better Performance
- HDDs perform roughly 100 times better with sequential IOs than with random 8K IOs
- Extra seeks consume energy and reduce disk lifetime
- Devices limit the maximum number of pending IOs they can handle: 32 for SATA, 64 for SAS

Storage Bus Bandwidth Management: De-Randomization
- Discovers sequential/sparse streams and orders IOs to minimize seeks and increase performance (see the sketch below)
- Holds other IOs until the sequential stream drains or the maximum debt for a priority/category/node is reached
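A minimal sketch of how stream discovery might work, assuming a small fixed table of tracked streams and an allowed gap for "sparse" streams; the table size, gap, and names are illustrative assumptions, not the actual ClusBflt logic, and the debt-based hold/release of competing IOs is omitted for brevity.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stream detector: tracks a few recent streams and
 * decides whether an incoming IO continues one of them. */
#define MAX_STREAMS 4
#define SPARSE_GAP  (256 * 1024)  /* small gaps still count as one stream */

typedef struct {
    uint64_t next_offset;  /* where the stream is expected to continue */
    bool     active;
} Stream;

static Stream streams[MAX_STREAMS];

/* Returns true if the IO extends a known stream (and should be
 * serviced ahead of unrelated, randomizing IOs). */
bool is_stream_io(uint64_t offset, uint32_t length)
{
    for (int i = 0; i < MAX_STREAMS; i++) {
        if (streams[i].active &&
            offset >= streams[i].next_offset &&
            offset - streams[i].next_offset <= SPARSE_GAP) {
            streams[i].next_offset = offset + length;  /* extend stream */
            return true;
        }
    }
    /* Not part of any stream: start tracking it in a free slot. */
    for (int i = 0; i < MAX_STREAMS; i++) {
        if (!streams[i].active) {
            streams[i].active = true;
            streams[i].next_offset = offset + length;
            break;
        }
    }
    return false;
}

int main(void)
{
    printf("%d\n", is_stream_io(0, 65536));               /* 0: new stream */
    printf("%d\n", is_stream_io(65536, 65536));           /* 1: continues it */
    printf("%d\n", is_stream_io(10u * 1048576u, 8192));   /* 0: random */
    return 0;
}
```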

Tiering and Caching
Shared:
- Supports storage tiering at the file system level
Storage Spaces Direct:
- Supports storage bus cache for hybrid storage: low cost per TB, high capacity
- Number of device or PCIe slots is limited
- CPU and network max out with 1..4 fast flash (NVMe) devices

Storage Bus Cache: Goals
- Increase performance of rotational storage by caching small reads and writes on SSDs or other devices with no seek penalty (e.g. NVMe)
- Increase durability of low-endurance flash devices
- Persistent read/write cache
- Keep storage cost low for archival (cold) data
- Adapt quickly to changing workloads at high granularity (8K pages) for best cache efficiency

Storage Bus Cache: Single Node
[diagram: high-endurance SSDs cache for HDDs (read and write cache) or for low-endurance SSDs (write cache only)]

Storage Bus Cache: IO Types
IOs are classified by size: 64KB or less go through the cache; larger IOs bypass it (dispatch sketched below).
- Small read, cache hit: read from SSD
- Small read, cache miss: read ahead from HDD, populate SSD
- Large read: read from HDD, read dirty pages from SSD
- Small write: write to SSD
- Large write: write to HDD, purge dirty data on SSD
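These classification rules map naturally onto a dispatch routine. The sketch below uses stand-in helper names (read_ssd, write_hdd_purge_ssd, and so on are hypothetical, not real driver APIs); only the 64KB threshold and the five cases come from the slide.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SMALL_IO_LIMIT (64 * 1024)   /* 64KB threshold from the slide */

typedef enum { IO_READ, IO_WRITE } IoKind;

/* Stand-in backend operations; a real implementation would queue
 * requests to the SSD and HDD. The cache lookup is stubbed out. */
static bool cache_hit(uint64_t off)    { (void)off; return false; }
static void read_ssd(void)             { puts("read from SSD"); }
static void read_ahead_populate(void)  { puts("read ahead from HDD, populate SSD"); }
static void read_hdd_merge_dirty(void) { puts("read HDD, read dirty pages from SSD"); }
static void write_ssd(void)            { puts("write to SSD"); }
static void write_hdd_purge_ssd(void)  { puts("write to HDD, purge dirty data on SSD"); }

void dispatch_io(IoKind kind, uint64_t offset, uint32_t length)
{
    bool small = (length <= SMALL_IO_LIMIT);          /* classify by size */
    if (kind == IO_READ) {
        if (!small)                  read_hdd_merge_dirty();  /* large read */
        else if (cache_hit(offset))  read_ssd();              /* small read, hit */
        else                         read_ahead_populate();   /* small read, miss */
    } else {
        if (small)                   write_ssd();             /* small write */
        else                         write_hdd_purge_ssd();   /* large write */
    }
}

int main(void)
{
    dispatch_io(IO_READ,  0, 8 * 1024);      /* small read (a miss in this stub) */
    dispatch_io(IO_WRITE, 0, 8 * 1024);      /* small write */
    dispatch_io(IO_WRITE, 0, 1024 * 1024);   /* large write */
    return 0;
}
```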

Storage Bus Cache: Destager
- Dirty pages are split into two main categories, Dirty Hot (50%) and Dirty Cold (50%); only Dirty Cold pages are considered for destage
- Picks the area on an HDD with the highest density of dirty pages
- Coalesces IOs when possible
- Orders IOs by LBA to achieve maximum throughput (see the sketch below)
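A minimal sketch of one destage pass under these rules, assuming cold dirty pages are identified by byte offset and density is scored over a fixed-size window; the window size and all names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define PAGE_SIZE   (8 * 1024)      /* 8K cache page from the slides */
#define REGION_SIZE (1024 * 1024)   /* 1MB density window (assumed) */

static int cmp_u64(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

/* pages: offsets of cold dirty pages. Pick the densest region,
 * coalesce adjacent pages, and emit LBA-ordered writes. */
void destage_pass(uint64_t *pages, size_t n)
{
    qsort(pages, n, sizeof *pages, cmp_u64);   /* order by offset/LBA */

    /* Sliding window: find the region with the most dirty pages. */
    size_t best_start = 0, best_count = 0;
    for (size_t i = 0, j = 0; i < n; i++) {
        while (pages[i] - pages[j] >= REGION_SIZE) j++;
        if (i - j + 1 > best_count) { best_count = i - j + 1; best_start = j; }
    }

    /* Coalesce adjacent pages in that region into single writes. */
    size_t i = best_start, end = best_start + best_count;
    while (i < end) {
        uint64_t start = pages[i];
        uint32_t len = PAGE_SIZE;
        while (i + 1 < end && pages[i + 1] == pages[i] + PAGE_SIZE) {
            len += PAGE_SIZE;   /* extend the run */
            i++;
        }
        printf("destage write: offset=%llu len=%u\n",
               (unsigned long long)start, len);
        i++;
    }
}

int main(void)
{
    uint64_t dirty[] = { 40960, 8192, 0, 16384, 5242880 };
    destage_pass(dirty, sizeof dirty / sizeof dirty[0]);
    return 0;
}
```

On this toy input the pass picks the dense cluster near offset 0, merges three adjacent 8K pages into one 24K write, and skips the lone far-away page for a later pass.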

Storage Bus Cache: Destager
Variable destage priority based on the percentage of dirty pages in the cache (sketched below):
- 0%..30% dirty: 5% of HDD bandwidth
- 30%..40% dirty: 15% of HDD bandwidth
- 40%..50% dirty: 50% of HDD bandwidth
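The thresholds and bandwidth shares in this mapping come straight from the slide; the function around them is a trivial sketch.

```c
#include <stdio.h>

/* Map the fraction of dirty cache pages to the share of HDD
 * bandwidth the destager is allowed to consume. */
int destage_bandwidth_pct(double dirty_pct)
{
    if (dirty_pct < 30.0) return 5;    /* mostly clean: stay out of the way */
    if (dirty_pct < 40.0) return 15;   /* filling up: destage harder */
    return 50;                         /* near the 50% cold limit: aggressive */
}

int main(void)
{
    for (double p = 0; p <= 50; p += 10)
        printf("%2.0f%% dirty -> %d%% of HDD bandwidth\n",
               p, destage_bandwidth_pct(p));
    return 0;
}
```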

Storage Bus Cache: Performance
- Read cache hit: same as SSD
- Read cache miss: HDD minus 0..5% *
- Small writes: SSD minus 15%
- Destager: 3x..25x more IOPS than random IO
- Destages mostly when HDDs are not busy, leaving more HDD bandwidth for read misses, large reads, and large writes
- Rich set of performance counters: Cluster Storage Cache Stores, Cluster Storage Hybrid Disks

Acknowledgments
- Cluster team
- SMB team
- Storage Spaces team
- File System team
- StorSimple team

Q & A
The Storage Spaces Direct feature is available in the Windows Server 2016 Technical Preview.
Have a question later? Send me e-mail: slavak@microsoft.com