Performance test: StorPool vs. Ceph
November 2014

© 2014 StorPool. All rights reserved.

Executive Summary

StorPool is a distributed storage system that runs on standard server hardware and uses minimal system resources to achieve outstanding performance. Even on a small system with 3 nodes and 12 hard drives, StorPool outperforms Ceph by a large margin. This difference in performance and efficiency translates into large savings in the number and type of servers needed to provide the high-performance block storage service that underpins clouds.

We ran both Ceph and StorPool on 3 nodes with 12 HDDs and 3 SSDs, connected over 10 Gigabit Ethernet. The test workload (fio) ran on a separate client node.

Performance test (all multipliers relative to the Ceph HDD baseline):

Test                         | Ceph HDD (baseline) | Ceph HDD + Journal | StorPool HDD | StorPool Hybrid
Sequential reads             | 908 MB/s            | 1.3x *             | 1.3x *       | 1.3x *
Sequential writes            | 204 MB/s            | 1.7x               | 3.0x         | 3.9x
Random reads, 4k block size  | 1,512 IOPS          | 1.1x **            | 1.2x **      | 71x
Random writes, 4k block size | 1,995 IOPS          | 1.4x               | 6.5x         | 7.8x

*  constrained by the 10GE interface
** constrained by HDD random IOPS

Server CPU usage (relative to the Ceph HDD baseline; "Nx better" means N times lower CPU usage):

Test                         | Ceph HDD (baseline) | Ceph HDD + Journal | StorPool HDD | StorPool Hybrid
Sequential reads             | 4.1 %               | 0.6x better        | 1.1x better  | 1.6x better
Sequential writes            | 7.2 %               | 0.6x better        | 1.5x better  | 1.6x better
Random reads, 4k block size  | 2.2 %               | 1.0x better        | 1.9x better  | 0.3x better *
Random writes, 4k block size | 25 %                | 0.6x better        | 6.4x better  | 6.4x better **

Note: percentages are the average across all 24 CPU threads (3 storage servers x 8 threads). Because of hyperthreading, 40% CPU usage (as on Ceph HDD + Journal) effectively means near full load.

*  4x higher CPU usage, while delivering 71x higher IOPS
** 6.4x lower CPU usage, while delivering 7.8x higher IOPS
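To put the multipliers in perspective, they are all relative to the Ceph HDD baseline column; a rough back-of-the-envelope conversion (a sketch using only the baseline figures above) gives the approximate absolute numbers:

    # Multipliers are relative to the Ceph HDD baseline column above.
    # StorPool Hybrid random 4k reads at 71x the 1,512 IOPS baseline:
    echo $(( 1512 * 71 ))       # ~107,000 IOPS
    # StorPool Hybrid sequential writes at 3.9x the 204 MB/s baseline:
    echo $(( 204 * 39 / 10 ))   # ~795 MB/s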

Introduction

StorPool is distributed storage software. It pools the attached storage (hard disks or SSDs) of commodity servers to create a single pool of shared storage. The StorPool software is installed on each server in the cluster and combines the capacity and performance of all drives attached to the servers into one global namespace.

This document presents results from performance tests run in StorPool's test lab. No test exactly replicates actual workloads, so these results should be taken only as an indication of the performance to expect from StorPool. Customers are advised to perform their own tests.

Server configuration

Name | CPU                             | RAM   | RAID/HBA        | Drives
s11  | Xeon E5-1620V2, 4 cores, 3.7GHz | 32 GB | Intel C600 AHCI | Client
s12  | Xeon E5-1620V2, 4 cores, 3.7GHz | 32 GB | Intel C600 AHCI | 1x SSD, 4x HDD
s13  | Xeon E5-1620V2, 4 cores, 3.7GHz | 32 GB | Intel C600 AHCI | 1x SSD, 4x HDD
s14  | Xeon E5-1620V2, 4 cores, 3.7GHz | 32 GB | Intel C600 AHCI | 1x SSD, 4x HDD

Hard drive model: HGST Deskstar 500GB (HDS721050CLA360)
SSD model: Intel DC S3500 240GB (SSDSC2BB240G4)
Total: 1 client, 3 storage servers, 3 SSDs, 12 HDDs

Network configuration

NICs: 2x Mellanox MCX312A-XCBT, 1x Solarflare SFN5162F (sfc), 1x Intel 82599ES (ixgbe)
Single 10GE link per server
Switch: Dell S8024F 24-port 10GE SFP+ switch
9000-byte MTU (jumbo frames)
Flow control enabled

Software configuration and testing methodology

The tests are performed on 3 storage servers and 1 client, all running the CentOS 6.5 operating system.
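For reference only, a minimal sketch of how the per-port settings above (9000-byte MTU and Ethernet flow control) might be applied on a Linux host; the interface name eth2 is a placeholder and these exact commands are an assumption, not part of the documented setup:

    # Hypothetical example; eth2 stands in for the actual 10GE interface name.
    ip link set dev eth2 mtu 9000    # jumbo frames, 9000-byte MTU
    ethtool -A eth2 rx on tx on      # enable Ethernet flow control (pause frames)
    ethtool -a eth2                  # verify the resulting pause settings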

Each test run consists of:
1. configuring and starting a StorPool or Ceph cluster
2. creating one 200GB volume
3. filling the volume with incompressible data
4. performing all test cases by running fio on the client

The following parameters are common to all tests:

Operating system: CentOS 6.5
Performance-testing software: fio-2.0.13 and StorPool test runner scripts
Number of servers used for storage: 3 nodes
Number of servers used as clients: 1 client
Number of volumes: 1
Volume size: 200GB
Replication level (number of copies): 2
Drives used for the tests: 12 HDDs, or 12 HDDs + 3 SSDs (depending on the scenario)
Kernel version: Linux 3.12.29 (StorPool build)

StorPool-specific parameters

Distributed storage software version: StorPool 14.08.282
Integrity provided by the system: end-to-end data integrity; protects data throughout its lifetime
Object size: 32MB
Stripe size: 1MB
Caching, buffering, data consistency: read caching with a 4GB cache per storage node; write-through caching shared with the read cache; no write-back cache

Ceph-specific parameters

Distributed storage software version: Ceph 0.80.7-0.el6
Integrity provided by the system: none
Object size: 4MB
Stripe size: 4MB
Caching, buffering, data consistency: read caching in the Linux buffer cache; no write-back cache; journal on SSD or on HDD

Test cases

Test name    | Read/write                          | Block size | Queue depth       | Duration
IOPS tests
randwrite    | Random Writes                       | 4 KB       | 4, 16, 64, 256    | 1m
randrw       | Random Reads + Random Writes, 50/50 | 4 KB       | 4, 16, 64, 256    | 1m
randread     | Random Reads                        | 4 KB       | 4, 16, 64, 256    | 1m
Sequential tests
seqwrite     | Sequential Writes                   | 1 MB       | 1, 4, 16, 64, 256 | 1m
seqread      | Sequential Reads                    | 1 MB       | 1, 4, 16, 64, 256 | 1m
Latency tests
writelatency | Random Writes                       | 4 KB       | 1                 | 1m
readlatency  | Random Reads                        | 4 KB       | 1                 | 1m

All tests use fio with libaio, direct, sync, norandommap and randrepeat=0, with 10-minute pauses between tests; a sample invocation is sketched below.
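As an illustration of the fio settings listed above (not the actual StorPool test runner script), a single test case could be invoked roughly as follows; /dev/sdX is a placeholder for the block device backing the 200GB test volume:

    # Hypothetical invocation; /dev/sdX stands in for the tested volume.
    # randwrite case: random 4 KB writes, queue depth 64, 1 minute.
    fio --name=randwrite --filename=/dev/sdX \
        --ioengine=libaio --direct=1 --sync=1 \
        --rw=randwrite --bs=4k --iodepth=64 \
        --norandommap --randrepeat=0 \
        --time_based --runtime=60

The other test cases would differ only in --rw, --bs and --iodepth (the 50/50 mixed case corresponding to --rw=randrw with --rwmixread=50).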

Test runs

We tested the following scenarios:

Ceph HDD            | 12 HDDs
Ceph HDD + Journal  | 12 HDDs + 3 SSDs
StorPool HDD        | 12 HDDs
StorPool Hybrid     | 12 HDDs + 3 SSDs

For each scenario we ran all tests in sequence, after filling the 200GB volume with incompressible data.

Test results

Random ops (IOPS)

Numbers in reddish highlight denote workloads limited by the random IO performance of the hard disks.

Sequential (MB/s)

Numbers in reddish highlight denote workloads limited by the network bandwidth.

Latency (ms)

StorPool write latency is governed by the latency of the underlying hard drives. Write latency can be mitigated by placing a safe, battery-backed write-back cache in a RAID controller under StorPool.

Conclusion

StorPool delivers exceptional performance while providing end-to-end data integrity and shared storage capabilities, and it maintains high performance even with many competing workloads.

Even with an SSD journal, Ceph merely matches StorPool's performance on hard disks alone. When StorPool also uses the SSDs, it has a staggering performance advantage, peaking at 71 times (!) the performance of Ceph (for random reads with a 4k block size).

Furthermore, Ceph's CPU usage for random writes is very high, making it unsuitable for converged architectures that run storage and compute on the same servers. StorPool can run either on the compute nodes or on standalone storage nodes; we recommend running it on the compute nodes, which lowers TCO and provides a single building block for the datacenter.

Contacts

If you would like to learn more or test StorPool, contact us:
info@storpool.com
www.storpool.com
@storpool