How swift is your Swift? Ning Zhang, OpenStack Engineer at Zmanda; Chander Kant, CEO at Zmanda


Outline
- Build a cost-efficient Swift cluster with expected performance
  - Background & problem
  - Solution
  - Experiments
- When something goes wrong in a Swift cluster
  - Two types of failures: hard drive, entire node
  - What is the performance degradation before the failures are fixed?
  - How soon will the data be back (once all failed nodes are back on-line)?
  - Experiments

Zmanda
- Leader in open source backup and cloud backup
- We have seen strong interest in integrating our cloud backup products with OpenStack Swift
  - Backup to OpenStack Swift as an alternative to tape-based backups
- Swift installation and configuration services

Background
- Public storage cloud
  - Pros: pay-as-you-go, low upfront cost
  - Cons: expensive in the long run, unpredictable performance
- Private storage cloud (use case: back up data to a private cloud with Zmanda products)
  - Pros: low TCO in the long run, expected performance, in-house data
  - Cons: high upfront cost, long ramp-up period (prepare and tune HW & SW)
- Open problem / challenge: how to build private cloud storage with low upfront cost, expected performance, and a short ramp-up period

Background
- Swift is an open-source object store running on commodity HW
  - High scalability (linear scale-out as needed)
  - High availability (3 copies of data)
  - High durability
- Swift has heterogeneous types of nodes
  - Proxy: Swift's brain (coordinates requests, handles failures, ...)
  - Storage: Swift's warehouse (stores objects)

Problem
- How to provision the proxy and storage nodes in a Swift cluster for expected performance (SLA) while keeping the upfront cost low?
- Hardware: CPU, memory, network, I/O devices
- Software: filesystem, Swift configuration

Lessons learnt from the past
- Proxy nodes are CPU and network I/O intensive: use high-end CPUs and 10 GE networking
- Storage nodes are disk I/O intensive: use commodity CPUs, 1 GE networking, and the XFS filesystem
- Are these rules always true, especially for different workloads?
  - Does it always pay off to choose 10 GE (expensive!) for proxy nodes?
  - Is a commodity CPU always sufficient for storage nodes?
  - Are storage nodes always disk I/O intensive?
  - How much performance difference is there between XFS and other filesystems?

Solution (similar to a divide-and-conquer strategy)
- First, solve the problem on a small Swift cluster (e.g. 2 proxy nodes, 5-15 storage nodes):
  1: For each HW configuration for the proxy node
  2:   For each HW configuration for the storage node
  3:     For each number of storage nodes (5, 10, 15)
  4:       For each SW parameter setting
  5:         Build the small Swift cluster, measure its performance, calculate and save its performance/cost
  6: Recommend the small Swift clusters with the highest performance/cost
  (Exhaustive search? Pruning methods keep it manageable!)
- Then, scale out the recommended small Swift clusters to large Swift clusters until the SLA is met
  - Performance and cost also scale (as long as networking is not a bottleneck)
  - The ratio between proxy and storage nodes, as well as the HW & SW settings identified in the small clusters, still hold in the large clusters
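
A minimal Python sketch of this search; deploy_cluster(), run_benchmark() and hourly_cost() are hypothetical helpers standing in for the real EC2, Swift and benchmark tooling, and the pruning of dominated configurations is only hinted at:

    from itertools import product

    PROXY_HW = ["Cluster", "High-CPU"]
    STORAGE_HW = ["High-CPU", "Large"]
    STORAGE_COUNTS = [5, 10, 15]
    SW_SETTINGS = [{"fs": "xfs", "db_preallocation": True},
                   {"fs": "ext4", "db_preallocation": False}]

    def search_small_clusters(workload, top_n=3):
        results = []
        for proxy_hw, storage_hw, n_storage, sw in product(
                PROXY_HW, STORAGE_HW, STORAGE_COUNTS, SW_SETTINGS):
            # Pruning (not shown) would skip configurations already known to be
            # dominated, so the search is not fully exhaustive in practice.
            cluster = deploy_cluster(proxies=2, proxy_hw=proxy_hw,
                                     storages=n_storage, storage_hw=storage_hw,
                                     software=sw)
            throughput = run_benchmark(cluster, workload)   # op/s
            cost = hourly_cost(cluster)                     # $/hour
            results.append((throughput / cost, proxy_hw, storage_hw, n_storage, sw))
        # Recommend the small clusters with the highest performance per cost.
        results.sort(key=lambda r: r[0], reverse=True)
        return results[:top_n]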

Evaluation - Hardware
- Hardware configurations for proxy and storage nodes come from Amazon EC2 (diverse HW resources, no upfront cost, virtualized HW as a stand-in for physical HW)
- Two hardware choices for the proxy node:
  #1: Cluster Compute Extra Large Instance (EC2 Cluster)
  #2: High-CPU Extra Large Instance (EC2 High-CPU)
- Two hardware choices for the storage node:
  #1: High-CPU
  #2: Large Instance (EC2 Large)

                       Cluster                   High-CPU                Large
  CPU speed            33.5 EC2 Compute Units    20 EC2 Compute Units    4 EC2 Compute Units
  Memory               23 GB                     7 GB                    7.5 GB
  Network              10 GE                     1 GE                    1 GE
  Pricing (US East)    $1.30/h                   $0.66/h                 $0.32/h

Evaluation - Cost & Software
- Upfront cost: EC2 cost ($/hour). EC2 cost is not the same as physical HW cost, but it is a good indication of it.
- Software configuration
  - Filesystem: XFS (recommended by Rackspace) vs. Ext4 (a popular FS, but not previously evaluated for Swift)
  - Swift configuration files: db_preallocation (suggested to be set to True on HDDs to reduce fragmentation)
  - OS settings: disable TIME_WAIT, disable SYN cookies (to be discussed in a future blog post)

Evaluation - Workloads
- 2 sample workloads, both upload-dominated (PUT 90%, GET 5%, DEL 5%):
  - Small objects (object size 1 KB - 100 KB). Example: an online gaming hosting service, where game sessions are periodically saved as small files.
  - Large objects (object size 1 MB - 10 MB). Example: enterprise backup, where files are compressed into large chunks for backup; occasionally recovery and delete operations are needed.
- Object sizes are randomly and uniformly chosen within the pre-defined range
- Objects are continuously uploaded to the test Swift clusters
- COSBench, a cloud storage benchmark tool from Intel, drives the workloads; you are free to define your own workloads in COSBench!
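
To make the operation mix concrete, here is a minimal Python illustration of the two upload workloads; the real experiments used COSBench, and swift_client (with put/get/delete methods) is a hypothetical stand-in rather than a real client API:

    import random

    def run_upload_workload(swift_client, container, size_range, n_requests=1000):
        """PUT 90%, GET 5%, DEL 5%; object sizes uniform within size_range (bytes)."""
        uploaded = []
        for i in range(n_requests):
            op = random.choices(["PUT", "GET", "DEL"], weights=[90, 5, 5])[0]
            if op == "PUT" or not uploaded:
                size = random.randint(*size_range)     # uniformly random size
                name = f"obj-{i}"
                swift_client.put(container, name, b"x" * size)
                uploaded.append(name)
            elif op == "GET":
                swift_client.get(container, random.choice(uploaded))
            else:
                swift_client.delete(container, uploaded.pop())

    # Small-object workload: 1 KB - 100 KB; large-object workload: 1 MB - 10 MB.
    # run_upload_workload(client, "backup", (1_000, 100_000))
    # run_upload_workload(client, "backup", (1_000_000, 10_000_000))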

Evaluation - Upload small objects
Top-3 recommended hardware for a small Swift cluster (upload small objects):

       HW for proxy nodes          HW for storage nodes           Throughput/$
  1    2 proxy nodes (High-CPU)    5 storage nodes (High-CPU)     151
  2    2 proxy nodes (Cluster)     10 storage nodes (High-CPU)    135
  3    2 proxy nodes (Cluster)     5 storage nodes (High-CPU)     123

Storage nodes: all top configurations use High-CPU.
- CPU is used intensively to handle the large number of requests; CPU is the key resource.
- Compared to the Large Instance (4 EC2 Compute Units, $0.32/h), the High-CPU Instance offers 20 EC2 Compute Units at $0.66/h (5X the CPU for only 2X the price).

Proxy nodes:
- Traffic pattern: high throughput, low network bandwidth (e.g. 1250 op/s -> 61 MB/s), so the 1 GE of the High-CPU Instance is not a serious bottleneck for this workload.
- Compared to High-CPU, Cluster has 1.67X the CPU resources but is 2X as expensive.
- Besides, 5 High-CPU-based storage nodes can almost saturate 2 High-CPU-based proxy nodes.
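
The Throughput/$ column is throughput divided by the cluster's hourly EC2 cost. A minimal sketch of that arithmetic, using the prices quoted on the hardware slide; the ~700 op/s cluster throughput below is back-solved for illustration, not a number reported on the slides:

    # Performance-per-dollar metric used to rank configurations.
    PRICE = {"Cluster": 1.30, "High-CPU": 0.66, "Large": 0.32}   # $/hour, US East

    def perf_per_dollar(throughput_ops, proxy_hw, n_proxy, storage_hw, n_storage):
        hourly = n_proxy * PRICE[proxy_hw] + n_storage * PRICE[storage_hw]
        return throughput_ops / hourly

    # e.g. ~700 op/s (assumed) on 2 High-CPU proxies + 5 High-CPU storage nodes:
    # hourly cost = 2*0.66 + 5*0.66 = $4.62/h, so 700 / 4.62 ~= 151 op/s per $/hour,
    # matching row #1 above. At ~50 KB per object, 1250 op/s is only ~61 MB/s,
    # which is why 1 GE is enough on the proxy for this workload.
    print(perf_per_dollar(700, "High-CPU", 2, "High-CPU", 5))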

Evaluation - Upload large objects
Top-3 recommended hardware for a small Swift cluster (upload large objects):

       HW for proxy nodes          HW for storage nodes        Throughput/$
  1    2 proxy nodes (Cluster)     10 storage nodes (Large)    5.6
  2    2 proxy nodes (High-CPU)    5 storage nodes (Large)     4.9
  3    2 proxy nodes (Cluster)     5 storage nodes (Large)     4.7

Storage nodes: all top configurations use Large.
- More time is spent transferring objects to I/O devices; the write request rate is low, so CPU is not the key factor.
- Compared to the High-CPU Instance (20 EC2 Compute Units, $0.66/h), the Large Instance has 4 EC2 Compute Units (sufficient) at $0.32/h (2X cheaper).

Proxy nodes:
- Traffic pattern: low throughput, high network bandwidth, e.g. 32 op/s -> 160 MB/s incoming and ~500 MB/s outgoing (writes go out in triplicate!).
- 1 GE on High-CPU is under-provisioned; the 10 GE on Cluster pays off for this workload.
- 10 Large-based storage nodes are needed to keep up with the 2 proxy nodes (10 GE).
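
The bandwidth claim for the proxy follows from simple arithmetic; the 5 MB average object size below is an assumption (the midpoint of the 1-10 MB range):

    # Why 10 GE pays off on the proxy for large-object uploads: with 3-way
    # replication, every byte entering the proxy leaves it three times.
    avg_object_mb = 5            # assumed average object size (1-10 MB range)
    put_rate = 32                # op/s, the figure quoted on the slide
    incoming = put_rate * avg_object_mb      # 160 MB/s into the proxy
    outgoing = 3 * incoming                  # ~480 MB/s out to storage nodes
    # A 1 GE NIC tops out around ~125 MB/s, so a 1 GE proxy is the bottleneck,
    # while 10 GE (~1250 MB/s) leaves headroom in both directions.
    print(incoming, outgoing)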

Evaluation - Conclusion for HW
Take-away points for provisioning HW for a Swift cluster:

                               Upload Small Objects     Upload Large Objects
  Hardware for proxy node      1 GE, high-end CPU       10 GE, high-end CPU
  Hardware for storage node    1 GE, high-end CPU       1 GE, commodity CPU

Download workloads: see the backup slides.

Contrary to the lessons learnt from the past:
- It does NOT always pay off to choose 10 GE (expensive!) for proxy nodes.
- It is NOT always sufficient to use commodity CPUs for storage nodes.
- Upload is disk I/O intensive (3 copies of data are written), but download is NOT always disk I/O intensive (only one copy is retrieved).

Evaluation - Conclusion for SW
Take-away points for provisioning SW for a Swift cluster:

                          db_preallocation    XFS vs. Ext4
  Upload Small Objects    on                  XFS
  Upload Large Objects    on / off            XFS / Ext4

Uploading small objects is more sensitive to software settings:
- db_preallocation: there are intensive updates on the container DB; setting it to on gains 10-20% better performance.
- Filesystem: we observe XFS achieves 15-20% better performance than Ext4.

Evaluation - Scale out the small cluster
- Workload #1: upload small objects (the same workload used for exploring HW & SW configurations on the small Swift cluster)
- Based on the top-3 recommended small Swift clusters: 2 proxy (High-CPU) + 5 storage (High-CPU); 2 proxy (Cluster) + 10 storage (High-CPU); 2 proxy (Cluster) + 5 storage (High-CPU)
[Chart: upfront cost ($/hour) vs. response time (ms) of 80% of requests, for each recommended cluster scaled 1X (2 proxy, 5 storage), 1.5X, 2X (4 proxy, 10 storage) and 3X (6 proxy, 15 storage). The lowest-cost configuration meeting SLA 1 (80% of requests < 600 ms) is a 3X scale-out; the lowest-cost configuration meeting SLA 2 (80% of requests < 1000 ms) is a 2X scale-out.]
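
A minimal sketch of this scale-out step, assuming hypothetical helpers scale_cluster(), measure_p80_latency() and hourly_cost(), and assuming (as the slide does) that cost and throughput scale roughly linearly with the multiplier while the network is not a bottleneck:

    def cheapest_cluster_for_sla(recommended_small_clusters, workload, sla_p80_ms):
        candidates = []
        for small in recommended_small_clusters:        # the top-3 found earlier
            for factor in (1, 1.5, 2, 3):               # 1X, 1.5X, 2X, 3X
                cluster = scale_cluster(small, factor)  # e.g. 2->6 proxies, 5->15 storage
                p80 = measure_p80_latency(cluster, workload)   # ms, 80th percentile
                if p80 <= sla_p80_ms:
                    candidates.append((hourly_cost(cluster), cluster))
        # Cheapest cluster that still meets the SLA, or None if nothing qualifies.
        return min(candidates, key=lambda c: c[0]) if candidates else None

    # e.g. cheapest_cluster_for_sla(top3, upload_small_objects, sla_p80_ms=600)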

Outline
- Build a cost-efficient Swift cluster with expected performance
  - Background & problem
  - Solution
  - Experiments
- When something goes wrong in a Swift cluster
  - Two types of failures: hard drive, entire node
  - What is the performance degradation before the failures are fixed?
  - How soon will the data be back (once all failed nodes are back on-line)?
  - Experiments

Why Consider Failures
- Failure stats in Google's DCs (from Google fellow Jeff Dean's interview in 2008), for a cluster of 1,800 servers in its first year:
  - In total, about 1,000 servers failed and thousands of HDDs failed
  - 1 power distribution unit failed, bringing down 500-1,000 machines for 6 hours
  - 20 racks failed, each time causing 40-80 machines to vanish from the network
- Failures in Swift
  - Given a 5-zone setup, Swift can tolerate at most 2 failed zones (data will not be lost)
  - But performance will degrade to some extent until the failed zones are fixed
  - If Swift operators want to ensure a certain performance level, they need to benchmark the performance of their Swift clusters under failure upfront

How Complex Is It to Consider Failures?
(1) Possible failures at one node: disk, Swift process (rsync still working), entire node
(2) Which type of node failed: proxy, storage
(3) How many nodes failed at the same time
Combining the above three considerations, the total space of failure scenarios is huge, so it is practical to prioritize them, e.g. consider the worst or most common scenarios first.

Evaluation - Setup
- Focus on performance (not data availability): measure the performance degradation compared to the no-failure case, before the failed nodes come back on-line
- Workload: backup workload (uploading large objects is the major operation)
- Swift cluster: 2 proxy nodes (Cluster: Xeon CPU, 10 GE), 10 storage nodes
- Two common failure scenarios: (1) entire storage node failure, (2) HDD failure in a storage node
(1) Entire storage node failure
- 10%, 20%, 30% and 40% of storage nodes failed in the cluster (e.g. a partial power outage)
- Different HW resources are provisioned for the storage nodes:
  - EC2 Large (cost-efficient, high performance/cost)
  - EC2 High-CPU (costly, over-provisioned for CPU resources)

Evaluation - Setup
(2) HDD failure in a storage node (EC2 Large for storage nodes)
- Each storage node attaches 8 HDDs; we intentionally unmount some HDDs during execution, so the storage node itself is still accessible
- 10%, 20%, 30% and 40% of HDDs failed in the cluster
- Compare two failure distributions (see the sketch below):
  - Uniform HDD failure (failed HDDs uniformly distributed over all storage nodes)
  - Skewed HDD failure (some storage nodes get many more failed HDDs than others)
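
As a sketch of how the two failure distributions might be generated (the failure itself is simulated by unmounting the chosen disks, which is not shown here), both plans fail the same total number of disks but spread them very differently:

    def pick_failed_disks(nodes, disks_per_node, fail_fraction, skewed=False):
        """Return (node, disk) pairs to 'fail' for a uniform or skewed plan."""
        total_fail = int(nodes * disks_per_node * fail_fraction)
        if not skewed:
            # Uniform: spread the failures round-robin over all storage nodes.
            return [(i % nodes, i // nodes) for i in range(total_fail)]
        # Skewed: pack the failures onto as few nodes as possible.
        plan = []
        node = 0
        while len(plan) < total_fail:
            for disk in range(disks_per_node):
                plan.append((node, disk))
                if len(plan) == total_fail:
                    break
            node += 1
        return plan

    # 10 nodes x 8 HDDs, 40% failed: uniform -> 3-4 failed disks on every node;
    # skewed -> all 8 disks failed on 4 nodes (close to entire-node failure).
    print(pick_failed_disks(10, 8, 0.4), pick_failed_disks(10, 8, 0.4, skewed=True))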

Evaluation - Entire Node Failure
[Chart: cluster throughput (operations/second) under no failure and 10%, 20%, 30%, 40% node failure, for Large-based and High-CPU-based storage nodes.]
- Storage nodes based on the Large Instance: throughput decreases as more storage nodes fail
- Storage nodes based on the High-CPU Instance: throughput decreases only when 40% of the nodes fail

Evaluation - Entire Node Failure
[Charts: CPU usage (%) and network bandwidth (MB/s) in an unaffected storage node, under no failure and 10%-40% node failure, for Large-based and High-CPU-based storage nodes.]
- When storage nodes are based on the High-CPU Instance, the over-provisioned resources in the unaffected nodes get used more as the number of failures increases, so performance is initially kept from degrading
- When storage nodes are based on the Large Instance, CPU is almost saturated even with no failures

Evaluation - HDD Failure
[Charts: cluster throughput (operations/second) under no failure, uniform 10%-40% HDD failure, skewed 40% HDD failure, and 40% node failure; and usage (%) of the unaffected disks when HDDs fail uniformly.]
- When HDDs fail uniformly across all storage nodes, throughput does not decrease! Why? The I/O load is evenly redistributed over the unaffected HDDs.
- When some storage nodes have more failed HDDs than others (skewed), throughput decreases significantly, but is still better than with entire-node failure.
- Extreme case: when all HDDs on a storage node fail, it is almost equivalent to an entire-node failure.

Evaluation - Take-away points
- To maintain a certain performance level in the face of failures, it makes sense to over-provision HW resources to some extent: when a failure happens, the over-provisioned resources reduce the performance degradation
- Entire storage node failure vs. HDD failure
  - Entire-node failure is worse than HDD failure
  - When only HDDs fail, the performance degradation depends on the distribution: if the failed HDDs are uniformly spread across all storage nodes, degradation is smaller, because the I/O load can be rebalanced over the unaffected HDDs; otherwise (a skewed failure distribution), degradation may be larger
- What about proxy node failures, or proxy and storage nodes failing together? They will reduce performance; to be investigated in the future

When Failed Nodes Are Fixed
When all failed (affected) nodes have been fixed and re-join the Swift cluster:
(1) How long will recovery take on the affected nodes?
(2) What is the performance while recovery is under way?
We will show empirical results on our blog (http://www.zmanda.com/blogs/).
For (1), it depends on (a rough estimate is sketched below):
- How much data needs to be recovered
- Network latency between the unaffected and affected nodes
- HW resources (e.g. CPU) on the unaffected nodes (to look up which data needs to be restored)
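
As a very rough, first-order illustration of the first point (this simple model is our own assumption, not something stated on the slides): the re-replication time is roughly the amount of data to copy back divided by the bandwidth the replication actually gets once foreground requests and per-object lookups are accounted for.

    def estimated_recovery_hours(data_to_recover_gb, effective_bandwidth_mb_s):
        """effective_bandwidth_mb_s: MB/s left for replication after serving
        the normal request load on the unaffected nodes (an assumption)."""
        return data_to_recover_gb * 1024 / effective_bandwidth_mb_s / 3600

    # e.g. 2 TB to restore at an effective 50 MB/s -> roughly 11-12 hours
    print(estimated_recovery_hours(2048, 50))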

When Failed Nodes Are Fixed
When all failed nodes have been fixed and re-join the Swift cluster:
(1) How long will recovery take on the affected nodes?
(2) What is the performance while recovery is under way?
For (2), it depends on:
- HW resources on the unaffected nodes: the unaffected nodes become more resource-intensive because they still serve requests while also helping the affected nodes restore their data
- Performance gradually increases as the recovery progress approaches 100%

Thanks! Questions/Comments? http://www.zmanda.com/blogs/ swift@zmanda.com

Back-up Slides

Evaluation - Download small objects
Top-3 recommended hardware for a small Swift cluster (download small objects):

       HW for proxy nodes          HW for storage nodes        Throughput/$
  1    2 proxy nodes (High-CPU)    5 storage nodes (Large)     737
  2    2 proxy nodes (Cluster)     5 storage nodes (Large)     572
  3    2 proxy nodes (High-CPU)    10 storage nodes (Large)    513

Storage nodes: all top configurations use Large.
- Only one copy of the data is retrieved, so CPU and disk I/O are not busy; Large is sufficient for this workload and cheaper than High-CPU.

Proxy nodes:
- Traffic pattern: high throughput, low network bandwidth (e.g. 2400 op/s -> 117 MB/s); 1 GE on High-CPU is adequate.
- 5 Large-based storage nodes can almost saturate the 2 High-CPU-based proxy nodes.

Evaluation - Download large objects
Top-3 recommended hardware for a small Swift cluster (download large objects):

       HW for proxy nodes          HW for storage nodes        Throughput/$
  1    2 proxy nodes (Cluster)     5 storage nodes (Large)     16.8
  2    2 proxy nodes (Cluster)     10 storage nodes (Large)    14.5
  3    2 proxy nodes (High-CPU)    5 storage nodes (Large)     12.9

Storage nodes: all top configurations use Large.
- The request rate is low, so there is little load on the CPU; the Large Instance is sufficient for this workload and cheaper than High-CPU.

Proxy nodes:
- Traffic pattern: low throughput, high network bandwidth, e.g. 70 op/s -> 350 MB/s incoming and 350 MB/s outgoing.
- 1 GE on High-CPU is under-provisioned; the 10 GE on Cluster pays off for this workload.
- 5 Large-based storage nodes can nearly saturate the 2 Cluster-based proxy nodes.