How swift is your Swift?
Ning Zhang, OpenStack Engineer at Zmanda
Chander Kant, CEO at Zmanda
2 Outline
- Build a cost-efficient Swift cluster with expected performance
  - Background & problem
  - Solution
  - Experiments
- When something goes wrong in a Swift cluster
  - Two types of failures: hard drive, entire node
  - What is the performance degradation before the failures are fixed?
  - How soon will the data be back (when all failed nodes are back online)?
  - Experiments
3 Zmanda
- Leader in open-source backup and cloud backup
- We got strong interest in integrating our cloud backup products with OpenStack Swift
  - Backup to OpenStack Swift as an alternative to tape-based backups
- Swift installation and configuration services
4 Background
- Public storage cloud
  - Pros: pay-as-you-go, low upfront cost
  - Cons: expensive in the long run, unclear performance
- Private storage cloud (use case: backing up data to a private cloud with Zmanda products)
  - Pros: low TCO in the long run, expected performance, in-house data
  - Cons: high upfront cost, long ramp-up period (preparing and tuning HW & SW)
- Open problem / challenge: how to build private cloud storage with low upfront cost, expected performance, and a short ramp-up period
5 Background
- Swift is an open-source object store running on commodity HW
  - High scalability (linear scale-out as needed)
  - High availability (3 copies of data)
  - High durability
- Swift has heterogeneous types of nodes
  - Proxy: Swift's brain (coordinates requests, handles failures)
  - Storage: Swift's warehouse (stores objects)
6 Problem
How to provision the proxy and storage nodes in a Swift cluster for expected performance (SLA) while keeping upfront cost low?
- Hardware: CPU, memory, network, I/O devices
- Software: filesystem, Swift configuration
7 Lessons learnt from the past
- Proxy nodes are CPU- and network-I/O intensive: use high-end CPUs and 10 GE networking
- Storage nodes are disk-I/O intensive: use commodity CPUs, 1 GE networking, and the XFS filesystem
But are these rules always true, especially for different workloads?
- Does it always pay off to choose 10 GE (expensive!) for proxy nodes?
- Is a commodity CPU always sufficient for storage nodes?
- Are storage nodes always disk-I/O intensive?
- How much performance difference is there between XFS and other filesystems?
8 Solution
Our solution resembles a divide-and-conquer strategy. First, solve the problem in a small Swift cluster (e.g. 2 proxy nodes, 5-15 storage nodes):
1: for each HW configuration for the proxy node
2:   for each HW configuration for the storage node
3:     for each number of storage nodes in {5, 10, 15}
4:       for each SW parameter setting
5:         build the small Swift cluster, measure its performance, and calculate and save its performance/cost
6: recommend the small Swift clusters with the highest performance/cost
This looks like an exhaustive search, but pruning methods make it simple. Then, scale out the recommended small Swift clusters to large Swift clusters until the SLA is met:
- Performance and cost also scale (when networking is not a bottleneck)
- The ratio between proxy and storage nodes, as well as the HW & SW settings identified in the small clusters, still holds in large clusters
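The nested search above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' tool: the instance names and hourly prices come from the hardware slide, while the `measure` callback and the sample throughput numbers are hypothetical stand-ins for a real benchmark run (e.g. with COSBench).

```python
from itertools import product

# EC2 instance options from the slides: name -> $/hour
PROXY_HW = {"cluster": 1.30, "high-cpu": 0.66}
STORAGE_HW = {"high-cpu": 0.66, "large": 0.32}
STORAGE_COUNTS = [5, 10, 15]

def cluster_cost(proxy_hw, storage_hw, n_storage, n_proxy=2):
    """Upfront cost ($/hour) of a small test cluster."""
    return n_proxy * PROXY_HW[proxy_hw] + n_storage * STORAGE_HW[storage_hw]

def search(measure, top_k=3):
    """Try every small-cluster configuration, score each by
    throughput per dollar, and return the best top_k.
    `measure` stands in for a real benchmark run."""
    results = []
    for proxy_hw, storage_hw, n in product(PROXY_HW, STORAGE_HW, STORAGE_COUNTS):
        throughput = measure(proxy_hw, storage_hw, n)  # ops/s from a benchmark
        cost = cluster_cost(proxy_hw, storage_hw, n)
        results.append((throughput / cost, proxy_hw, storage_hw, n))
    return sorted(results, reverse=True)[:top_k]

# Hypothetical throughput measurements, for illustration only:
fake = {("high-cpu", "high-cpu", 5): 570, ("cluster", "high-cpu", 10): 900}
best = search(lambda p, s, n: fake.get((p, s, n), 100))
```

Scoring every configuration by throughput per dollar and keeping the top few is the "recommend small clusters with high performance/cost" step; pruning would simply skip branches whose best possible score is already below the current top-k.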
9 Evaluation - Hardware
Hardware configurations for the proxy and storage nodes come from Amazon EC2 (diverse HW resources, no upfront cost; results on virtualized HW carry over to physical HW).
Two hardware choices for the proxy node: #1 Cluster Compute Extra Large Instance (EC2 Cluster); #2 High-CPU Extra Large Instance (EC2 High-CPU).
Two hardware choices for the storage node: #1 High-CPU; #2 Large Instance (EC2 Large).

                    Cluster                  High-CPU               Large
CPU speed           33.5 EC2 Compute Units   20 EC2 Compute Units   4 EC2 Compute Units
Memory              23 GB                    7 GB                   7.5 GB
Network             10 GE                    1 GE                   1 GE
Pricing (US East)   $1.30/h                  $0.66/h                $0.32/h
10 Evaluation - Cost & software
- Upfront cost: EC2 cost ($/hour). EC2 cost is not the same as physical HW cost, but it is a good indicator of it.
- Software configuration
  - Filesystem: XFS (recommended by Rackspace) vs. Ext4 (a popular FS, but not previously evaluated for Swift)
  - Swift configuration files: db_preallocation (suggested to set to on for HDDs to reduce fragmentation)
  - OS settings: disable TIME_WAIT, disable SYN cookies (we will discuss these in a future blog post)
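As a concrete example, `db_preallocation` is a setting in Swift's account- and container-server configuration files; a minimal fragment might look like the following (the file path and surrounding section are typical defaults, shown here only for illustration):

```ini
# /etc/swift/container-server.conf (fragment, illustrative)
[DEFAULT]
# Preallocate disk space for the container SQLite DBs.
# On HDDs this reduces fragmentation; the talk reports a
# 10-20% gain for small-object uploads with it enabled.
db_preallocation = on
```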
11 Evaluation - Workloads
Two sample workloads:
- Small objects (object size 1 KB - 100 KB). Example: an online gaming hosting service where game sessions are periodically saved as small files.
- Large objects (object size 1 MB - 10 MB). Example: enterprise backup, where files are compressed into large chunks for backup; occasionally, recovery and delete operations are needed.
Both are upload workloads (PUT 90%, GET 5%, DELETE 5%). Object sizes are randomly and uniformly chosen within the pre-defined range, and objects are continuously uploaded to the test Swift clusters.
Benchmark tool: COSBench, a cloud storage benchmark tool from Intel. You are free to define your own workloads in COSBench!
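The operation mix is easy to make concrete with a small generator. A minimal sketch (not COSBench itself; the function name and the way the 90/5/5 split is encoded are ours, matching the slide's upload mix):

```python
import random

def make_upload_workload(n_ops, size_range, seed=0):
    """Generate an upload-heavy operation mix as in the talk:
    90% PUT, 5% GET, 5% DELETE; PUT object sizes are drawn
    uniformly from the given (min, max) byte range."""
    rng = random.Random(seed)
    ops = []
    for _ in range(n_ops):
        r = rng.random()
        op = "PUT" if r < 0.90 else ("GET" if r < 0.95 else "DELETE")
        size = rng.randint(*size_range) if op == "PUT" else None
        ops.append((op, size))
    return ops

# Small-object workload: 1 KB - 100 KB
small = make_upload_workload(10_000, (1_024, 100 * 1_024))
puts = sum(1 for op, _ in small if op == "PUT")
```

A COSBench workload XML would express the same mix declaratively; this sketch just makes the distributions concrete.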
12 Evaluation - Upload small objects
Top-3 recommended hardware for a small Swift cluster (upload small objects):
#1: 2 proxy nodes (High-CPU), 5 storage nodes (High-CPU)
#2: 2 proxy nodes (Cluster), 10 storage nodes (High-CPU)
#3: 2 proxy nodes (Cluster), 5 storage nodes (High-CPU) (throughput/$: 123)
Storage nodes: all top configurations use High-CPU. CPU is used intensively to handle the large number of requests, so CPU is the key resource. Compared to the Large Instance (4 EC2 Compute Units, $0.32/h), the High-CPU Instance offers 20 EC2 Compute Units at $0.66/h: 5X the CPU resources for only 2X the price.
Proxy nodes: the traffic pattern is high throughput but low network bandwidth (the measured op/s translate to only 61 MB/s), so the 1 GE of the High-CPU Instance is not a serious bottleneck for this workload. Compared to High-CPU, Cluster has 1.67X the CPU resources but costs 2X as much. Besides, 5 High-CPU-based storage nodes can almost saturate 2 High-CPU-based proxy nodes.
13 Evaluation - Upload large objects
Top-3 recommended hardware for a small Swift cluster (upload large objects):
#1: 2 proxy nodes (Cluster), 10 storage nodes (Large)
#2: 2 proxy nodes (High-CPU), 5 storage nodes (Large)
#3: 2 proxy nodes (Cluster), 5 storage nodes (Large) (throughput/$: 4.7)
Storage nodes: all top configurations use Large. More time is spent transferring objects to the I/O devices; the write request rate is low, so CPU is not the key factor. Compared to the High-CPU Instance (20 EC2 Compute Units, $0.66/h), the Large Instance has 4 EC2 Compute Units (sufficient here) at $0.32/h (2X cheaper).
Proxy nodes: the traffic pattern is low throughput but high network bandwidth, e.g. 32 op/s translate to 160 MB/s of incoming traffic and ~500 MB/s of outgoing traffic (objects are written in triplicate!). The 1 GE of High-CPU is under-provisioned; the 10 GE of Cluster pays off for this workload. 10 Large-based storage nodes are needed to keep up with the 2 proxy nodes (10 GE).
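The proxy-bandwidth figures are easy to sanity-check. Assuming an average object of ~5 MB (roughly the midpoint of the 1-10 MB uniform range; our assumption) and Swift's 3-way replication:

```python
def proxy_bandwidth(ops_per_sec, avg_object_mb, replicas=3):
    """Incoming client traffic vs. outgoing traffic to storage
    nodes for uploads: Swift writes each object in triplicate,
    so outgoing bandwidth is ~3x incoming."""
    incoming = ops_per_sec * avg_object_mb  # MB/s from clients
    outgoing = incoming * replicas          # MB/s to storage nodes
    return incoming, outgoing

# The talk's large-object example: 32 op/s at ~5 MB average
inc, out = proxy_bandwidth(32, 5)  # 160 MB/s in, 480 MB/s out
```

480 MB/s is consistent with the slide's "~500 MB/s" of outgoing traffic, and even the 160 MB/s of incoming traffic already exceeds what 1 GE (~125 MB/s) can carry, which is why 10 GE pays off for this workload.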
14 Evaluation - Conclusions for HW
Take-away points for provisioning HW for a Swift cluster:

                            Upload small objects   Upload large objects
Hardware for proxy node     1 GE, high-end CPU     10 GE, high-end CPU
Hardware for storage node   1 GE, high-end CPU     1 GE, commodity CPU

(Download workloads: see the backup slides.)
Contrary to the lessons learnt from the past:
- It does NOT always pay off to choose 10 GE (expensive!) for proxy nodes.
- It is NOT always sufficient to use commodity CPU for storage nodes.
- Upload is disk-I/O intensive (3 copies of data), but download is NOT always disk-I/O intensive (only one copy of the data is retrieved).
15 Evaluation - Conclusions for SW
Take-away points for provisioning SW for a Swift cluster:

                        db_preallocation   XFS vs. Ext4
Upload small objects    on                 XFS
Upload large objects    on / off           XFS / Ext4

Uploading small objects is more sensitive to software settings:
- db_preallocation: the container DB receives intensive updates; setting this option to on gains 10-20% better performance.
- Filesystem: we observe that XFS achieves 15-20% better performance than Ext4.
16 Evaluation - Scale out the small cluster
Workload #1: upload small objects (the same workload used to explore HW & SW configurations for the small Swift cluster). We scale out the top-3 recommended small Swift clusters:
- 2 proxy nodes (High-CPU) + 5 storage nodes (High-CPU)
- 2 proxy nodes (Cluster) + 10 storage nodes (High-CPU)
- 2 proxy nodes (Cluster) + 5 storage nodes (High-CPU)
Scale factors: 1X = 2 proxy + 5 storage; 1.5X; 2X = 4 proxy + 10 storage; 3X = 6 proxy + 15 storage. Plotting upfront cost ($/hour) against the response time of 80% of requests:
- SLA 1 (80% of requests under 600 ms): a 3X cluster (6 proxy + 15 storage) meets it at the lowest cost.
- SLA 2 (80% of requests under 1000 ms): a 2X cluster (4 proxy + 10 storage) meets it at the lowest cost.
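The scale-out arithmetic is straightforward. This sketch assumes, as the talk argues, that throughput and cost both scale linearly with the factor k while the network is not a bottleneck (the prices are the EC2 rates from the hardware slide; the function is ours):

```python
def scale_out(base_proxies, base_storage, proxy_cost, storage_cost, k):
    """Scale a recommended small cluster by an integer factor k.
    Returns (proxies, storage nodes, upfront cost in $/hour)."""
    proxies, storage = k * base_proxies, k * base_storage
    cost = proxies * proxy_cost + storage * storage_cost
    return proxies, storage, cost

# 1X cluster: 2 proxy + 5 storage, all EC2 High-CPU ($0.66/h)
one_x = scale_out(2, 5, 0.66, 0.66, 1)    # 2 proxy, 5 storage, $4.62/h
three_x = scale_out(2, 5, 0.66, 0.66, 3)  # 6 proxy, 15 storage, $13.86/h
```

Because the proxy-to-storage ratio is fixed by the small-cluster search, picking a deployment is then just choosing the smallest k whose (linearly scaled) performance meets the SLA.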
17 Outline
- Build a cost-efficient Swift cluster with expected performance
  - Background & problem
  - Solution
  - Experiments
- When something goes wrong in a Swift cluster
  - Two types of failures: hard drive, entire node
  - What is the performance degradation before the failures are fixed?
  - How soon will the data be back (when all failed nodes are back online)?
  - Experiments
18 Why consider failures
Failure statistics in Google's data centers (from Google Fellow Jeff Dean's interview in 2008), for a cluster of 1,800 servers in its first year:
- In total, 1,000 servers failed and thousands of HDDs failed
- 1 power distribution unit failed, bringing down 500-1,000 machines for 6 hours
- 20 racks failed, each time causing machines to vanish from the network
Failures in Swift:
- Given a 5-zone setup, Swift can tolerate at most 2 failed zones without losing data.
- But performance will degrade to some extent before the failed zones are fixed.
- If Swift operators want to ensure a certain performance level, they need to benchmark the performance of their Swift clusters under failure upfront.
19 How complex is it to consider failures?
(1) Possible failures at one node: disk; Swift process (rsync still working); entire node
(2) Which type of node failed: proxy; storage
(3) How many nodes failed at the same time
Combining these three dimensions, the total space of failure scenarios is huge, so it is practical to prioritize them, e.g. considering the worst or most common scenarios first.
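Even a coarse model of this space shows how quickly it grows. A sketch enumerating the three dimensions above (the category names are ours, taken from the slide's lists):

```python
from itertools import product

# Dimensions of the failure space described in the talk
FAILURE_TYPES = ["disk", "swift-process", "entire-node"]
NODE_TYPES = ["proxy", "storage"]

def failure_scenarios(max_failed_nodes):
    """Enumerate (failure type, node type, # failed nodes) triples.
    Even this coarse model grows multiplicatively, which is why
    the talk prioritizes the worst and most common cases first."""
    return list(product(FAILURE_TYPES, NODE_TYPES,
                        range(1, max_failed_nodes + 1)))

scenarios = failure_scenarios(4)  # e.g. up to 40% of a 10-node cluster
```

With just these three axes there are already 3 x 2 x 4 = 24 scenarios, before even considering combinations of simultaneous failure types.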
20 Evaluation - Setup
Focus on performance (not data availability): measure the performance degradation, compared to the no-failure case, before the failed nodes come back online.
- Workload: backup workload (uploading large objects is the major operation)
- Swift cluster: 2 proxy nodes (Cluster: Xeon CPU, 10 GE), 10 storage nodes
Two common failure scenarios: (1) entire-storage-node failure, (2) HDD failure in a storage node.
(1) Entire-storage-node failure:
- 10%, 20%, 30%, and 40% of the storage nodes in a cluster fail (e.g. a partial power outage)
- Different HW resources are provisioned for the storage nodes:
  - EC2 Large (cost-efficient, high performance/cost)
  - EC2 High-CPU (costly, over-provisioned CPU resources)
21 Evaluation - Setup
(2) HDD failure in a storage node (EC2 Large storage nodes):
- Each storage node attaches 8 HDDs; we intentionally unmount some HDDs during execution, so the storage node itself remains accessible
- 10%, 20%, 30%, and 40% of the HDDs in the cluster fail
- We compare two failure distributions:
  - Uniform HDD failure: failed HDDs are uniformly distributed over all storage nodes
  - Skewed HDD failure: some storage nodes have many more failed HDDs than others
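The two distributions can be modeled directly. This sketch (our own simplification: round-robin for uniform, fill-one-node-at-a-time for skewed) shows why 40% uniform HDD failure leaves every node with most of its disks, while the skewed case effectively takes whole nodes out:

```python
def fail_hdds(n_nodes, hdds_per_node, n_failed, skewed=False):
    """Return the number of failed HDDs per storage node under a
    uniform (round-robin) or skewed (concentrated) distribution."""
    failed = [0] * n_nodes
    for i in range(n_failed):
        if skewed:
            failed[i // hdds_per_node] += 1  # fill one node, then the next
        else:
            failed[i % n_nodes] += 1         # spread evenly over all nodes
    return failed

# 10 storage nodes x 8 HDDs; 40% of HDDs (32 of 80) fail
uniform = fail_hdds(10, 8, 32)               # every node keeps 4-5 disks
skewed = fail_hdds(10, 8, 32, skewed=True)   # 4 nodes lose all 8 disks
```

In the uniform case each node still has at least half its disks, so the I/O load rebalances over the survivors; in the skewed case four nodes lose every disk, which approaches the entire-node-failure scenario.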
22 Evaluation - Entire node failure
Cluster throughput (operations/second) with no failure and with 10%, 20%, 30%, and 40% of the storage nodes failed, for Large-based and High-CPU-based storage nodes:
- Storage nodes based on the Large Instance: throughput decreases as more storage nodes fail.
- Storage nodes based on the High-CPU Instance: throughput decreases only when 40% of the nodes fail.
23 Evaluation - Entire node failure
CPU usage and network bandwidth (MB/s) in an unaffected storage node, with no failure and with 10%-40% of nodes failed:
- When storage nodes are based on the High-CPU Instance, the over-provisioned resources in the unaffected nodes are used more heavily as the number of failures increases, so the cluster can keep performance from degrading initially.
- When storage nodes are based on the Large Instance, the CPU is almost saturated even when no failure happens.
24 Evaluation - HDD failure
Cluster throughput (operations/second) with no failure; with uniform 10%, 20%, 30%, and 40% HDD failures; with a skewed 40% HDD failure; and with 40% node failure:
- When HDDs fail uniformly across all storage nodes, throughput does not decrease. Why? The I/O load is evenly redistributed over the unaffected HDDs, whose usage rises accordingly.
- When some storage nodes have more failed HDDs than others (skewed), throughput decreases significantly, though it is still better than entire-node failure.
- Extreme case: when all HDDs on a storage node fail, it is almost equivalent to an entire-node failure.
25 Evaluation - Take-away points
To maintain a certain performance level in the face of failures, it makes sense to over-provision HW resources to some extent: when failures happen, the over-provisioned resources reduce the performance degradation.
Entire-storage-node failure vs. HDD failure:
- Entire-node failure is worse than HDD failure.
- When only HDDs fail, the performance degradation depends on the distribution: if the failed HDDs are uniformly distributed across all storage nodes, degradation is smaller, because the I/O load can be rebalanced over the unaffected HDDs; otherwise (skewed failure distribution), degradation may be larger.
What about proxy node failures, or proxy and storage nodes failing together? They reduce performance; we need to figure this out in future work.
26 When failed nodes are fixed
When all failed (affected) nodes have been fixed and re-join the Swift cluster:
(1) How long will recovery take on the affected nodes?
(2) What is the performance while recovery is under way?
We will show empirical results in our blog. For (1), it depends on:
- How much data needs to be recovered
- Network latency between unaffected and affected nodes
- HW resources (e.g. CPU) in the unaffected nodes (to look up which data needs to be restored)
27 When failed nodes are fixed (continued)
For (2), it depends on the HW resources in the unaffected nodes. The unaffected nodes come under more resource pressure because they still serve requests while also helping the affected nodes restore their data. Performance gradually increases as recovery progresses toward 100%.
28 Thanks! Questions/Comments?
29 Back-up Slides
30 Evaluation - Download small objects
Top-3 recommended hardware for a small Swift cluster (download small objects):
#1: 2 proxy nodes (High-CPU), 5 storage nodes (Large)
#2: 2 proxy nodes (Cluster), 5 storage nodes (Large)
#3: 2 proxy nodes (High-CPU), 10 storage nodes (Large) (throughput/$: 513)
Storage nodes: all top configurations use Large. Only one copy of the data is retrieved, so CPU and disk I/O are not busy; Large is sufficient for this workload and cheaper than High-CPU.
Proxy nodes: the traffic pattern is high throughput but low network bandwidth (the measured op/s translate to 117 MB/s), so the 1 GE of High-CPU is adequate. 5 Large-based storage nodes can almost saturate the 2 High-CPU-based proxy nodes.
31 Evaluation - Download large objects
Top-3 recommended hardware for a small Swift cluster (download large objects):
#1: 2 proxy nodes (Cluster), 5 storage nodes (Large)
#2: 2 proxy nodes (Cluster), 10 storage nodes (Large)
#3: 2 proxy nodes (High-CPU), 5 storage nodes (Large) (throughput/$: 12.9)
Storage nodes: all top configurations use Large. The request rate is low, putting little load on the CPU; the Large Instance is sufficient for this workload and cheaper than High-CPU.
Proxy nodes: the traffic pattern is low throughput but high network bandwidth, e.g. 70 op/s translate to 350 MB/s of incoming and 350 MB/s of outgoing traffic. The 1 GE of High-CPU is under-provisioned; the 10 GE of Cluster pays off for this workload. 5 Large-based storage nodes can nearly saturate the 2 Cluster-based proxy nodes.
An Oracle White Paper September 2011. Oracle Exadata Database Machine - Backup & Recovery Sizing: Tape Backups
An Oracle White Paper September 2011 Oracle Exadata Database Machine - Backup & Recovery Sizing: Tape Backups Table of Contents Introduction... 3 Tape Backup Infrastructure Components... 4 Requirements...
Hadoop Hardware @Twitter: Size does matter. @joep and @eecraft Hadoop Summit 2013
Hadoop Hardware : Size does matter. @joep and @eecraft Hadoop Summit 2013 v2.3 About us Joep Rottinghuis Software Engineer @ Twitter Engineering Manager Hadoop/HBase team @ Twitter Follow me @joep Jay
StorReduce Technical White Paper Cloud-based Data Deduplication
StorReduce Technical White Paper Cloud-based Data Deduplication See also at storreduce.com/docs StorReduce Quick Start Guide StorReduce FAQ StorReduce Solution Brief, and StorReduce Blog at storreduce.com/blog
Distributed File System. MCSN N. Tonellotto Complements of Distributed Enabling Platforms
Distributed File System 1 How do we get data to the workers? NAS Compute Nodes SAN 2 Distributed File System Don t move data to workers move workers to the data! Store data on the local disks of nodes
Cloud Server Performance A Comparative Analysis of 5 Large Cloud IaaS Providers
Cloud Server Performance A Comparative Analysis of 5 Large Cloud IaaS Providers Cloud Spectator Study June 5, 2013 With a lack of standardization in the IaaS industry, providers freely use unique terminology
Lecture 5: GFS & HDFS! Claudia Hauff (Web Information Systems)! [email protected]
Big Data Processing, 2014/15 Lecture 5: GFS & HDFS!! Claudia Hauff (Web Information Systems)! [email protected] 1 Course content Introduction Data streams 1 & 2 The MapReduce paradigm Looking behind
Scaling in the Cloud with AWS. By: Eli White (CTO & Co-Founder @ mojolive) eliw.com - @eliw - mojolive.com
Scaling in the Cloud with AWS By: Eli White (CTO & Co-Founder @ mojolive) eliw.com - @eliw - mojolive.com Welcome! Why is this guy talking to us? Please ask questions! 2 What is Scaling anyway? Enabling
GraySort on Apache Spark by Databricks
GraySort on Apache Spark by Databricks Reynold Xin, Parviz Deyhim, Ali Ghodsi, Xiangrui Meng, Matei Zaharia Databricks Inc. Apache Spark Sorting in Spark Overview Sorting Within a Partition Range Partitioner
2009 Oracle Corporation 1
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material,
Data Centers and Cloud Computing. Data Centers
Data Centers and Cloud Computing Slides courtesy of Tim Wood 1 Data Centers Large server and storage farms 1000s of servers Many TBs or PBs of data Used by Enterprises for server applications Internet
Cloud Computing: Meet the Players. Performance Analysis of Cloud Providers
BASEL UNIVERSITY COMPUTER SCIENCE DEPARTMENT Cloud Computing: Meet the Players. Performance Analysis of Cloud Providers Distributed Information Systems (CS341/HS2010) Report based on D.Kassman, T.Kraska,
Hadoop & its Usage at Facebook
Hadoop & its Usage at Facebook Dhruba Borthakur Project Lead, Hadoop Distributed File System [email protected] Presented at the Storage Developer Conference, Santa Clara September 15, 2009 Outline Introduction
LDA, the new family of Lortu Data Appliances
LDA, the new family of Lortu Data Appliances Based on Lortu Byte-Level Deduplication Technology February, 2011 Copyright Lortu Software, S.L. 2011 1 Index Executive Summary 3 Lortu deduplication technology
Choosing Between Commodity and Enterprise Cloud
Choosing Between Commodity and Enterprise Cloud With Performance Comparison between Cloud Provider USA, Amazon EC2, and Rackspace Cloud By Cloud Spectator, LLC and Neovise, LLC. 1 Background Businesses
SOLUTION BRIEF. Resolving the VDI Storage Challenge
CLOUDBYTE ELASTISTOR QOS GUARANTEE MEETS USER REQUIREMENTS WHILE REDUCING TCO The use of VDI (Virtual Desktop Infrastructure) enables enterprises to become more agile and flexible, in tune with the needs
How to manage your OpenStack Swift Cluster using Swift Metrics Sreedhar Varma Vedams Inc.
How to manage your OpenStack Swift Cluster using Swift Metrics Sreedhar Varma Vedams Inc. What is OpenStack Swift Cluster? Cluster of Storage Server Nodes, Proxy Server Nodes and Storage Devices 2 Data
Future-Proofed Backup For A Virtualized World!
! Future-Proofed Backup For A Virtualized World! Prepared by: Colm Keegan, Senior Analyst! Prepared: January 2014 Future-Proofed Backup For A Virtualized World Like death and taxes, growing backup windows
IBM Spectrum Protect in the Cloud
IBM Spectrum Protect in the Cloud. Disclaimer IBM s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM s sole discretion. Information regarding
Variations in Performance and Scalability when Migrating n-tier Applications to Different Clouds
Variations in Performance and Scalability when Migrating n-tier Applications to Different Clouds Deepal Jayasinghe, Simon Malkowski, Qingyang Wang, Jack Li, Pengcheng Xiong, Calton Pu Outline Motivation
How AWS Pricing Works
How AWS Pricing Works (Please consult http://aws.amazon.com/whitepapers/ for the latest version of this paper) Page 1 of 15 Table of Contents Table of Contents... 2 Abstract... 3 Introduction... 3 Fundamental
SWIFT. Page:1. Openstack Swift. Object Store Cloud built from the grounds up. David Hadas Swift ATC. HRL [email protected] 2012 IBM Corporation
Page:1 Openstack Swift Object Store Cloud built from the grounds up David Hadas Swift ATC HRL [email protected] Page:2 Object Store Cloud Services Expectations: PUT/GET/DELETE Huge Capacity (Scale) Always
Design and Evolution of the Apache Hadoop File System(HDFS)
Design and Evolution of the Apache Hadoop File System(HDFS) Dhruba Borthakur Engineer@Facebook Committer@Apache HDFS SDC, Sept 19 2011 Outline Introduction Yet another file-system, why? Goals of Hadoop
MySQL performance in a cloud. Mark Callaghan
MySQL performance in a cloud Mark Callaghan Special thanks Eric Hammond (http://www.anvilon.com) provided documentation that made all of my work much easier. What is this thing called a cloud? Deployment
