Software Defined Storage No More Limits





RED HAT ARCHITECT SEMINARS @ WARSZAWA, 25 SEPTEMBER 2015. Software Defined Storage: No More Limits. Wojciech Furmankiewicz, Senior Solution Architect, Red Hat CEE, wojtek@redhat.com

WHAT HAPPENS IN AN INTERNET MINUTE

THE FORECAST By 2020, over 15 ZB of data will be stored; about 1.5 ZB are stored today.

STORAGE MARKET GROWTH FORECAST Software-Defined Storage is leading a shift in the global storage industry, with far-reaching effects. Gartner ("IT Leaders Can Benefit From Disruptive Innovation in the Storage Industry"): by 2016, server-based storage solutions will lower storage hardware costs by 50% or more. Gartner ("Innovation Insight: Separating Hype From Hope for Software-Defined Storage"): by 2020, between 70% and 80% of unstructured data will be held on lower-cost storage managed by SDS environments; by 2019, 70% of existing storage array products will also be available as software-only versions. SDS-P market size by segment (block storage, file storage, object storage, hyperconverged; source: IDC): $457B (2013), $592B (2014), $706B (2015), $859B (2016), $1,029B (2017), $1,195B (2018), $1,349B (2019). Market size is projected to increase approximately 20% year-over-year between 2015 and 2019.

A COUPLE OF STORAGE FACTS >50% average storage capacity CAGR (compound annual growth rate); >30% storage share of the total IT budget; 90:10 unstructured vs. structured data; 8 ct avg. legacy storage price per GB/month (*); 3 ct avg. cloud storage price per GB/month; 1-2 ct avg. open software-defined price per GB/month (*). (*) HW + SW + maintenance over a 3-year depreciation period.

THE PROBLEM Between 2010 and 2020, the growth of data far outpaces the IT storage budget. Existing systems don't scale, cost and complexity keep increasing, and organizations need to invest in new platforms ahead of time.

THE SOLUTION PAST: SCALE UP. FUTURE: SCALE OUT.

SCALE-OUT ARCHITECTURE Scale out performance, capacity, and availability. Each node contributes server resources (CPU, memory, I/O) and storage (capacity, performance) to a global namespace, a single consistent storage system; capacity can also be scaled up within each node. Key properties: single consistent storage system; global namespace; aggregates CPU, memory, and network capacity; deploys on RHEL-supported servers and directly connected storage; scales out linearly; scale out performance and capacity as needed.

SOLUTIONS FOR PETABYTE-SCALE OPERATORS Optimized for large-scale deployments, enhanced for flexibility and performance. Version 1.3 of Red Hat Ceph Storage is the first major release since Ceph joined the Red Hat Storage product portfolio, and incorporates feedback from customers who have deployed in production at large scale. Areas of improvement: robustness at scale, performance tuning, operational efficiency. Version 3.1 of Red Hat Gluster Storage contains many new features and capabilities aimed at bolstering data protection, performance, security, and client compatibility. New capabilities include: erasure coding, tiering, bit rot detection, NFSv4 client support.

GROWING INNOVATION COMMUNITIES Ceph community: 78 authors/mo, 1,500 commits/mo, 258 posters/mo; contributions from Intel, SanDisk, SUSE, and DTAG; Ceph Days are presented in cities around the world, along with quarterly virtual Ceph Developer Summit events. Gluster community: 41 authors/mo, 259 commits/mo, 166 posters/mo; over 11M downloads in the last 12 months. Increased development velocity, authorship, and discussion have resulted in rapid feature expansion.

RED HAT GLUSTER STORAGE

GLUSTER: SCALE-OUT ARCHITECTURE

GLUSTER: OVERVIEW A Red Hat Gluster Storage pool is built from Gluster nodes (virtual or physical), each exporting local bricks. Administrators manage the pool through the Red Hat Storage CLI and web console (over SSH and HTTPS), while users access data over NFS, CIFS, the FUSE-based native client, or the OpenStack Swift object interface.

GLUSTER: NODES, BRICKS, AND VOLUMES A volume is some number of bricks, clustered and exported with Gluster. Volumes have administrator-assigned names (which serve as the export names). A brick is a member of only one volume. A global namespace can have a mix of replicated and distributed volumes. Data in different volumes physically exists on different bricks. Volumes can be sub-mounted on clients using NFS, CIFS, and/or GlusterFS clients. The directory structure of the volume exists on every brick. [Diagram: Volume 1 (/shares) and Volume 2 (/archives) span three storage nodes whose exports (/export1 through /export5) contribute 3, 5, and 4 bricks respectively.]
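To make the brick/volume relationship concrete, here is a minimal, hypothetical Python sketch of how a distributed volume might place files across its bricks by hashing file names. It is an illustration only; Gluster's real distribute translator assigns hash ranges to bricks via extended attributes on directories, and the brick paths and volume name below are invented for the example.

```python
import hashlib

class DistributedVolume:
    """Toy model of a Gluster-style distributed volume: a named set of bricks."""

    def __init__(self, name, bricks):
        self.name = name          # administrator-assigned volume (export) name
        self.bricks = bricks      # e.g. ["node1:/export1", "node2:/export1", ...]

    def brick_for(self, filename):
        # Hash the file name and map it onto one brick; every client computes
        # the same answer, so no central lookup service is needed.
        digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
        return self.bricks[int(digest, 16) % len(self.bricks)]

# Hypothetical pool: three nodes each contributing one brick to volume "shares".
vol = DistributedVolume("shares", ["node1:/export1", "node2:/export1", "node3:/export1"])
for f in ["report.pdf", "photo.jpg", "notes.txt"]:
    print(f, "->", vol.brick_for(f))
```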

GLUSTER: DISTRIBUTED VOLUME

GLUSTER: REPLICATED VOLUME

GLUSTER: GEO-REPLICATION

GLUSTER: LINEAR PERFORMANCE - READ [Charts: aggregate read transfer rate (MB/s) and normalized read transfer rate (MB/s per server) versus number of servers (up to 16), for glusterfs and gluster-nfs clients at replica counts 1 and 2; read throughput scales roughly linearly with server count.]

GLUSTER: LINEAR PERFORMANCE - WRITE [Charts: aggregate write transfer rate (MB/s) and normalized write transfer rate (MB/s per server) versus number of servers (up to 16), for glusterfs and gluster-nfs clients at replica counts 1 and 2; write throughput scales roughly linearly with server count.]

BIT ROT DETECTION Detection of silent data corruption. Bit rot is data corruption resulting from silent hardware failures, and it degrades both performance and integrity. Red Hat Gluster Storage 3.1 provides a mechanism to scan data periodically and detect bit rot. Using the SHA256 algorithm, checksums are computed when files are accessed and compared against previously stored values. If they do not match, an error is logged for the storage admin.
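As an illustration of the signing-and-scrubbing idea (a minimal sketch, not Gluster's actual implementation; the checksum store and brick path below are assumptions for the example), a periodic scrubber can be written in a few lines of Python:

```python
import hashlib
import json
import os

CHECKSUM_DB = "checksums.json"   # hypothetical store of previously computed signatures

def sha256_of(path):
    """Compute the SHA256 digest of a file, streaming to keep memory bounded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scrub(root):
    """Compare current digests with stored ones; report mismatches as suspected bit rot."""
    stored = {}
    if os.path.exists(CHECKSUM_DB):
        with open(CHECKSUM_DB) as f:
            stored = json.load(f)
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digest = sha256_of(path)
            if path in stored and stored[path] != digest:
                print(f"BIT ROT SUSPECTED: {path}")  # a real system would alert the admin
            stored[path] = digest                     # sign (or re-sign) the file
    with open(CHECKSUM_DB, "w") as f:
        json.dump(stored, f)

# Example: scrub a brick directory (hypothetical path).
# scrub("/bricks/brick1")
```

A real implementation also has to distinguish legitimate file modifications from corruption, which is why Gluster re-signs files on writes rather than only at scrub time.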

SECURITY Scalable and secure NFSv4 client support. Using NFS-Ganesha, an NFS server implementation, Red Hat Gluster Storage 3.1 provides client access with simplified failover and failback in the case of a node or network failure. Supporting both NFSv3 and NFSv4 clients, NFS-Ganesha introduces ACLs for additional security, Kerberos authentication, and dynamic export management. [Diagram: multiple NFS clients connect through redundant NFS-Ganesha servers to the storage cluster.]

ROADMAP: RED HAT GLUSTER STORAGE
TODAY (v3.1), Gluster 3.7, RHEL 6 & 7: CORE - Erasure Coding, Tiering, Bit Rot Detection, Snap Schedule; FILE - Active/Active NFSv4, SMB 3 (basic subset); SEC - SELinux, SSL encryption (in-flight); MGMT - Device Management, Geo-Replication, Snapshots, Dashboard.
V3.2 (H1-2016), Gluster 3.8, RHEL 6 & 7: CORE - At-rest encryption, Inode quotas, Faster Self-heal, Controlled Rebalance; FILE - SMB 3 (advanced features), Multi-channel; MGMT - Dynamic provisioning of resources.
FUTURE (v4.0 and beyond), Gluster 4, RHEL 7: CORE - Compression, Deduplication, Highly scalable control plane, Next-gen replication/distribution; FILE - pNFS, QoS, Client-side caching; MGMT - New UI, Gluster REST API.

DETAIL: RED HAT GLUSTER STORAGE 3.1
MGMT Snapshots, Geo-replication: New support in the console for snapshotting and geo-replication features.
MGMT Device management, dashboard: Support in the console for discovery, format, and creation of bricks based on recommended best practices; an improved dashboard that shows vital statistics of pools.
CORE Tiering: New features to allow creation of a tier of fast media (SSDs, flash) that accompanies slower media, supporting policy-based movement of data between tiers and enhancing create/read/write performance for many small-file workloads.
CORE Bit rot detection: Ability to detect silent data corruption in files via signing and scrubbing, enabling long-term retention and archival of data without fear of bit rot.
CORE Snapshot scheduling: Ability to schedule periodic execution of snapshots easily, without the complexity of custom automation scripts.
CORE Backup hooks: Features that enable incremental, efficient backup of volumes using standard commercial backup tools, providing time savings over full-volume backups.
CORE Erasure coding: Introduction of erasure-coded volumes (dispersed volumes) that provide cost-effective durability and increase usable capacity when compared to standard RAID and replication.
These features were introduced in the most recent release of Red Hat Gluster Storage, and are now supported by Red Hat.

DETAIL: RED HAT GLUSTER STORAGE 3.1
PERF Small file: Optimizations to enhance small-file performance, especially with small-file create and write operations.
PERF Rebalance: Optimizations that result in enhanced rebalance speed at large scale.
SECURITY SELinux enforcing mode: Introduction of the ability to operate with SELinux in enforcing mode, increasing security across an entire deployment.
PROTOCOL NFSv4 (multi-headed): Support for data access via clustered, active-active NFSv4 endpoints, based on the NFS-Ganesha project.
PROTOCOL SMB 3 (subset of features): Enhancements to SMB 3 protocol negotiation, copy-data offload, and support for in-flight data encryption.
These features were introduced in the most recent release of Red Hat Gluster Storage, and are now supported by Red Hat.

RED HAT CEPH STORAGE

HISTORY OF CEPH 10 years in the making. 2004: project starts at UCSC. 2006: open sourced. 2010: mainline Linux kernel. 2011: OpenStack integration. May 2012: launch of Inktank. Sept 2012: production-ready Ceph. 2012: CloudStack integration. 2013: Xen integration. Oct 2013: Inktank Ceph Enterprise launch. Feb 2014: RHEL-OSP certification. Apr 2014: Inktank acquired by Red Hat.

TRADITIONAL STORAGE VS. CEPH Traditional enterprise storage: single purpose; hardware; single-vendor lock-in; hard scale limit. Ceph: multi-purpose, unified; distributed software; open; exabyte scale.

ARCHITECTURAL COMPONENTS RGW: a web services gateway for object storage, compatible with S3 and Swift (used by applications). RBD: a reliable, fully distributed block device with cloud platform integration (used by hosts and VMs). CEPHFS: a distributed file system with POSIX semantics and scale-out metadata management (used by clients). LIBRADOS: a library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP). RADOS: a software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors.
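Since LIBRADOS exposes RADOS directly to applications, a minimal sketch with the Python binding (python-rados) looks like the following; the pool name, object name, and configuration path are assumptions for the example and must exist on a real cluster.

```python
import rados

# Connect to the cluster using a local ceph.conf (assumed path) and its keyring.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Open an I/O context on an existing pool (hypothetical name "rbd").
    ioctx = cluster.open_ioctx("rbd")
    try:
        # Write an object directly into RADOS and read it back.
        ioctx.write_full("hello-object", b"stored via librados")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```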

CEPH UNIFIED STORAGE Object storage: S3 & Swift, multi-tenant, Keystone, geo-replication, erasure coding. Block storage: snapshots, clones, OpenStack, Linux kernel, tiering. File system: POSIX, Linux kernel, CIFS/NFS, HDFS, distributed metadata.

CEPH ARCHITECTURE [Diagram: object storage (S3/Swift, SDK), block storage (host/hypervisor, iSCSI), and file system (CIFS/NFS) interfaces sit on top of a cluster of nodes running object storage daemons (OSDs) and monitors.]

RADOS COMPONENTS OSDs: 10s to 10,000s in a cluster; one per disk (or one per SSD, RAID group, etc.); serve stored objects to clients; intelligently peer for replication and recovery. Monitors: maintain cluster membership and state; provide consensus for distributed decision-making; small, odd number; do not serve stored objects to clients.

OBJECT STORAGE DAEMONS [Diagram: each OSD sits on top of a local file system (btrfs, xfs, or ext4) on its own disk; monitors run alongside the OSDs.]

A METADATA SERVER? [Diagram: with a central metadata server, an application must first (1) query the server for an object's location and then (2) retrieve the data from the storage nodes.]

CALCULATED PLACEMENT [Diagram: the application computes the location of an object (e.g. F) directly; objects map to nodes responsible for ranges such as A-G, H-N, O-T, and U-Z, with no lookup server in the data path.]

EVEN BETTER: CRUSH [Diagram: an object is hashed and mapped by CRUSH onto a set of OSDs within the RADOS cluster.]

CRUSH: DYNAMIC DATA PLACEMENT CRUSH is a pseudo-random placement algorithm: fast calculation, no lookup; repeatable and deterministic; statistically uniform distribution; stable mapping with limited data migration on change; rule-based configuration that is infrastructure-topology aware, with adjustable replication and weighting.
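To illustrate the "calculate instead of look up" idea (a drastically simplified sketch, not the real CRUSH algorithm; the OSD count, weights, and replica count are invented for the example), placement can be computed deterministically by every client from nothing more than the object name and the cluster map:

```python
import hashlib
import math

# Hypothetical cluster map: OSD id -> weight (e.g. proportional to capacity).
CLUSTER_MAP = {0: 1.0, 1: 1.0, 2: 2.0, 3: 1.0}

def place(object_name, replicas=3):
    """Deterministically choose `replicas` distinct OSDs for an object.

    Every client runs the same calculation on the same cluster map and gets
    the same answer, so no central lookup table is needed. Real CRUSH also
    walks a hierarchy of failure domains (hosts, racks, rows) defined by rules.
    """
    scored = []
    for osd, weight in CLUSTER_MAP.items():
        h = hashlib.sha256(f"{object_name}:{osd}".encode()).hexdigest()
        u = (int(h[:16], 16) + 0.5) / float(1 << 64)  # pseudo-random in (0, 1)
        score = -weight / math.log(u)                  # weighted rendezvous hashing
        scored.append((score, osd))
    # Highest scores win; changing one OSD only remaps the objects it gains or loses.
    return [osd for _, osd in sorted(scored, reverse=True)[:replicas]]

print(place("vm-disk-0001"))  # same result on every client that shares the map
```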

FOCUSED SET OF USE CASES Analytics: Big Data analytics with Hadoop, machine data analytics with Splunk. Cloud infrastructure: virtual machine storage with OpenStack, object storage for tenant applications. Rich media and archival: cost-effective storage for media streaming, active archives. Sync and share: file sync and share with ownCloud. Enterprise virtualization: storage for conventional virtualization with RHEV.

VIRTUAL MACHINE STORAGE Ceph is the most widely deployed technology for OpenStack storage (1), integrating with the Keystone, Swift, Cinder, Glance, and Nova APIs through the Ceph Object Gateway (RADOS GW) and the Ceph Block Device (RBD) on Red Hat Ceph Storage.
FEATURES: full integration with Nova, Cinder, and Glance; single storage for images and for ephemeral and persistent volumes; copy-on-write provisioning; Swift-compatible object storage gateway; full integration with Red Hat Enterprise Linux OpenStack Platform.
BENEFITS: provides both volume storage and object storage for tenant applications; reduces provisioning time for new virtual machines; no data transfer of images between storage and compute nodes required; unified installation experience with Red Hat Enterprise Linux OpenStack Platform.
(1) http://superuser.openstack.org/articles/openstack-users-share-how-their-deployments-stack-up
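As a small illustration of the block-device layer those OpenStack services build on, the python-rbd binding can create and write an RBD image directly. This is a minimal sketch; the pool name, image name, size, and configuration path are assumptions for the example.

```python
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed config path
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")                   # hypothetical pool name
    try:
        # Create a 1 GiB image; Cinder and Nova do the equivalent when provisioning volumes.
        rbd.RBD().create(ioctx, "demo-volume", 1024 ** 3)
        image = rbd.Image(ioctx, "demo-volume")
        try:
            image.write(b"hello from RBD", 0)           # write at offset 0
            print(image.read(0, 14))                    # read the bytes back
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```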

OPENSTACK USER SURVEY [Chart: storage technologies used in OpenStack deployments, broken down by dev/QA, proof of concept, and production.]

STORAGE FOR TENANT APPLICATIONS Unstructured data storage for distributed, cloud-native applications: OpenStack tenant apps talk to the Ceph Object Gateway (RADOS GW) over the S3 API, backed by Red Hat Ceph Storage.
FEATURES: compatibility with S3 and Swift APIs; fully configurable replicated or erasure-coded storage backends; cache tiering pools; multi-site failover.
BENEFITS: supports a broad ecosystem of tools and applications built for the S3 API; provides a modern hot/warm/cold storage topology that offers cost-efficient performance; advanced durability at optimal price.
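Because the gateway speaks the S3 protocol, standard S3 tooling works against it. The sketch below uses boto3; the endpoint URL, credentials, and bucket name are placeholders for the example, not real values.

```python
import boto3

# Point a standard S3 client at the RADOS Gateway endpoint (placeholder values).
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="tenant-app-data")
s3.put_object(Bucket="tenant-app-data", Key="logs/app.log", Body=b"hello object storage")
print(s3.get_object(Bucket="tenant-app-data", Key="logs/app.log")["Body"].read())
```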

RICH MEDIA Massively scalable, flexible, and cost-effective storage for unstructured image, video, and audio content, built on Red Hat Gluster Storage or Red Hat Ceph Storage.
FEATURES: support for multi-petabyte storage clusters on commodity hardware; erasure coding and replication for capacity-optimized or performance-optimized pools; support for standard file and object protocols; snapshot and replication capabilities for high availability and disaster recovery.
BENEFITS: provides massive and linear scalability in on-premise or cloud environments; offers robust data protection with an optimal blend of price and performance; standard protocols allow access to broadcast content anywhere, on any device; cost-effective, high-performance storage for on-demand rich media content.

ACTIVE ARCHIVES Open source, capacity-optimized archival storage for unstructured file data, unstructured object data, and volume backups on commodity hardware, built on Red Hat Gluster Storage or Red Hat Ceph Storage.
FEATURES: cache tiering to enable "temperature"-based storage; erasure coding to support archive and cold storage use cases; support for industry-standard file and object access protocols.
BENEFITS: store data based on its access frequency; store data on premise or in a public or hybrid cloud; achieve durability while reducing raw capacity requirements and limiting cost; deploy on industry-standard hardware.

CEPH WITH RHEV Red Hat Enterprise Virtualization 3.6: RHEV hypervisors and RHEL nodes consume the Ceph Block Device (RBD) on top of Red Hat Ceph Storage (RADOS).

ARCHIVE/COLD STORAGE [Diagram: the application writes to a replicated cache pool on SSDs, which sits in front of an erasure-coded backing pool within the Ceph storage cluster.]

RED HAT CEPH STORAGE ROADMAP
v1.3 (June 2015), Ceph Hammer: CORE - OSD with SSD optimization, more robust rebalancing, improved repair process, local and pyramid erasure codes; BLOCK - improved read IOPS, faster booting from clones; OBJECT - S3 object versioning, bucket sharding; MGMT - Foreman/Puppet installer, CLI :: Calamari API parity, multi-user and multi-cluster.
v1.3.z (Q3/Q4 2015), Ceph Hammer: CORE - LTTNG tracepoints, SELinux; OBJECT - Swift storage policies; MGMT - Puppet modules (Tech Preview).
v2.0 and beyond (2016), Ceph Jewel: CORE - performance consistency, guided repair, new backing store (Tech Preview); BLOCK - iSCSI, mirroring; OBJECT - NFS, active/active multi-site; MGMT - new UI, alerts.

DETAIL: RED HAT CEPH STORAGE 1.3
CORE OSD with SSD optimization: Performance improvements for both read and write operations, especially applicable to configurations including all-flash cache tiers.
CORE More robust rebalancing: Improved rebalancing that prioritizes repair of degraded data over rebalancing of sub-optimally placed data; optimized data placement and improved utilization reporting and management that deliver better distribution of data.
CORE Local/pyramid erasure codes: Inclusion of locally stored parity (within a rack or data center) that reduces the network bandwidth required to repair degraded data.
MGMT Foreman/Puppet installer: Support for deployment of new Ceph clusters using Foreman and the provided Puppet modules.
MGMT CLI :: Calamari API parity: Improvements to the Calamari API and command-line interface that enable administrators to perform the same set of operations through each.
MGMT Multi-user and multi-cluster: Support in the Calamari interface for multiple administrator accounts and multiple deployed clusters.
These features were introduced in the most recent release of Red Hat Ceph Storage, and are now supported by Red Hat.

DETAIL: RED HAT CEPH STORAGE 1.3
BLOCK Improved read IOPS: Introduction of allocation hints, which reduce file system fragmentation over time and ensure IOPS performance throughout the life of a block volume.
BLOCK Faster booting from clones: Addition of copy-on-read functionality to improve initial and subsequent write performance for cloned volumes.
OBJECT S3 object versioning: New versioning of objects that helps users avoid unintended overwrites and deletions and allows them to archive objects and retrieve previous versions.
OBJECT Bucket sharding: Sharding of buckets in the Ceph Object Gateway to improve metadata operations on buckets with a large number of objects.
OBJECT New RGW implementation: A new implementation of the Ceph Object Gateway that uses a Civetweb-based embedded web server to simplify installation and upgrades.
These features were introduced in the most recent release of Red Hat Ceph Storage, and are now supported by Red Hat.

DETAIL: RED HAT CEPH STORAGE TUFNELL
CORE Performance consistency: More intelligent scrubbing policies and improved peering logic to reduce the impact of common operations on overall cluster performance.
CORE Guided repair: More information about objects will be provided to help administrators perform repair operations on corrupted data.
CORE New backing store (Tech Preview): New backend for OSDs to provide performance benefits on existing and modern drives (SSD, K/V).
MGMT New UI: A new user interface with improved sorting and visibility of critical data.
MGMT Alerting: Introduction of alerting features that notify administrators of critical issues via email or SMS.
These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.

DETAIL: RED HAT CEPH STORAGE TUFNELL
BLOCK iSCSI: Introduction of a highly available iSCSI interface for the Ceph Block Device, allowing integration with legacy systems.
BLOCK Mirroring: Capabilities for managing virtual block devices in multiple regions, maintaining consistency through automated mirroring of incremental changes.
OBJECT NFS: Access to objects stored in the Ceph Object Gateway via standard Network File System (NFS) endpoints, providing storage for legacy systems and applications.
OBJECT Active/active multi-site: Support for deployment of the Ceph Object Gateway across multiple sites in an active/active configuration (in addition to the currently available active/passive configuration).
These projects are currently active in the Ceph development community. They may be available and supported by Red Hat once they reach the necessary level of maturity.

VISION: UNIFIED STORAGE MANAGER Web console: a browser interface designed for managing distributed storage. API: a full API for automation and integration with outside systems. Command line: a robust, scriptable command-line interface for expert operators. Together they provide full lifecycle management (provision, install, configure, tune, monitor) for distributed, software-defined data services: object store, virtual block device, and distributed file system on a storage cluster built from commodity hardware.

THE RED HAT STORAGE MISSION To offer a unified, open software-defined storage portfolio that delivers a range of data services for next-generation workloads, thereby accelerating the transition to modern IT infrastructures.

RED HAT ARCHITECT SEMINARS @ WARSZAWA, 25 SEPTEMBER 2015. Thank you! Wojciech Furmankiewicz, Senior Solution Architect, Red Hat CEE, wojtek@redhat.com
