Introduction to Gluster (Versions 3.0.x)

Table of Contents

Overview
Gluster File System
Gluster Storage Platform
No metadata with the Elastic Hash Algorithm
A Gluster Cluster
Gluster server
Gluster client
NFS and CIFS considerations
Choosing hardware
Disk storage
Hardware RAID
Volume Managers
File systems
Operating systems
A Gluster storage server
Network connections
Common cluster file distribution methods
Distribute-only (RAID 0)
Distribute over mirrors (RAID 10)
Stripe
Mixed environments
Data Flow in a Gluster Environment
Gluster File System client only
Accessing the Gluster Cluster via other protocols
Conclusion
Glossary
References and Further Reading

Introduction to Gluster

Gluster software simplifies storing and managing large quantities of unstructured data on commodity hardware. This guide explains the fundamental concepts required to understand Gluster. The intended audience includes:

- Storage architects
- Systems administrators
- IT management

It may be helpful to refer to the glossary at the end of this guide while reading.

Overview

Gluster provides open source [i] storage software that runs on commodity hardware. The Gluster File System is a distributed file system that aggregates disk and memory resources into a single pool of storage within a single namespace, accessed via multiple file-level protocols. Gluster uses a scale-out architecture in which storage resources are added in building-block fashion to meet performance and capacity requirements.

Gluster can be deployed in two ways: as user-space software installable on most major 64-bit Linux distributions, or as a software appliance that integrates the Gluster File System with an operating system and includes a web GUI for installation and management. Regardless of the packaging, the underlying Gluster File System code is the same.

Gluster File System

Gluster File System [ii] is the core of the Gluster solution. Distributed as RPM and Debian packages in addition to source code [iii], it is supported on most major 64-bit Linux distributions. As a user-space application it can be installed alongside existing applications, and the file system it provides can be re-exported via NFS, CIFS, WebDAV, FTP, and HTTP(S). Gluster File System supports locally attached, iSCSI, and Fibre Channel storage. Managed with a single command, it is a flexible distributed file system.

Gluster Storage Platform

Gluster Storage Platform [iv] is a comprehensive storage management offering. It includes its own Linux-based operating system layer as well as a GUI installer and a web interface for administration. The software is installed via USB onto 64-bit commodity hardware and uses all of the locally attached storage. Storage servers can be added easily and quickly.
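
Because Gluster File System is packaged for most major distributions, getting the software onto a storage server is typically a single package-installation step. The following is a minimal sketch for an RPM-based 64-bit system; the package file names are illustrative assumptions, and the current 3.0.x packages should be downloaded from the FTP site referenced in the endnotes.

    # Illustrative only: install the GlusterFS packages on a 64-bit
    # RPM-based distribution. Exact package names and versions vary;
    # see http://ftp.gluster.com/pub/gluster/glusterfs/ for the releases.
    rpm -ivh glusterfs-common-3.0.*.x86_64.rpm \
             glusterfs-server-3.0.*.x86_64.rpm \
             glusterfs-client-3.0.*.x86_64.rpm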

The Platform exports the Gluster File System protocol, NFS, and CIFS simultaneously. Gluster supports distributing, mirroring, and striping data, and can be configured to be completely redundant. To simplify administration, the Platform version ships with most of the configurable features already turned on, limiting configuration access to a subset of the features available in Gluster File System.

Figures 1 and 1a - Screenshots of the Gluster web interface.

No metadata with the Elastic Hash Algorithm

Unlike other distributed file systems, Gluster does not create, store, or use a separate index of metadata in any way. Instead, Gluster distributes and locates data in the cluster on the fly using the Elastic Hash Algorithm. The results of those calculations are dynamic values computed as needed by any one of the storage servers in the cluster, without the need to look up the information in a metadata index or to communicate with other storage servers. The performance, availability, and stability advantages of not using a metadata index are significant.
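
The actual Elastic Hash Algorithm is described in the architecture paper referenced at the end of this guide. The toy sketch below only illustrates the general idea that a file's placement can be computed from its path alone, with no lookup in a central index; the hash function, path, and server count are illustrative and are not Gluster's real scheme.

    # Toy illustration (not Gluster's actual algorithm): derive a server
    # index purely from the file path, so any node can compute placement
    # without consulting a metadata server.
    path="images/photo-0001.jpg"   # example path
    servers=6                      # example cluster size
    h=$((0x$(printf '%s' "$path" | md5sum | cut -c1-8)))
    echo "$path would be placed on server index $(( h % servers ))"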

A Gluster Cluster

A Gluster cluster is a collection of individual commodity servers and their associated back-end disk resources, exported as a POSIX-compliant file system via file-level protocols. All of the storage servers run either Gluster File System or Gluster Storage Platform. File locking is managed automatically and in a predictable way. Every storage server in the cluster is active, and any file in the entire namespace can be accessed from any server, using any protocol, simultaneously. Gluster implements the distributed file system with two pieces of software: the Gluster server and the Gluster client.

Gluster server

The Gluster server joins all of the physical storage servers into an all-active cluster and exports the combined disk space of all of the servers as the Gluster file system, NFS, CIFS, DAV, FTP, or HTTP(S).

Gluster client

The optional Gluster client implements highly available, massively parallel access to every storage node in the cluster simultaneously. With the Gluster client installed on existing application servers, failure of any single node is completely transparent. The Gluster client presents a completely POSIX-compliant file system. The Gluster client is available for 64-bit Linux systems only and is strongly recommended over the other protocols whenever possible.

NFS and CIFS considerations

If using the Gluster File System client is not feasible, Gluster fully supports both NFS and CIFS; however, there are some challenges with these protocols. Each of them requires additional software: a load-balancing layer, usually RRDNS, and a high-availability layer, usually UCARP or CTDB, both of which add complexity to the storage environment. Unlike with the Gluster File System client, under certain configurations the failure of a node can result in an application error. In a mirrored environment using UCARP or CTDB, application servers will automatically reconnect to another storage server. Because neither NFS nor CIFS enables parallel access to clustered storage, they are generally slower than the Gluster File System client. A sketch of the load-balancing arrangement these protocols require follows.
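
The following is a minimal sketch of what the customer-supplied load-balancing and high-availability layer might look like; the host name, addresses, volume name, and share name are illustrative assumptions and are not provided by Gluster itself.

    # Hypothetical setup: "storage.example.com" has several A records
    # (round-robin DNS), one per Gluster storage server, and UCARP or CTDB
    # floats a virtual IP between the servers for failover, for example:
    #
    #   storage.example.com. IN A 192.168.10.11
    #   storage.example.com. IN A 192.168.10.12
    #   storage.example.com. IN A 192.168.10.13
    #
    # An application server then mounts the cluster over plain NFS or CIFS
    # through that name (volume and share names are illustrative):
    mount -t nfs  storage.example.com:/volname  /mnt/gluster-nfs
    mount -t cifs //storage.example.com/volname /mnt/gluster-cifs -o guest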

Choosing hardware

Gluster runs on commodity, user-supplied hardware and, in the case of Gluster File System, on a customer-installed operating system layer and standard file systems. Each storage server in the Gluster cluster can have different hardware. For mirror pairs we suggest the same amount of disk space per server: much like a standard RAID 1, Gluster will not expose the additional space on the larger member of a mirror.

Disk storage

Gluster File System supports the following storage attachment options:

- Locally attached (SAS, SATA, JBOD, etc.)
- Fibre Channel
- iSCSI
- InfiniBand

Gluster Storage Platform only supports locally attached storage and does not support software RAID. With any type of storage, Gluster recommends hardware RAID; software RAID will work, but there are performance issues with that approach. Gluster itself only provides redundancy at the server level, not at the individual disk level.

Hardware RAID

For data availability and integrity reasons, Gluster recommends RAID 6 or RAID 5 for general use cases. For high-performance computing applications, RAID 10 is recommended.

Volume Managers

Volume managers are fully supported but are not required. LVM2, ZFS, and Symantec Storage Foundation volume managers are known to work well with Gluster. Gluster fully supports snapshots created by a volume manager.

File systems

All POSIX-compliant file systems that support extended attributes will work with Gluster, including:

- ext3
- ext4
- ZFS
- XFS (can be slow)
- btrfs (experimental)

Gluster tests with and recommends ext3, ext4, and ZFS. There are known challenges with other file systems. For kernel versions 2.6.30 or below, Gluster recommends ext3; for kernel versions 2.6.31 and above, ext4 is the suggested option. Customers planning to use a file system other than ext3 or ext4 should contact Gluster.

Operating systems

Gluster File System supports most 64-bit Linux environments, as well as Solaris (server only). Supported distributions include:

- RHEL (5.1 or newer)
- CentOS
- Fedora
- Ubuntu
- Debian
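
Putting the disk, RAID, and file-system recommendations above together, preparing the local file system that a Gluster server will export might look like the following sketch; the device name, mount point, and file-system choice are illustrative assumptions.

    # Illustrative brick preparation on a hardware-RAID-backed device
    # (device and paths are examples; ext3 is equally valid on older kernels).
    mkfs.ext4 /dev/sdb1
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1
    echo '/dev/sdb1  /export/brick1  ext4  defaults  0 2' >> /etc/fstab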

A Gluster storage server

Figure 2 - Anatomy of a typical, single Gluster storage server: an illustration of all of the components, both required and optional. Each component in the diagram is marked as required, strongly recommended, or optional.

From the exported protocols down to the individual disks, the layers of a single storage server are:

- Exported access protocols: the Gluster File System service plus NFS, CIFS, HTTP(S), WebDAV, and (S)FTP
- Gluster server: RPM and DEB packages, or installed from source
- Public network: 1Gb, 10Gb, or InfiniBand
- Storage server network: 1Gb, 10Gb, or InfiniBand
- A 64-bit *nix distribution: RHEL >= 5.1 recommended; Fedora, Debian, CentOS, Ubuntu supported
- Any number of file systems of 16TB or less: ext3 or ext4 recommended; JFS, ReiserFS supported
- Volume manager: LVM2, ZFS
- Hardware RAID: RAID 6 recommended, RAID 1+0 for HPC applications
- Disk storage: local to host; SAS, SATA, SCSI-attached JBODs, and Fibre Channel supported; iSCSI supported only in low-I/O, limited-performance applications

Network connections

Gluster supports:

- 1GbE and bonded 1GbE interfaces
- 10GbE and bonded 10GbE interfaces
- InfiniBand SDR, QDR, and DDR, using IB-Verbs (recommended) or IPoIB

Performance is typically constrained by network connectivity. If a 1GbE interconnect is not sufficient, Gluster also supports 10GbE and InfiniBand connections.

If clients are not using the Gluster File System client, server-to-server network communication can consume network resources and impact performance. In these situations, Gluster recommends a second, dedicated back-end network for Gluster server communications.
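
For the bonded-interface options above, the exact configuration is distribution-specific. The following is only a hedged sketch of what a RHEL/CentOS-style bonded 1GbE back-end interface might look like; the interface names, addresses, and bonding options are assumptions to be adapted to your environment.

    # Hypothetical RHEL/CentOS-style bonded back-end interface, e.g. in
    # /etc/sysconfig/network-scripts/ifcfg-bond0 (names, addresses, and
    # bonding options are assumptions; consult your distribution's docs):
    #
    #   DEVICE=bond0
    #   BOOTPROTO=none
    #   ONBOOT=yes
    #   IPADDR=192.168.20.11
    #   NETMASK=255.255.255.0
    #   BONDING_OPTS="mode=802.3ad miimon=100"
    #
    # Enslave two physical 1GbE ports to the bond in their own ifcfg files
    # (MASTER=bond0, SLAVE=yes) and restart networking:
    service network restart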

Common cluster file distribution methods

These common use cases help illustrate how Gluster might be implemented:

- Distribute-only (RAID 0)
- Distribute over mirrors (RAID 10)
- Stripe
- Mixed environments

Distribute-only (RAID 0) [v]

In this example (shown in figure 3), Gluster distributes files evenly among the storage servers using the Elastic Hash Algorithm described above. Each file is stored only once. The advantages of a distribute-only Gluster cluster are lower storage costs and the fastest possible write speeds. There is no fault tolerance in this approach. Should a storage server fail:

- Reads from the failed storage server will fail.
- Writes destined for the failed storage server will fail.
- Reads and writes on all other storage servers will continue without interruption.

Figure 3 - Distribute-only. In a six-storage-server Gluster cluster running only distribute, the Elastic Hash Algorithm will place roughly one sixth (~17%) of the files in the file system onto each server. Any single file will be stored on a single storage server.
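
In the 3.0.x series, volume configurations like the one in figure 3 are generated with the glusterfs-volgen tool rather than a volume-management CLI. The sketch below assumes four servers each exporting a local directory; the hostnames, export paths, and volume name are illustrative, and the exact options should be confirmed against the glusterfs-volgen man page.

    # Sketch: generate server and client volume files for a distribute-only
    # volume across four storage servers (distribute is assumed to be the
    # default layout when no --raid option is given; check the man page).
    glusterfs-volgen --name store1 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1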

Distribute over mirrors (RAID 10)

In this scenario (illustrated in figure 4), each storage server is replicated to another storage server using synchronous writes. The benefit of this strategy is full fault tolerance: failure of a single storage server is completely transparent to Gluster File System clients. In addition, reads are spread across all members of the mirror. With Gluster File System there can be an unlimited number of members in a mirror; with Gluster Storage Platform there can be only two members. The redundancy and performance gained by distributing over mirrors come at the price of increased storage costs. Mirroring without distributing is supported on Gluster clusters with only two storage servers.

Figure 4 - Distribute over mirrors. In a six-storage-server Gluster cluster running distribute over mirrors (RAID 10), the Elastic Hash Algorithm will place roughly one third (~33%) of the files in the file system onto each mirror pair. Any single file will be stored on two storage servers.

Stripe

Gluster supports striping individual files across multiple storage servers [vi], something like a more traditional RAID 0. Stripe is appropriate for use cases with very large files (a minimum of 50GB), very limited writes, and simultaneous access from many clients. The default stripe size is 128KB; this means that any file smaller than that will always reside entirely on the first storage server rather than being spread across multiple servers. Layering distribute or mirror above or below stripe is not supported.

Mixed environments

Any single, logical Gluster cluster is limited to a single distribution method; however, it is possible to run multiple logical Gluster clusters on one set of hardware. By creating multiple volume specification files, each using a different port number, it is possible to run clusters with different distribution types without additional hardware [vii], as sketched below. The Gluster client mounts the multiple file systems, making the multiple instances and their various port numbers transparent to any application servers. This approach also takes advantage of improved processor parallelism on the storage servers.
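
The following sketch shows how the two configurations above might be generated with glusterfs-volgen. The --raid 1 (mirrored) option and the -p port option are assumptions based on the 3.0.x tool and should be checked against its man page; hostnames, paths, ports, and volume names are illustrative.

    # Sketch: a distribute-over-mirrors (RAID 10 style) volume across six
    # servers, pairing the bricks into mirrors.
    glusterfs-volgen --name mirrored --raid 1 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1 \
        server5:/export/brick1 server6:/export/brick1

    # Sketch: a second, independent logical cluster on the same hardware,
    # listening on a different port (see the glusterfs-volgen man page, -p).
    glusterfs-volgen --name scratch --raid 1 -p 7001 \
        server1:/export/brick2 server2:/export/brick2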

Data Flow in a Gluster Environment

Depending on whether or not the Gluster File System client is used, there are two different ways that data flows within a Gluster environment.

Gluster File System client only

Figure 5 - Data flow using the GlusterFS client only.

Using the Gluster File System client, every application server maintains parallel connections to every Gluster storage server. Applications access the file system via the standard POSIX interface while the Gluster client transparently enables massively parallel access to the file system. This architecture is extremely scalable yet easy to deploy. In addition to its performance advantages, it also offers robust fault tolerance: even if a storage server fails during I/O, application servers are not aware of the event as long as mirroring has been set up. Gluster supports data access via the Gluster File System and any other protocol simultaneously.
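
On the application servers, this access path is simply a mount of the client volume file produced when the volume was generated. The sketch below assumes a 3.0.x-style client volfile distributed to each application server; the file name and mount point are illustrative.

    # Sketch: mount the cluster with the native Gluster File System client,
    # using a client volfile copied to the application server (path is an
    # example); an equivalent /etc/fstab entry makes the mount persistent.
    mount -t glusterfs /etc/glusterfs/store1-tcp.vol /mnt/gluster
    echo '/etc/glusterfs/store1-tcp.vol  /mnt/gluster  glusterfs  defaults  0 0' >> /etc/fstab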

Accessing the Gluster Cluster via other protocols

Figure 6 - Data flow using NFS, CIFS, DAV, FTP, or HTTP(S).

The following is the process for accessing files using protocols other than the native Gluster File System:

1. The application server (client) connects to a customer-supplied load-balancing layer (usually RRDNS and/or UCARP or CTDB).
2. The load balancer directs the request to one of the storage servers in the cluster.
3. If the file is not on the selected storage server, the Gluster server requests the file from the correct storage server.
4. The file is then sent to the selected server.
5. The file is then sent on to the application server (client); note that the application server starts receiving data immediately and does not wait for the entire file to be delivered.

The back-end file access process is completely transparent to the application server regardless of the protocol. Any client can connect to any Gluster storage server for any file. If the storage server fails during I/O, the client will receive a connection reset. In addition, this approach can create significant server-to-server communication, so a dedicated, private network for storage-server-to-storage-server traffic is typically used. Gluster supports accessing the same file via any protocol simultaneously. File locking is managed at the file and byte-range level in a POSIX-compliant manner.

Conclusion

By delivering increased performance, scalability, and ease of use in concert with reduced cost of acquisition and maintenance, the Gluster Storage Platform is a revolutionary step forward in data management. The complete elimination of a separate metadata index is at the heart of many of its fundamental advantages, including its remarkable resilience, which dramatically reduces the risk of data loss, data corruption, or data becoming unavailable.

To download a copy of Gluster File System and Gluster Storage Platform, visit us at http://gluster.com/products/download.php. If you would like to start a 60-day proof of concept, including direct access to Gluster support engineers from architecture and design through testing and tuning, please visit Gluster at http://gluster.com/products/poc.php. To speak with a Gluster representative about how to solve your particular storage challenges, phone us at +1 (800) 805-5215.

Glossary

Block storage: Block special files or block devices correspond to devices through which the system moves data in the form of blocks. These device nodes often represent addressable devices such as hard disks, CD-ROM drives, or memory regions. [viii] Gluster supports most POSIX-compliant block-level file systems with extended attributes; examples include ext3, ext4, and ZFS.

Distributed file system: Any file system that allows access to files from multiple hosts sharing via a computer network. [ix]

Metadata: Data providing information about one or more other pieces of data. [x]

Namespace: An abstract container or environment created to hold a logical grouping of unique identifiers or symbols. [xi] Each Gluster cluster exposes a single namespace as a POSIX mount point that contains every file in the cluster.

POSIX: "Portable Operating System Interface [for Unix]" is the name of a family of related standards specified by the IEEE to define the application programming interface (API), along with shell and utilities interfaces, for software compatible with variants of the Unix operating system. [xii] Gluster exports a fully POSIX-compliant file system.

RAID: Redundant Array of Inexpensive Disks, a technology that provides increased storage reliability through redundancy by combining multiple low-cost, less-reliable disk drive components into a logical unit in which all drives in the array are interdependent. [xiii]

RRDNS: Round Robin Domain Name Service, a method of distributing load across application servers. It is implemented by creating multiple A records with the same name and different IP addresses in the zone file of a DNS server.

Userspace: Applications running in user space do not interact with hardware directly, instead using the kernel to moderate access. Userspace applications are generally more portable than applications in kernel space. Gluster is a user-space application.

References and Further Reading

CTDB: CTDB is primarily developed around the concept of having a shared cluster file system across all the nodes in the cluster to provide the features required for building a NAS cluster. http://ctdb.samba.org/

Elastic Hash Algorithm: A more detailed explanation of the Elastic Hash Algorithm is available at http://ftp.gluster.com/pub/gluster/documentation/gluster_architecture.pdf

UCARP: More information and configuration guides for UCARP are available at http://www.ucarp.org/project/ucarp

Endnotes

[i] GlusterFS is released under the GNU General Public License v3 or later. Documentation is released under the GNU Free Documentation License v1.2 or later.
[ii] GlusterFS is available at http://ftp.gluster.com/pub/gluster/glusterfs/
[iii] The Gluster source code repository is at http://git.gluster.com/. It is strongly suggested that users only install Gluster from the source and packages at http://ftp.gluster.com; the code in the Git repository has not been through the QA process.
[iv] GlusterSP is available at http://ftp.gluster.com/pub/gluster/gluster-platform/
[v] To create a distributed file system using the GlusterSP web interface, choose None as the volume type.
[vi] Striping requires at least four storage servers.
[vii] See the glusterfs-volgen man page, -p option, for more information.
[viii] http://en.wikipedia.org/wiki/device_file_system#block_devices
[ix] http://en.wikipedia.org/wiki/distributed_file_system
[x] http://en.wikipedia.org/wiki/metadata#metadata_definition
[xi] http://en.wikipedia.org/wiki/namespace_%28computer_science%29
[xii] http://en.wikipedia.org/wiki/posix
[xiii] http://en.wikipedia.org/wiki/raid

Version 1.4, 9/22/10
Copyright 2010, Gluster, Inc.