TECHNICAL WHITE PAPER: ELASTIC CLOUD STORAGE SOFTWARE ARCHITECTURE




Deploy a modern hyperscale storage platform on commodity infrastructure

ABSTRACT

This document provides a detailed overview of the EMC Elastic Cloud Storage (ECS) software architecture. ECS is a geo-scale cloud storage platform that delivers cloud-scale storage services, global access, and operational efficiency at scale.

February 2015

EMC WHITE PAPER

To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store.

Copyright © 2015 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware is a registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

Part Number H13518.1

TABLE OF CONTENTS

EXECUTIVE SUMMARY
AUDIENCE
ECS UNSTRUCTURED STORAGE ENGINE
  Storage Engine Design and Operation
  ECS Write, Read, and Update Paths
  Erasure Coding
  Storage Efficiency for Large and Small Files
  Geo Protection
  Multi-site Access
CONCLUSION
REFERENCES

EXECUTIVE SUMMARY

According to the 2014 IDC EMC Digital Universe Study, the digital universe is doubling in size every two years. The amount of data we create and copy will reach 44 zettabytes (44 trillion gigabytes) by 2020. Over two thirds of those 44 zettabytes is unstructured data generated by end users, yet 85% of that user-generated content is managed by an enterprise IT department at some point in its lifecycle. This explosive data growth is forcing companies of all sizes to explore more efficient data storage strategies.

It is more than data growth, however, that mandates a new approach to information storage. The universe of data is becoming more diverse and valuable; it is not just spreadsheets and Word documents. Seemingly everything is connected to a network and creates and consumes varying amounts of data. Thermostats, running shoes, wristbands, glasses, traffic lights: every device is now a smart device with information that can provide valuable insight into our interconnected world or deliver a competitive edge. Businesses are using new development frameworks and analytics platforms to rapidly build mobile, cloud, and information-based applications that can seamlessly and quickly access and analyze globally distributed data.

Data growth and modern applications mandate that businesses evolve their information storage to address three critical imperatives:

- Deliver storage services at cloud scale. Both enterprises and service providers increasingly need to scale seamlessly across data centers and protect critical data globally.
- Meet new application and user demands. New applications and users demand instant access to globally distributed content from any device or application anywhere in the world.
- Drive operational efficiency at scale. Storage systems must efficiently store petabytes of unstructured data of varying types and optimize storage utilization.
This white paper provides a technical overview of the EMC Elastic Cloud Storage (ECS) unstructured storage engine architecture. ECS is a purpose-built, geo-scale cloud storage platform featuring patent-pending technology that meets today's requirements for cloud-scale storage services, modern applications with global access requirements, and operational efficiency at scale.

AUDIENCE

This white paper is intended for solution architects, CIOs, storage administrators, developers, and application architects. It provides a detailed technical discussion of the ECS software architecture, geo capabilities, and multi-site access.

ECS UNSTRUCTURED STORAGE ENGINE

ECS software supports the storage, manipulation, and analysis of unstructured data on a massive scale on commodity infrastructure. ECS delivers a complete cloud storage platform with multi-tenancy, metering, and self-service provisioning. Customers can buy ECS either as a complete storage appliance with bundled hardware (EMC ECS Appliance) or as a software product (ECS Software) that can be installed on customer-provided servers and EMC-certified commodity disks.

ECS features an unstructured storage engine that supports Object and HDFS data services today and, in the near future, file services.

- Object: The ECS Object Service enables you to store, access, and manipulate unstructured data as objects on commodity-based systems, such as the HP SL4540 and the ECS Appliance. The Object Service is compatible with the Amazon S3, OpenStack Swift, EMC Atmos, and Centera CAS APIs.
- HDFS: The HDFS Service provides support for the Hadoop Distributed File System (HDFS). ECS HDFS uses Hadoop as a protocol rather than as a file system; the HDFS data service presents Hadoop-compatible file system access to ECS unstructured data.
Using a drop-in client on the compute nodes of an existing Hadoop implementation, organizations can extend existing Hadoop queries and applications to unstructured data in ECS and gain new business insights without ETL (Extract, Transform, Load) or other processes to move the data to a dedicated cluster.

This paper details the architecture of the ECS unstructured storage engine. The unstructured storage engine is the primary component that ensures data availability and protects against data corruption, hardware failures, and data center disasters. The engine exposes multi-head access across object and HDFS and allows access to the same data concurrently through multiple

protocols. For example, an application can write an object and read it through HDFS, or vice versa. Today ECS supports object and HDFS interfaces; in the future it will add support for NFS. Multi-head access is built on a distributed storage engine that provides high availability and scalability, manages transactions and persistent data, and protects data against failures, corruption, and disasters.

Figure 1. The ECS Unstructured Storage Engine (* in a future release of ECS software)

STORAGE ENGINE DESIGN AND OPERATION

The ECS storage engine writes object-related data (such as user data, metadata, and object location data) to logical containers of contiguous disk space known as chunks.

Design Principles

Key design principles of the ECS storage engine include the following:

- Stores all types of data and index information in chunks
  - Chunks are logical containers of contiguous space (128 MB)
  - Data is written in an append-only pattern
- Performs data protection operations on chunks
- Does not overwrite or modify data
- Does not require locking for I/O
- Does not require cache invalidation
- Has journaling, snapshots, and versioning natively built in

The storage engine writes data in an append-only pattern so that existing data is never overwritten or modified. This strategy improves performance because locking and cache validation are not required for I/O operations. All nodes can process write requests for the same object simultaneously while writing to different sets of disks. The storage engine tracks an object's location through an index that records the object's name, chunk ID, and offset. The object location index contains three location pointers before erasure coding takes place and multiple location pointers after erasure coding.
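The append-only chunk and location-index design described above can be illustrated with a minimal sketch. This is not ECS code; all class and method names (`Chunk`, `LocationIndex`, `append`, `record`, `locate`) are hypothetical, and the real engine distributes both structures across nodes.

```python
from dataclasses import dataclass, field

CHUNK_SIZE = 128 * 1024 * 1024  # ECS chunks are 128 MB logical containers


@dataclass
class Chunk:
    """Append-only container: data is added at the tail, never modified."""
    chunk_id: int
    data: bytearray = field(default_factory=bytearray)

    def append(self, payload: bytes) -> int:
        """Append payload and return its offset; a full chunk is sealed."""
        if len(self.data) + len(payload) > CHUNK_SIZE:
            raise ValueError("chunk full; allocate a new chunk")
        offset = len(self.data)
        self.data.extend(payload)
        return offset


class LocationIndex:
    """Maps object name -> (chunk ID, offset, length), as in the text."""
    def __init__(self):
        self._entries = {}

    def record(self, name, chunk_id, offset, length):
        # An update simply repoints the entry; the old bytes stay in
        # place until garbage collection reclaims the chunk.
        self._entries[name] = (chunk_id, offset, length)

    def locate(self, name):
        return self._entries[name]


# No locking or cache invalidation is needed: every write lands at a
# fresh offset, and swapping the index pointer is the only shared change.
chunk = Chunk(chunk_id=1)
idx = LocationIndex()
payload = b"JPEG bytes of vacation.jpg"
off = chunk.append(payload)
idx.record("vacation.jpg", chunk.chunk_id, off, len(payload))
```

Because nothing is overwritten, concurrent writers to different offsets never conflict, which is the property the design principles above rely on.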

(Erasure coding is discussed in more detail later in this document.) The storage engine performs all storage operations (such as erasure coding and object recovery) on chunk containers. The following diagram illustrates the various services and layers in the storage engine and their functions.

Figure 2. ECS Storage Engine Service Layers

Each layer is distributed across all nodes in the system and is highly available and scalable. This storage engine architecture has the following capabilities:

- All nodes can process write requests for the same chunk simultaneously and write to different sets of disks.
- Throughput takes advantage of all spindles and NICs in a cluster.
- Payloads from multiple small objects are aggregated in memory and written in a single disk write.
- Storage for both small and large data is handled efficiently with the same protection overhead.

This architecture brings new levels of performance and efficiency to the storage of massive volumes of unstructured data.

ECS WRITE, READ, AND UPDATE PATHS

ECS Write Path

This section illustrates the ECS data flow when a user submits a create request. In this example, illustrated in Figure 3, a user submits a write request through one of the supported interfaces, and the storage engine stores the object's data and metadata on nodes and disks within a single site. The detailed steps follow.

1. An application submits a request to store an object named vacation.jpg.
2. The storage engine receives the request. It writes three copies of the object to chunk containers on different nodes in parallel. In this example, the storage engine writes the object to chunk containers on nodes 1, 5, and 8.
3. The storage engine writes the location of the chunks to the object location index.
4. The storage engine writes the object location index to three chunk containers on three different nodes.
In this example, the storage engine writes the object location index to chunk containers on nodes 2, 4, and 6. The storage engine chooses the index locations independently from the object replica locations.

5. When all of the chunks are written successfully, ECS acknowledges the write to the requesting application.

6. After ECS acknowledges the write and the chunks are full, the storage engine erasure-codes the chunk containers.

Figure 3. ECS Write Path

ECS Read Path

This section and Figure 4 illustrate the ECS data flow when a user submits a read request. At a high level, the storage engine uses the object location index to find the chunk containers storing the object. It then retrieves the erasure-coded fragments from multiple storage nodes in parallel and automatically reconstructs and returns the object to the user. The detailed steps follow.

1. The system receives a read request for vacation.jpg.
2. The ECS storage engine gets the location of vacation.jpg from the object location index.
3. ECS reads the data and sends it to the user.

Figure 4. ECS Read Path
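The write and read paths above can be sketched in a few lines. This is a simplified model under assumed names (`pick_nodes`, `write_object`, `read_object` are illustrative, not ECS APIs): three data replicas on one node set, three index replicas on an independently chosen node set, and an acknowledgment only after every replica is written.

```python
import random

NODES = list(range(8))  # an eight-node site, as in the example diagrams


def pick_nodes(k):
    """Choose k distinct nodes (hypothetical placement helper)."""
    return random.sample(NODES, k)


def write_object(name, data, storage, index):
    # Steps 2-4 of the write path: three data replicas, then three
    # index replicas on a different, independently chosen node set.
    data_nodes = pick_nodes(3)
    for n in data_nodes:
        storage.setdefault(n, {})[name] = data
    index_nodes = pick_nodes(3)
    for n in index_nodes:
        index.setdefault(n, {})[name] = data_nodes
    # Step 5: acknowledge only after every replica is durable.
    return {"acked": True, "data_nodes": data_nodes, "index_nodes": index_nodes}


def read_object(name, storage, index):
    # Read path: consult any index replica, then fetch from any data replica.
    any_index = next(iter(index.values()))
    node = any_index[name][0]
    return storage[node][name]


storage, index = {}, {}
ack = write_object("vacation.jpg", b"...", storage, index)
result = read_object("vacation.jpg", storage, index)
```

Choosing index locations independently of replica locations, as in step 4, means the loss of a data node never takes its own location metadata down with it.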

Update Path

ECS updates existing objects using byte-range updates, in which only the changed data is written; the entire object does not need to be rewritten. The storage engine then updates the object location index to point to the new location. Because the old location is no longer referenced by the index, the original data becomes available for garbage collection.

ERASURE CODING

All data in ECS is erasure coded except the index. The index maps objects to chunk locations and must be accessed frequently, so it is always kept as three copies for protection and is also cached in memory. Erasure coding provides storage efficiency without compromising data protection or access.

The ECS storage engine implements a Reed-Solomon 12/4 erasure-coding scheme in which a chunk is broken into 12 data fragments and 4 coding fragments. The resulting 16 fragments are dispersed across nodes at the local site, and the storage engine can reconstruct a chunk from any 12 of the 16 fragments. ECS requires a minimum of four nodes running the object service at a single site; the number of failures the system can tolerate depends on the number of nodes.

When a chunk is erasure-coded, the original chunk data is present as a single copy spread across the 16 fragments dispersed throughout the cluster. ECS can read an erasure-coded chunk directly without any decoding or reconstruction: to read a small object, ECS does not need to read all the fragments, but reads the object directly from the fragment that contains it. ECS uses the coding fragments for chunk reconstruction only when a hardware failure occurs.

Note: ECS erasure-codes only data chunks, not system index or metadata chunks. Those chunks are kept as three copies because their access pattern requires frequent reads of the index.

Figure 5. ECS Erasure Coding

Garbage Collection

In an append-only system, update and delete operations leave behind blocks of unused data.
Garbage collection therefore operates at the chunk level: a background task reclaims unused chunks, maintaining the efficiency of the system.

STORAGE EFFICIENCY FOR LARGE AND SMALL FILES

ECS is adept at handling both a high volume of small files and very large files. ECS efficiently manages small files, for example files from 1 KB to 512 KB. Using a technique called box-carting, ECS can execute a large number of user transactions concurrently with very little latency, which enables ECS to support workloads with high transaction rates. When an application writes many small files with high I/O, ECS aggregates multiple requests in memory and writes them as one operation. ECS acknowledges all the writes at the same time, and only when the data is written to disk. This behavior improves performance by reducing round trips to the underlying storage.

ECS is also very efficient when handling very large files. All ECS nodes can process write requests for the same object simultaneously, and each node can write to a set of three disks. ECS never stores more than 30 MB of a single object in one chunk and supports multithreading. Throughput takes advantage of all spindles and NICs in a cluster, which enables applications to use the full bandwidth of every node in the system to achieve maximum throughput.
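The box-carting behavior described above, aggregating many small writes in memory and flushing them as one disk write, acknowledged together, can be sketched as follows. This is a toy model; the class and its flush threshold are illustrative, not ECS internals.

```python
class BoxCart:
    """Toy box-carting: buffer small payloads in memory, then flush
    them as a single contiguous append to a chunk."""

    def __init__(self, flush_bytes):
        self.flush_bytes = flush_bytes
        self.pending = []          # (name, payload) awaiting one disk write
        self.chunk = bytearray()   # stands in for the on-disk chunk
        self.disk_writes = 0       # physical writes actually issued
        self.acked = set()

    def submit(self, name, payload):
        self.pending.append((name, payload))
        if sum(len(p) for _, p in self.pending) >= self.flush_bytes:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        # One append covers every buffered payload.
        self.chunk.extend(b"".join(p for _, p in self.pending))
        self.disk_writes += 1
        # All writers are acknowledged together, and only after the
        # data reached "disk", matching the behavior described above.
        self.acked.update(name for name, _ in self.pending)
        self.pending.clear()


cart = BoxCart(flush_bytes=4096)
for i in range(64):
    cart.submit(f"obj-{i}", b"x" * 1024)   # sixty-four 1 KB objects
cart.flush()  # drain anything still buffered
```

Here 64 small objects cost only 16 physical writes (one per 4 KB of buffered payload), which is where the reduction in round trips to the underlying storage comes from.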

GEO PROTECTION

ECS geo-protection provides full protection against a site failure, should a disaster or other calamity force an entire site offline, and works across all types of ECS-supported hardware. A geo-protection layer in ECS protects data across geo-distributed sites and ensures that applications continue to function seamlessly in the event of a site failure. In addition to providing local protection, ECS efficiently replicates data across multiple sites with minimal overhead.

The geo-protection layer is a higher-level construct on top of the local protection provided by ECS software on commodity infrastructure. It has two main components: data protection, which ensures data survives a site failure, and a storage overhead management scheme, which reduces the number of copies kept in a multi-site environment.

Data Protection

To protect data across sites, ECS ensures that for every chunk written there is a backup chunk at another site. ECS always keeps a single backup copy at another site so that data remains available in case of a disaster. Figure 6 illustrates the backup of chunks: blue indicates the primary data written and gray indicates the backup chunk.

Figure 6. ECS Chunk Backup

Storage Overhead Management

Once data has been protected and the Object virtual pool contains more than two virtual data centers (VDCs), ECS starts background tasks to efficiently manage storage overhead. To reduce the number of copies in a multi-site deployment, ECS implements a contraction model. The data contraction operation is an XOR between two different data sets that produces a new data set, which is then kept while the original backup data is removed. In Figure 7, A and D are the original data sets; when contracted, they produce the value E.
The value E is then kept on site 2, and the backups of A and D are removed.

Figure 7. ECS Data Contraction Operation

In the case of a site failure, ECS reverses the XOR operation that produced E from data sets A and D by starting a data expansion process that retrieves the data affected by the failed site. In Figure 8, the failure of site 1 causes ECS to start a reconstruction process that sends data set D from site 3 back to site 2; using that expansion process, ECS recovers A.

Figure 8. ECS Data Expansion Operation

The following table shows the storage overhead introduced by ECS based on the number of sites.

Figure 9. ECS Storage Overhead

  Number of Data Centers    Overhead
  1                         1.33x
  2                         2.67x
  3                         2.00x
  4                         1.77x
  5                         1.67x
  6                         1.60x
  7                         1.55x
  8                         1.52x

Traditional approaches to protection make extra copies of the data, and the overall protection overhead increases with the number of additional sites. With ECS, storage efficiency scales the other way: the more data centers, the more efficient the protection. The ECS geo-protection model is unique in the industry and delivers three critical benefits:

- Data reliability and availability: ECS can tolerate one full-site disaster, plus two node failures in each site, when using commodity-based platforms with a minimum of eight nodes.
- Reduced WAN traffic and bandwidth: Data written to any site is always completely available locally, which avoids WAN traffic when recovering from node or disk failures.
- Optimized storage efficiency and access: An efficient algorithm enables roughly 1.8 copies across 4 sites without WAN read/write penalties.

MULTI-SITE ACCESS

ECS provides strong, consistent views of data regardless of where the data resides. With geo-protection enabled, applications can access data immediately through any ECS site, regardless of where the data was last written. For example, an application that writes object-1 to site A can access that data immediately from site B, and any update to object-1 at site B can be read instantly from site A. ECS ensures that the data returned for object-1 is always the latest version, whether it is accessed from site A or site B. ECS achieves this strong consistency by representing each bucket, object, directory, and file as an entity and applying the appropriate technique to each entity based on its traffic pattern. This minimizes WAN round trips, which reduces average latency.
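The overhead figures in the table above are consistent with a simple formula, inferred here from the table rather than quoted from the paper: local 12+4 erasure coding costs 16/12 ≈ 1.33x, and for N ≥ 2 sites the XOR-contracted cross-site backup multiplies that by N/(N−1). A short sketch reproduces the table (the published values appear truncated rather than rounded, e.g. 1.77 vs a computed 1.78):

```python
def ecs_overhead(sites: int) -> float:
    """Storage overhead as a function of site count. Formula inferred
    from the published table, not stated in the paper: 16/12 for the
    local 12+4 erasure coding, times N/(N-1) for the XOR-contracted
    cross-site backup when N >= 2."""
    local = 16 / 12                 # 12 data + 4 coding fragments
    if sites == 1:
        return local
    return local * sites / (sites - 1)


for n in range(1, 9):
    print(n, round(ecs_overhead(n), 2))
```

The N/(N−1) factor is why efficiency improves as sites are added: each site's backup load is XOR-shared across every other site, so the per-chunk backup cost shrinks toward the local 1.33x floor.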
Achieving Strong Consistency with Asynchronous Replication

In ECS, all replication of backup data is done asynchronously: after a write has committed, ECS returns immediately to the client, and a background asynchronous replication task then ensures the data is protected across sites. In the multi-site access model, ECS ensures that clients always get the latest version of the data, regardless of where that data resides.

Primary vs. Remote Data

ECS keeps the index in sync across all locations, which ensures there is always a single owner for any data written. Data written to a specific location is known as primary, and the backup data is known as remote.

Geo-Caching

Geo-caching is a key ECS capability that enables local performance for data that is not located at either the primary or the remote site.

A request to the primary location returns the local copy. A request to the remote site returns the remote copy if it is the latest version; the latest version is determined by keeping the global index in sync across all locations, as explained in more detail in the next section. If the remote copy is not the latest version, ECS reads the primary copy and returns it to the client.

In a scenario with three or more sites, where there is one primary and one remote copy, ECS uses a cache reserved in each site. If a client requests data from a third site that holds neither copy, that site immediately requests a copy from the primary site and caches it locally for subsequent access. This model ensures data can be accessed with local performance even if neither the primary nor the remote copy is originally present at a site. Geo-caching is also used for data that has been XOR'd: when data written at a primary site has been XOR'd into its backup at a remote site, the data is not read from the remote site; it is brought in from the primary site and cached locally for local access.

ECS Global Index

To explain in more detail how strong consistency works in ECS, we need to understand the ECS global index, which is key to the ECS architecture. The index is stored in chunks, like object data, and is replicated to all sites. This information is relatively small and enables other sites to execute most lookups locally. Whenever data is stored at a site, it becomes primary at that site, and that site becomes the index owner for the data. Consequently, any update to that data must contact the owning site to ensure strong consistency, which could incur WAN penalties. ECS avoids this by using lease and heartbeat techniques across sites, which improve performance and avoid WAN hops while maintaining the guarantee that an object has not been modified. This optimization is suspended while an object is being continuously updated; otherwise it remains in effect.

Achieving strong consistency over asynchronous replication is a fundamental capability of ECS that relieves application developers of dealing with the inconsistencies introduced by eventually consistent replication, which can otherwise affect application behavior.

CONCLUSION

Hyperscale means being able to massively scale a compute or storage environment in response to new demands. Hyperscale is now a necessity to support mobile, cloud, Big Data, and social applications and the massive amounts of data they create and consume. Hyperscale is achieved by using standardized, off-the-shelf components that, individually, do not provide performance and reliability; at scale, however, the pool of components, together with intelligent software, provides the necessary reliability and performance.

Most people associate hyperscale with large public cloud infrastructures such as Google, Facebook, and Amazon Web Services. EMC has changed that by making hyperscale capabilities and economics available in any data center. Software-defined storage makes it possible to bring cloud-scale storage services to commodity platforms, and ECS is the intelligent software in the hyperscale equation. The ECS software is architected using cloud principles, which makes it unique in the industry as a platform that delivers cloud-scale storage services, supports modern applications with global access requirements, and offers operational efficiency at scale.

REFERENCES

ECS Hardware Checklist: http://www.emc.com/techpubs/vipr/commodity_install_checklist-1.htm
ECS improves that by providing lease and heartbeat techniques which improve performance and avoid WAN hops. In order to ensure strong consistency, ECS uses background mechanisms like lease and heartbeat across sites to maintain the guarantees that an object is not modified. The optimization is suspended when the object is continuously updated, otherwise the mechanism remains in effect. Achieving strong consistency in using asynchronous replication is a key fundamental capability in ECS that relieves application developers from the worry of having to deal with the inconsistencies that is introduced by eventual consistency replication that may affect the application behavior. CONCLUSION Hyperscale means being able to massively scale a compute or storage environment in response to new demands. Hyperscale is now a necessity to support mobile, cloud, Big Data and social applications and the massive amounts of data they create and consume. Hyperscale is achieved by using standardized, off-the-shelf components that, individually, don t provide performance and reliability. However, at scale, the pool of components, together with intelligent software, provide the necessary reliability and performance. Most people associate hyperscale with large public cloud infrastructures such as Google, Facebook and Amazon Web Services. EMC has changed that and made hyperscale capabilities and economics available in any data center. Software-defined storage makes it possible to bring cloud-scale storage services to commodity platforms. ECS is the intelligent software in the hyperscale equation. The ECS software is architected using cloud principles which make it unique in the industry as a platform that delivers cloud-scale storage services, support for modern applications with global access requirements and offers operational efficiency at scale. REFERENCES ECS Hardware Checklist: http://www.emc.com/techpubs/vipr/commodity_install_checklist-1.htm 12