
Technology Insight Paper

IT Agility, Consolidated Storage and the Power of Software

By Eric Slack, Sr. Analyst
September 2016

Introduction

Agility is a popular concept in IT, one that's associated with time, specifically with minimizing it. This could be the time it takes to get a new application up and running, or the time needed for a software revision to move from development into production. Agility is frequently associated with cloud infrastructures or DevOps, but it's really a fundamental requirement of all IT environments. And while agile IT is often part of a discussion about hyperconverged infrastructures or hybrid clouds, it's also about core infrastructure components, including data storage systems.

In this report we will examine the role a modern enterprise storage system can play in IT agility by leveraging the power of software and metadata to reduce data copies, improve workflow efficiency and strengthen data protection. We'll also look at how combining data sets on a distributed storage system can greatly expand the use of snapshots and other metadata-based processes. Finally, we will explore the challenges of consolidating application workloads, and the advanced functionality available in scale-out, multi-protocol, software-defined storage systems to address those challenges.

There's an old saying that all problems in an IT infrastructure eventually show up as storage problems. This is a testament to how often storage is the bottleneck in IT processes and production workflows. Provisioning capacity, creating copies, managing performance and optimizing the various processes needed to assure protection and availability of data across disparate systems can be very complex. A storage system that can streamline and automate this copy activity and efficiently support these workflows can have a big impact on the agility equation. A good example is found in test and development environments.

The Dev/Test Lifecycle

Development, test and production are fundamental steps in releasing new software applications and updates. They're typically run by different teams, often using different compute and storage infrastructures, generating multiple copies of data along the way. From a workflow perspective, the development group takes the current revision of the software, modifies it to incorporate changes, bug fixes and new features, and then sends this new version to the test group. If development and test are supported by separate storage systems, that data set must be copied between arrays. After test, the final version goes into production, a process that often involves yet another copy job, since production applications are routinely run on separate storage systems as well.

Potential Problems

As mentioned, each of these groups usually generates multiple copies of the software under development, both to facilitate a workflow involving many people and to provide the data protection needed as data sets are transferred between departments. These test/dev copies consume additional capacity on the primary array, and they can also require migrating data between disparate storage systems. Physically copying data between storage systems is inefficient from both a CapEx and an OpEx perspective, and it is the antithesis of an agile environment.

One way to address this problem is to speed up and automate the storage management processes involved. Better still, eliminating the physical copies altogether eliminates the lengthy data migration process as well, providing immediate access to the data. Supporting the test/dev steps described above with a modern, software-defined storage system lets agile development teams manage these storage operations in a self-service manner, without requiring IT intervention.

The Power of Metadata

Metadata is defined as data about data or, more specifically, the information needed to access and manage the files, blocks and objects stored. A common type of metadata is the snapshot, a set of references or pointers to the physical bytes on disk or SSD that comprise a file, volume or object (we'll use a file for this discussion). The power of metadata lies in the fact that creating pointers is much faster and more efficient than creating physical copies, allowing a file to remain in production while its state is captured. A snapshot can also be modified, just like a physical copy, without impacting the original file, with the storage system needing to save only the changes.

Data Protection

Snapshots do facilitate the making of copies when a copy is needed, but the system doesn't pay the cost in storage capacity or processing overhead until that copy is actually created. This is why snapshots are so valuable for data protection: they essentially allow you to take as many backups as desired, as often as needed, without consuming an inordinate amount of resources. But the power of metadata can provide another level of data protection as well.
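To make the pointer concept concrete, here is a minimal sketch, in Python, of a copy-on-write snapshot. The "copy" is just a duplicate of the volume's small pointer table; writes after the snapshot allocate new physical blocks, so the snapshot's pointers keep referencing the original data. The classes and names are illustrative assumptions, not any particular vendor's implementation.

```python
# Minimal copy-on-write snapshot sketch: a volume is a mapping from
# logical block number to a physical block; a snapshot copies only
# the (small) pointer table, never the data itself.

class BlockStore:
    """Append-only pool of physical blocks, shared by volumes and snapshots."""
    def __init__(self):
        self.blocks = []

    def put(self, data: bytes) -> int:
        self.blocks.append(data)
        return len(self.blocks) - 1        # physical block address

class Volume:
    def __init__(self, store: BlockStore):
        self.store = store
        self.table = {}                    # logical block -> physical block

    def write(self, lbn: int, data: bytes):
        # Redirect-on-write: new data goes to a new physical block,
        # so any snapshot pointing at the old block is untouched.
        self.table[lbn] = self.store.put(data)

    def read(self, lbn: int) -> bytes:
        return self.store.blocks[self.table[lbn]]

    def snapshot(self) -> "Volume":
        # The "copy" costs metadata only, regardless of data size.
        snap = Volume(self.store)
        snap.table = dict(self.table)
        return snap

store = BlockStore()
vol = Volume(store)
vol.write(0, b"v1 of the file")
snap = vol.snapshot()                      # instant; no data copied
vol.write(0, b"v2 of the file")            # only the change is stored
assert snap.read(0) == b"v1 of the file"   # snapshot state preserved
assert vol.read(0) == b"v2 of the file"
```

Note that the snapshot in this sketch is itself writable: a test/dev team could modify snap directly, and only the changed blocks would consume new capacity.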

Continuous Data Protection

Continuous data protection (CDP) is a process originally developed for purpose-built backup appliances but now available on some modern storage systems. It reduces the recovery point objective (RPO) to essentially zero and enables rapid recovery from issues such as data corruption. CDP maintains a journal that logs all I/O transactions as they occur on the primary volume, then creates virtual snapshots at specified increments of time. In this way, the state of each data volume can be recovered to any point in time by restoring the closest snapshot and then applying the I/O transactions from the journal.

Backup can be a drag on IT agility, because most IT organizations treat backup as the final step in the workflow process. CDP decouples backup from the production workflow, capturing small increments of data continuously instead of running a batch backup process that takes much longer.
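The recovery logic just described (restore the closest snapshot, then replay the journal up to the desired moment) can be sketched in a few lines. This is a simplified, in-memory model under assumed names such as CDPVolume; a real CDP engine journals at the block layer and snapshots metadata, as discussed above.

```python
import bisect

# Simplified CDP sketch: every write is journaled with a timestamp,
# snapshots capture volume state at intervals, and recovery to time t
# restores the closest earlier snapshot and replays the journal.

class CDPVolume:
    def __init__(self):
        self.state = {}        # logical block -> data (stand-in for a volume)
        self.journal = []      # [(timestamp, block, data)], time-ordered
        self.snapshots = []    # [(timestamp, copy of the block table)]

    def write(self, t: float, block: int, data: bytes):
        self.journal.append((t, block, data))   # logged as it occurs
        self.state[block] = data

    def take_snapshot(self, t: float):
        # In a real system this is a metadata copy, as in the earlier sketch.
        self.snapshots.append((t, dict(self.state)))

    def recover(self, t: float) -> dict:
        """Reconstruct the volume state as of time t (RPO of essentially zero)."""
        times = [ts for ts, _ in self.snapshots]
        i = bisect.bisect_right(times, t) - 1   # closest snapshot at or before t
        base = dict(self.snapshots[i][1]) if i >= 0 else {}
        base_t = self.snapshots[i][0] if i >= 0 else float("-inf")
        for ts, block, data in self.journal:    # replay writes after the snapshot
            if base_t < ts <= t:
                base[block] = data
        return base

vol = CDPVolume()
vol.write(1.0, 0, b"good data")
vol.take_snapshot(2.0)
vol.write(3.0, 0, b"corrupted!")    # e.g. corruption hits at t = 3.0
print(vol.recover(2.5))             # {0: b'good data'}: the state just before it
```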

BC / DR

While they are certainly useful, metadata copies can't do everything. They don't ensure the availability of critical data sets when there is a system failure, and they don't protect against natural disasters or human error. To ensure business continuity (BC), physical copies must be made and isolated from the primary data set. When a copy is needed for local backup or remote disaster recovery (DR), a clone is created by replicating the data associated with a given snapshot to another volume or another storage system. This process runs in the background so as not to disrupt applications accessing the primary data set. Clones do consume resources, but they can be very granular, created only for specific files or data volumes and at specific times.

The Challenge of Consolidating Storage

While modern, software-defined storage systems can combine multiple data sets on the same storage system to help create an agile infrastructure, administrators are very cautious about impacting application performance. Consolidating more data from more hosts improves efficiency and utilization, but performance SLAs must be maintained, especially for production applications.

Historically, SAN-based arrays were set up with separate LUNs for each host they supported. While the hosts shared the same system, each had its own physical storage capacity, isolated from all the others, which is very inefficient. Enterprise storage systems now often combine all data into the same physical volumes, applying single instancing, data deduplication and other efficiency technologies. In modern environments, application servers are usually virtualized, increasing the number of applications sharing a LUN or volume and, more importantly, increasing the randomness of the workloads a storage system must support.

This can cause another problem: the noisy neighbor. When two hosts or workloads share the same storage resources, it's possible for one to take more than was intended, consuming resources needed by another host. This typically results in a performance loss, or performance inconsistency, as multiple applications call for data when there isn't enough processing power to service them all. For tier-one production applications this can be a serious problem.

Quality of Service

The way modern enterprise storage systems handle this performance challenge is with quality of service (QoS) prioritization features. QoS can eliminate the conflicts caused by combining different data types and different applications in the same storage space. At a basic level, it guarantees that the most critical applications get more resources. But rather than just setting I/O transaction or IOPS levels for different workloads or volumes, modern QoS is more sophisticated. These systems set minimum and maximum thresholds and assign a relative priority to each volume. This lets the system discriminate between volumes when resources become constrained, moving resources from less critical volumes to those with a higher priority while staying within the min-max ranges already established. Instead of imposing a static limit on all volumes, this kind of intelligent QoS does what an IT administrator would do to fix the problem, but dynamically and without admin intervention. Some systems allow these parameters to be adjusted on the fly, enabling administrators to tune the system for temporary performance requirements, such as end-of-period financial activity or high-transaction events like holiday shopping.
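As an illustration of how minimums, maximums and relative priorities might interact, the following sketch divides a fixed pool of IOPS among volumes: each volume first receives its guaranteed minimum, then the surplus is shared in proportion to priority without letting any volume exceed its maximum. The allocator, names and numbers are hypothetical, not a description of any specific product's QoS engine.

```python
from dataclasses import dataclass

# Illustrative QoS allocator: every volume is guaranteed its minimum IOPS,
# leftover capacity is shared in proportion to priority, and no volume
# ever exceeds its maximum.

@dataclass
class VolumeQoS:
    name: str
    min_iops: int     # guaranteed floor
    max_iops: int     # hard ceiling
    priority: int     # relative weight when resources are constrained

def allocate(volumes: list[VolumeQoS], total_iops: int) -> dict[str, int]:
    # Step 1: honor every minimum guarantee.
    grants = {v.name: v.min_iops for v in volumes}
    remaining = total_iops - sum(grants.values())
    # Step 2: hand out the surplus by priority, respecting each maximum.
    # Iterate, because capping one volume frees capacity for the others.
    while remaining > 0:
        open_vols = [v for v in volumes if grants[v.name] < v.max_iops]
        if not open_vols:
            break
        weight = sum(v.priority for v in open_vols)
        gave = 0
        for v in open_vols:
            share = min(remaining * v.priority // weight,
                        v.max_iops - grants[v.name])
            grants[v.name] += share
            gave += share
        if gave == 0:          # shares rounded down to zero; stop
            break
        remaining -= gave
    return grants

vols = [VolumeQoS("prod-db",  min_iops=20000, max_iops=60000, priority=4),
        VolumeQoS("test-dev", min_iops=2000,  max_iops=30000, priority=1)]
print(allocate(vols, total_iops=50000))
# {'prod-db': 42400, 'test-dev': 7600}: under contention the production
# database gets most of the surplus, but test-dev still receives its floor.
```

The point of the min-max ranges is visible in the output: even the lowest-priority volume never starves, and even the highest-priority volume cannot monopolize the system.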

Multi-Protocol Storage

Where storage consolidation originally meant moving block-based data sets onto a shared array, modern IT environments now include file-based data and even object storage as well. Consolidating these data sets requires a multi-protocol, or unified, storage system. These arrays were originally created by adding protocol translation software to an existing block- or file-based storage system, but that architecture is less efficient and not as scalable as enterprises need. A better way to handle multiple protocols is to parse and store data in protocol-agnostic chunks, then maintain an index that identifies which chunks are associated with each object, block or file stored. This design supports single instancing of data chunks regardless of the protocol ultimately used to present the data, reducing the storage capacity consumed when copies of data are made.

Scale-Out, Software-Defined Storage

This design also supports a scale-out, clustered architecture that distributes data chunks across multiple nodes. It allows performance to scale with capacity, since each server node contributes processing power as well as storage devices (disk and/or flash drives), something traditional scale-up storage arrays couldn't do. These systems can also be designed as software-defined storage, leveraging the economics of industry-standard server hardware and commodity storage devices to keep costs down as they scale.
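A rough sketch of the protocol-agnostic chunk approach follows: incoming data, whatever protocol delivered it, is split into chunks, stored once keyed by content hash, and tracked by a per-object index; a hash on the chunk ID suggests how placement across scale-out nodes could work. The chunk size, node names and placement function are illustrative assumptions, not a specific product's design.

```python
import hashlib

# Sketch of a protocol-agnostic chunk store: data is split into fixed-size
# chunks, stored once (single-instanced by content hash), indexed per
# object, and placed on cluster nodes by hashing the chunk ID.

CHUNK_SIZE = 4096
NODES = ["node-a", "node-b", "node-c"]   # scale-out: each node adds
                                         # capacity *and* processing power

chunks = {}      # content hash -> chunk bytes (stored once)
index = {}       # object name  -> ordered list of chunk hashes

def node_for(chunk_id: str) -> str:
    # Simple placement by hash; real systems typically use consistent
    # hashing so that adding a node moves only a fraction of the chunks.
    return NODES[int(chunk_id, 16) % len(NODES)]

def put(name: str, data: bytes):
    ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        cid = hashlib.sha256(chunk).hexdigest()
        chunks.setdefault(cid, chunk)    # identical chunks stored only once
        ids.append(cid)
    index[name] = ids

def get(name: str) -> bytes:
    return b"".join(chunks[cid] for cid in index[name])

# The same data written via a file path and via an object key shares
# every chunk; the "copy" costs only index entries.
put("file:/projects/build.img", b"x" * 10000)
put("object:builds/build.img", b"x" * 10000)
assert get("object:builds/build.img") == get("file:/projects/build.img")
print(len(chunks), "unique chunks stored")       # 3, not 6
for cid in index["file:/projects/build.img"]:
    print(cid[:8], "->", node_for(cid))          # chunks spread across nodes
```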

Summary

Agility is becoming a priority for IT organizations as companies strive to keep up with the real-time pace of business and stay ahead of competitors. Infrastructure helps determine how agile an environment is, and storage systems are a big part of the infrastructure. A well-designed storage system can increase IT agility by using the power of metadata to reduce time-consuming inefficiency, often by eliminating the multiple copies of production data sets associated with complex workflow processes and with everyday tasks like data protection and recovery.

Leveraging metadata can require combining multiple data sets, often consisting of different data types and platforms, on the same storage system. But this consolidation can create problems as different hosts compete for resources. One answer is to deploy a storage system with intelligent quality-of-service functions that can prioritize workloads and dynamically allocate resources to maintain consistent performance. These consolidated enterprise storage systems also need to scale capacity and performance, and to do so cost-effectively. A scale-out design based on software-defined storage technology allows the use of lower-cost server and storage hardware while providing the flexibility of a modular architecture.

About Formation Data Systems

FormationOne is a modern, enterprise, software-defined storage system that provides storage and data virtualization to support file, block and object data types. Using a loosely coupled, distributed architecture as described in this report, FormationOne can consolidate multiple discrete application workloads for multiple use cases across many industry verticals. FormationOne enables self-service through simplified provisioning, orchestration and REST APIs, and supports advanced metadata-based data services that help provide an agile IT environment. The system offers a dynamic QoS feature that supports multi-tenancy, assuring that workloads from different hosts won't impact each other and delivering consistent performance at scale. The FormationOne Timeline feature uses snapshots and I/O journaling to enable continuous data protection, providing lower RPO and RTO by eliminating the delays and overhead of traditional backup processes.

This paper is sponsored by Formation Data Systems.

About Evaluator Group

Evaluator Group Inc. is a technology research and advisory company covering information management, storage and systems. Executives and IT managers use us daily to make informed decisions to architect and purchase systems supporting their digital data. We get beyond the technology landscape by defining requirements and knowing the products in depth, along with the intricacies that dictate long-term successful strategies.

www.evaluatorgroup.com @evaluator_group

Copyright 2016 Evaluator Group, Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written consent of Evaluator Group Inc. The information contained in this document is subject to change without notice. Evaluator Group assumes no responsibility for errors or omissions. Evaluator Group makes no expressed or implied warranties in this document relating to the use or operation of the products described herein. In no event shall Evaluator Group be liable for any indirect, special, consequential or incidental damages arising out of or associated with any aspect of this publication, even if advised of the possibility of such damages. The Evaluator Series is a trademark of Evaluator Group, Inc. All other trademarks are the property of their respective companies.