HyperQ Storage Tiering White Paper



An Easy Way to Deal with Data Growth

Parsec Labs, LLC.
7101 Northland Circle North, Suite 105
Brooklyn Park, MN 55428 USA
1-763-219-8811
www.parseclabs.com | info@parseclabs.com | sales@parseclabs.com

Introduction

Architecting a shared storage infrastructure is a difficult task. Application administrators and end-users always request more capacity at faster speeds. In the world of storage, three conditions always appear to be present:

1) Storage is too slow - There are tradeoffs between performance and capacity. In general, high-performance storage is expensive, and the more expensive the storage is, the less of it you can afford. A compromise is therefore needed, and you generally end up with storage that is slower than desired.

2) Storage is too full - Storage needs continue to grow by 40% per year. It is not a question of if your storage is going to fill up, but when. This situation is exacerbated by the fact that the tradeoff between performance and capacity has already resulted in purchasing less capacity in favor of higher performance.

3) Storage is too expensive - Because of the tradeoffs mentioned above, purchasing a storage solution becomes a series of difficult choices constrained by available funding. You probably end up paying more than you would like for storage that is either too slow or too small.

The storage market offers a wide variety of storage tiers and solutions. The most expensive solutions feature high-performing flash arrays over enterprise-class Fibre Channel SAN storage. More moderately priced options include iSCSI SAN storage and high-density JBOD or NAS devices. Whatever your needs are, there are plenty of choices that can provide the features or price points desired. Yet the actual storage implementation tends to be dictated by the highest performance and reliability requirement: the management overhead of maintaining multiple tiers of storage across the IT infrastructure, and of mapping end-users and applications to the correct storage tier, is too big a challenge for many companies.
As a result, company data often ends up on a storage layer designed to strike a balance: still good enough to satisfy the highest requirements, yet affordable enough to host the entire company's data. Once it is installed, the company faces the challenge of constraining data growth in order to keep that storage layer from filling up. The biggest challenge in this battle is managing unstructured user data. Users across the company are usually unaware of the cost of data storage and of the impact of their digital footprint. In many companies, employees end up storing large amounts of less relevant data on their network shares, causing the central storage layer to fill up quickly. Enforcing capacity limits on network shares is not only difficult but also impractical, as many end-users continue to save unnecessary data. As a result, many IT departments find themselves

making unplanned changes and replacing their storage infrastructure earlier than anticipated. Managing unstructured data growth by constantly adding storage not only creates additional, unplanned costs but also increases the cost of backing up that data.

Some storage vendors address the problem by offering new storage arrays with intelligent storage tiering built into the device. However, this approach renders existing storage useless and requires the data to be migrated to the new platform. These vendors also require that you purchase only their devices going forward, as their storage tiering and replication work only across their own product lines.

Parsec Labs liberates you from these challenges. The HyperQ storage router solves these problems in a new, innovative, and vendor-agnostic way. It's your data, your device, and your destination.

Figure 1: The HyperQ

The HyperQ storage router can virtualize the entire shared storage infrastructure and provide policy-based storage tiering and SSD acceleration across the board. It provides a way to easily introduce a variety of storage tiers into the environment. Through easy-to-create policies, administrators can identify less relevant data across the storage infrastructure and move it transparently from one device to another. This empowers an administrator to move data to less expensive tiers, saving both money and time. Data is moved transparently and automatically while maintaining a single namespace for the end-user. A user's data can now span multiple storage devices, across multiple vendors, and still appear to reside in the same destination directory. Tiering and migration with the HyperQ are seamless, so end-users are unaware of any underlying changes because the virtual namespace remains unchanged.

Figure 2: Virtualized Namespace with HyperQ

The Storage Architecture Today

Satisfying capacity and performance needs while maintaining a reasonable IT budget is a challenging task. Storage vendors have introduced flash disk arrays that can deliver 10X the speed and up to 100X the IOPS of traditional spinning-disk arrays. However, these storage arrays are extraordinarily expensive, provide rather small capacities, and therefore must be reserved for the most important applications. To keep complexity and management overhead at a reasonable level, many companies implement a reasonably fast and affordable shared storage tier that is used for most of their data. Enforcing effective storage tiering is nearly impossible. Many times, performance must be sacrificed for more capacity to ensure enough storage is available to host everything while staying within budget limitations.

The performance/capacity conundrum: In most cases, IT departments need to choose between performance and capacity. Given their budget limitations, companies can afford either a smaller amount of high-performing storage or a larger amount of low-performance storage. As a result, companies end up with a compromise: not nearly enough storage to store everything comfortably, and not quite enough performance to satisfy end-users and applications.

Cost: Because all data is treated the same, companies end up storing large quantities of less important data on expensive storage and maintaining multiple backup copies of it.

Although storage vendors offer very reasonable prices for high-density storage options utilizing 6TB drives, many IT departments find it challenging to adopt those technologies and introduce less expensive tiers of storage into their environment.

The Storage Architecture with the HyperQ

Parsec Labs takes a completely different and highly innovative approach to the problem. The HyperQ storage router provides a simple and robust way to virtualize all of the storage across different devices, vendors, and tiers. The HyperQ is a network device that sits between the clients accessing the storage and the actual storage device or devices. A powerful feature set virtualizes the namespace, migrates data from one device (storage tier) to another, and enhances the capabilities of the underlying storage array. This approach allows the customer to virtualize and manage storage arrays from different vendors, with different types of storage, through a single device. It also allows Parsec to maintain a single namespace across multiple tiers of storage, including the cloud. The HyperQ storage router can utilize any existing storage. It can be deployed without requiring data to be migrated from its current location, and it can move data across the storage infrastructure without exposing the client to any noticeable change. It supports physical as well as virtual clients across Windows and Linux/Unix platforms.

The HyperQ storage router can be deployed in less than 30 minutes, and the benefits of accelerated access and a virtualized namespace take effect immediately, without any impact or configuration change to the machines accessing the storage. When the HyperQ is first deployed, it appears to be a pass-through device: all the existing data is still accessible and unchanged. The first noticeable impact is faster access to the existing data. The massive network bandwidth of the HyperQ, along with its high-end CPU and memory, provides extremely fast access to its SSD (flash) cache, instantly making the storage more responsive to the clients accessing it.

In the next phase, the administrator can create policies to identify lower-priority data based on user, file type, file age, or file location. Once a policy is set, the administrator can allocate lower-cost, high-density storage and designate it as a migration target for that data. Within a few hours, the HyperQ appliance starts migrating data off the expensive SAN storage onto the less expensive, higher-density storage tier. The end-user's view of the data remains unchanged, and should the end-user access the migrated data, the HyperQ cache accelerates access to the lower-cost tier, so the end-user experiences seamless access to their data regardless of whether it is stored on the SAN or on the lower tier. Once the data has been removed from the SAN, the backup workload is reduced and capacity is freed up for more important data to reside on the SAN.

In addition, the HyperQ gives the administrator the ability to expand any namespace by simply adding a new storage device to the network and making a small configuration change on the storage router. Administrators are no longer forced to change the storage layout when they run out of empty disk bays in the array.
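Policies of the kind just described, which select data by user, file type, age, or location, can be pictured as a predicate evaluated over file metadata. The sketch below is purely illustrative: the HyperQ's actual policy engine and syntax are not disclosed in this paper, and every name here is hypothetical.

```python
import os
import time
from dataclasses import dataclass


@dataclass
class MigrationPolicy:
    """Hypothetical tiering policy: match files by age, extension, and size."""
    min_age_days: float = 180                    # untouched for ~6 months
    extensions: tuple = (".iso", ".zip", ".bak")  # low-priority file types
    min_size_bytes: int = 10 * 1024 * 1024        # only bother with files >= 10 MB

    def matches(self, path: str, atime: float, size: int) -> bool:
        age_days = (time.time() - atime) / 86400
        _, ext = os.path.splitext(path)
        return (age_days >= self.min_age_days
                and ext.lower() in self.extensions
                and size >= self.min_size_bytes)


def select_candidates(files, policy):
    """files: iterable of (path, atime, size) tuples; returns paths to migrate."""
    return [path for path, atime, size in files if policy.matches(path, atime, size)]
```

A periodic scan would feed file metadata through `select_candidates` and hand the matches to the migration engine; anything not matching stays on the fast tier.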
Administrators are no longer forced to stay with the same vendor in order to take advantage of a specific storage vendor's scale-out capability. And, most importantly, the Parsec Labs service includes access to cloud storage that can be added to the on-premise storage in a transparent way, eliminating the need to educate the end-user about the x:\ drive that is the cloud resource versus the y:\ drive that is the network share. With the HyperQ storage router, the client's namespace can span multiple on-premise devices as well as the cloud.

Key Features:

1. Acceleration - Acceleration reduces latency and increases burst throughput and IOPS to existing storage by means of the SSD write cache in the HyperQ. Acceleration is especially beneficial for database and virtual machine performance.

2. Expansion - Expansion allows a managed file system to overflow to newly added storage without service interruption or complicated configuration/migration steps. Even as a file system overflows to the configured target storage, all files appear to fully reside in the original file system; the managed file system simply appears to be larger.

3. Migration - Migration allows infrequently accessed data to migrate automatically, under the control of user-defined migration policies, to lower-cost, lower-performance storage, increasing the capacity available for more frequently accessed data. As an added benefit, data is automatically retrieved to higher-performance media when accessed, providing the best of both worlds. This enables optimal use of storage media according to cost. For example, inactive data can be automatically migrated to slower, larger, lower-cost media, while active or warm data is stored on the high-performing SSD or SAN on the local network. Irrespective of where a file physically resides, it always appears to remain in the managed file system. Migration is fully transparent to applications that access migrated files.

4. Hybrid Cloud - Never run out of storage again! The cloud back-end leverages the HyperQ migration feature, enabling a virtually unlimited amount of data to be stored in the cloud. As with all HyperQ migration, a file migrated to the cloud appears to remain in its original location, and migration to and retrieval from the cloud (if the file is subsequently read) is transparent to applications, aside from an increase in latency.

Solving the performance/capacity conundrum: By using the HyperQ storage router, companies enjoy an immediate performance increase as well as increased managed capacity without having to replace their entire storage infrastructure.
The HyperQ acceleration can help speed up existing storage and manage the utilization of higher-performing disk-based storage in the environment. The HyperQ makes it easy to expand existing storage networks with inexpensive, high-capacity NAS devices, making it more affordable to retain more data.

Dramatically reducing cost: Adding storage to existing infrastructure can cost between $3 and $10 per GB, while high-density NAS storage can cost as little as 20 cents per GB. Managing the complexity of multiple storage tiers and accelerating the performance of low-end storage can be accomplished by simply signing up for the Parsec Labs subscription, which starts as low as $500 a month. There is no additional cost for labor, and all maintenance and support are included.
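To make the arithmetic concrete, here is a back-of-the-envelope sketch using the per-GB figures quoted above. The 70% cold-data share in the example is an assumption for illustration, not a figure from this paper.

```python
# Per-GB figures quoted in the text above.
SAN_COST_PER_GB = 3.00   # low end of the $3-$10/GB range for SAN expansion
NAS_COST_PER_GB = 0.20   # high-density NAS


def expansion_savings(extra_gb: float, cold_fraction: float) -> float:
    """Cost saved by placing the cold fraction of new capacity on NAS instead of SAN."""
    cold_gb = extra_gb * cold_fraction
    return cold_gb * (SAN_COST_PER_GB - NAS_COST_PER_GB)


# Example: adding 10 TB of capacity where an assumed 70% of the data is rarely accessed.
savings = expansion_savings(10_000, 0.70)  # 7,000 GB * $2.80/GB = $19,600
```

Even at the low end of the SAN price range, tiering the cold majority of new capacity saves an order of magnitude more per month than the quoted $500 subscription floor.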

A Closer Look - The HyperQ Architecture

Hardware: The HyperQ is delivered as a standard 1U rack-mounted server chassis or as a desktop server. The server is built with cutting-edge processing capability using the latest CPUs from Intel, coupled with the strategic application of award-winning solid state drives (SSDs), to deliver significant performance advantages. The HyperQ also features multiple 1GigE or 10GigE network interfaces to provide optimal network throughput. The HyperQ is inserted into your storage environment between the shared storage and the clients. It also acts as a router and cloud gateway.

Figure 3: The HyperQ in Your Environment

Software: The HyperQ consists of the following software components:

1. A Linux operating system.
2. The Parsec File System, which manages file system expansion and data migration between storage tiers at the sub-file level.
3. The Parsec Cache Manager, which exploits SSD and HDD storage in the HyperQ for storage acceleration.

4. The Parsec Engine, which selects files for migration according to migration policies. Once a file is selected for migration, the migration engine interacts with the Parsec File System to relocate the file. A migration policy is typically executed periodically, but can also be executed as a one-time event.
5. Diagnostic utilities for the Parsec File System.
6. A web GUI for HyperQ configuration.
7. Diagnostic tools, including reporting and telemetry.

Figure 4: Inside the HyperQ

How to deploy, administer, and monitor the HyperQ: The storage administrator's job is done in four easy steps:

1. Configure a new IP address for the existing file server from which managed file systems are exported.
2. Define target storage for migration and/or expansion.
3. Define managed file systems.
4. Configure migration policies.

A single dashboard depicting local and cloud storage usage allows ongoing monitoring of the HyperQ and file servers. It is simple and efficient, yet powerful and effective.

Key Advantages

Virtualize and Accelerate Existing Storage - Virtualize your data and remove the boundaries between islands of individual storage devices. Use the HyperQ and your existing storage will perform better than you thought possible.

Simple Deployment and Management - The HyperQ seamlessly integrates with the existing network; no changes are required on the existing servers that access the storage. After the HyperQ is deployed, its management GUI allows the admin to manage the entire storage infrastructure through a single pane of glass.

Cloud Gateway - Utilize cloud storage as needed. Never run out of storage again.

WAN Optimization - The HyperQ has special provisions to optimize WAN traffic between sites and to the cloud.

Scalability - Scale to your needs in cost and storage requirements. Choose between cost-effective pricing tiers.

Conclusion

The HyperQ offers a new solution to the performance/capacity conundrum, giving you both high performance and space savings at a reduced cost. By shifting the focus from storage provisioning to intelligent, automated data management, the HyperQ will change the way data centers operate. Parsec Labs addresses the storage user's real need: to store and retrieve data at the lowest cost while making optimal use of existing storage capacity. The storage administrator will find the HyperQ an indispensable aid in managing the ever-growing volume of data.

Contact Us

Parsec Labs, LLC.
7101 Northland Circle North, Suite 105
Brooklyn Park, MN 55428 USA
1-763-219-8811
www.parseclabs.com | info@parseclabs.com | sales@parseclabs.com