HyperQ Remote Office White Paper




Parsec Labs, LLC.
7101 Northland Circle North, Suite 105
Brooklyn Park, MN 55428 USA
1-763-219-8811
www.parseclabs.com | info@parseclabs.com | sales@parseclabs.com

Introduction

One of the biggest IT challenges that companies face is how to share data efficiently and effectively across the organization. The problem is even more complicated when employees and data are distributed across several office locations. In most cases the decision is made to provide a centrally hosted file share as a means of sharing data across different sites. Some organizations go beyond a regular file share and use public cloud services such as OneDrive, Google Drive, and Dropbox. All of these solutions enable some level of sharing and provide a level of protection, because the data no longer resides on a remote laptop, yet each comes with major downsides and compromises.

The centrally hosted file share is by far the most flexible and secure way to solve the problem. However, with traditional technologies, network latency and bandwidth limitations result in a less than optimal experience. The end user is fully exposed to slower access through the WAN, waiting a very long time for large files to save or to open. Given the major performance issues involved in going across the WAN, many users decide to ignore the corporate mandate to store data on that share and simply keep it local on their PC. In other cases, corporations spend large amounts of money trying to improve the experience by deploying very costly WAN acceleration appliances.

The public cloud service approach yields a completely different set of problems. The major benefit is that most services can be accessed from the public internet without requiring a VPN connection. However, that very benefit is also one of the major downsides. Public cloud services are generally much less secure than the corporate infrastructure. In most cases, services can be accessed through single-factor authentication with no intrusion detection or protection. Most corporate networks, on the other hand, feature two-factor authentication plus ongoing intrusion detection and logging, making them much harder to exploit. Another problem with such cloud services is that using them can violate corporate policies or laws such as HIPAA or PCI compliance. And last but not least, none of those public cloud services support corporate authentication services such as Active Directory. The lack of that integration makes it very hard to revoke access when an employee leaves the corporation. In a corporate environment, the user account is disabled in Active Directory and all access is immediately revoked. In the case of a public cloud service, access is based on credentials maintained outside the corporate framework, making it very difficult to manage.

In addition to the issues raised above, most solutions fail to address one of the biggest problems: bandwidth is expensive. None of the solutions mentioned above provide a centralized cache in the remote office. The lack of a centralized cache at the edge causes data to travel to the remote office multiple times (at least once per end user, since the cache for all of those solutions resides on the endpoint, a laptop or PC). Supporting such high data transfer rates to and from the remote office causes many organizations to upgrade their bandwidth at very high cost to the corporation.

The innovative HyperQ storage router can help eliminate these problems. The HyperQ storage router is a network appliance with a large local SSD and spinning disk cache, which can be used to maintain a centrally cached local copy of the relevant data at the remote site. The HyperQ is equipped with a smart cache, which can be configured to keep specific data in the cache at all times. This makes the data available to remote employees when they need it. The HyperQ cache is also engaged when data is being written to the central repository. As a result, data relevant to the remote site is always available at LAN speeds, and all data saved at the remote office is committed to the cache at SSD speeds, reducing the wait time from minutes to seconds.

The HyperQ appliance can be deployed in different sizes and form factors. The smallest appliance provides a 256GB SSD cache and a 1.5TB spinning disk cache. The appliance comes in a rack-mounted option for locations with server racks or as a desktop appliance for offices without a server rack.

Figure 1: The HyperQ

Introducing the HyperQ to a remote site provides instant acceleration. Users in the remote office are able to access data at the speed of a flash disk array rather than the speed of a slow WAN connection. The HyperQ allows for asynchronous data transfer between remote sites and a central data location on a scheduled basis. This makes the data available to users when they need it, rather than relying on a user to initiate a request and then wait several minutes or hours for it to complete.
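To make the shared-cache idea concrete, the following Python sketch illustrates the general behavior of a read-through edge cache with pinning: the first request for a file crosses the WAN once, every later request is served locally, and pinned data is never evicted. This is an illustration only, not the HyperQ's internals; the class name, the fetch_from_central helper, and the LRU eviction policy are all assumptions.

```python
import os

class EdgeCache:
    """Minimal sketch of a shared read-through edge cache with pinning.
    Hypothetical illustration; names and eviction policy are assumptions."""

    def __init__(self, cache_dir, capacity_bytes):
        self.cache_dir = cache_dir
        self.capacity_bytes = capacity_bytes
        self.pinned = set()    # paths that must stay cached at all times
        self.lru = []          # least recently used order, oldest first

    def pin(self, path):
        """Mark a path so it is kept in the cache at all times."""
        self.pinned.add(path)

    def read(self, path, fetch_from_central):
        """Serve a file locally, fetching it over the WAN at most once."""
        local = os.path.join(self.cache_dir, path.lstrip("/"))
        if not os.path.exists(local):                 # cache miss: one WAN transfer
            os.makedirs(os.path.dirname(local), exist_ok=True)
            fetch_from_central(path, local)           # hypothetical WAN fetch helper
            self._evict_if_needed()
        if path in self.lru:                          # cache hit bookkeeping
            self.lru.remove(path)
        self.lru.append(path)
        return open(local, "rb")

    def _evict_if_needed(self):
        """Drop least recently used, unpinned files until under capacity."""
        while self._used_bytes() > self.capacity_bytes and self.lru:
            victim = next((p for p in self.lru if p not in self.pinned), None)
            if victim is None:
                break                                 # everything left is pinned
            self.lru.remove(victim)
            os.remove(os.path.join(self.cache_dir, victim.lstrip("/")))

    def _used_bytes(self):
        total = 0
        for root, _, files in os.walk(self.cache_dir):
            total += sum(os.path.getsize(os.path.join(root, f)) for f in files)
        return total
```

Because every user at the site reads through the same cache instance, a file requested by ten users is transferred across the WAN once rather than ten times.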

The Remote Office World Today

Most remote offices struggle with the management of unstructured data. In some cases, the data is stored locally on NAS or SAN devices, which causes backup and maintenance problems for IT teams. In other cases, remote locations are forced to access central storage pools. This approach generally causes performance problems such as long wait times for writes to commit, slow downloads on read access, and major bandwidth issues if a certain document is accessed by multiple users at a remote office. Because performance is slow, some users at remote sites refuse to save data back to the network share, creating a potential security problem.

The performance/bandwidth conundrum: Bandwidth and latency make data transfer from a central site to a remote office a costly and difficult challenge. Many companies attack the problem by upgrading to faster services to increase bandwidth and reduce latency, or by deploying WAN acceleration technologies. However, the biggest problem is the lack of a centralized data cache at the remote site and of an intelligent, policy-based automatic prefetch of data to that site. These needs are usually overlooked and remain unaddressed. Typically, data is fetched via the WAN, on demand, as a user requests it. If the same data is requested by multiple users, it is pulled across the WAN multiple times, consuming a large amount of the available bandwidth. Slow performance and competing requests over shared bandwidth not only cause major frustration but also major productivity losses. The HyperQ solves this with a simple, easy-to-use platform.

Cost: In many cases, maintaining the remote office infrastructure can be very expensive. WAN acceleration solutions are costly and require expensive appliances on both sides of the wire. Upgrading WAN bandwidth in remote locations can be extraordinarily expensive and, in many cases, does not fully address the problem.

The Remote Office World with the HyperQ

The HyperQ storage router has innovative and unique capabilities that address this problem in a very cost-efficient way. The HyperQ storage router acts as a local cache at each remote site. The HyperQ is simply installed as a network appliance that sits in the data path between the central storage source and the end users at the remote office. End users therefore transparently interact with the local cache rather than the WAN-attached data store. Every read and write a user performs is seamlessly serviced out of the SSD cache, providing unmatched data access performance.

New data from the remote site is saved back to the central data store by the storage router asynchronously. This means that users can save a file within seconds and move on with their day, while the Parsec HyperQ picks up the slack and spends the time copying data via the WAN. Additionally, data that appears at the central storage repository, because a different remote site just saved data there or the main office published new data, is preloaded onto the local cache automatically, without user interaction.

The installation of a HyperQ storage router greatly reduces overall WAN traffic because the data is transferred only once across the WAN. Once the data is cached at the remote site, all local users share this copy, avoiding multiple downloads of the same file.
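The asynchronous save-back pattern can be sketched in a few lines: a write is acknowledged as soon as it lands on the local SSD cache, and a background worker drains a queue of dirty files to the central store over the WAN. This is a minimal sketch of the general write-back technique, not the HyperQ's actual code; the class name and the replicate_to_central callback are hypothetical.

```python
import queue
import threading

class WriteBackCache:
    """Sketch of asynchronous write-back: acknowledge writes at SSD speed,
    replicate to the central store in the background. Hypothetical names."""

    def __init__(self, replicate_to_central):
        self.dirty = queue.Queue()             # files waiting to cross the WAN
        self.replicate = replicate_to_central  # hypothetical WAN copy function
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, path, data):
        """Commit locally and return immediately; the user is not kept waiting."""
        with open(path, "wb") as f:            # fast local SSD commit
            f.write(data)
        self.dirty.put(path)                   # schedule background replication

    def _drain(self):
        """Background worker: push dirty files to the central store one by one."""
        while True:
            path = self.dirty.get()
            self.replicate(path)               # slow WAN transfer, off the user's path
            self.dirty.task_done()
```

The key design point is that the slow WAN transfer is removed from the user's critical path: the save completes at local SSD speed, and the WAN copy happens afterward without anyone waiting on it.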

Using the HyperQ in remote offices allows remote users to access data as efficiently as employees at the main site. This improvement is made possible by these key features:

1. Policy Driven Read Cache: The HyperQ appliance has a built-in policy engine that monitors the central data store for new, relevant data. Once new data appears, the policy automatically seeds the local cache on a scheduled basis. This reduces, and in many cases eliminates, the wait for downloads. The data is already there by the time a user wants to access it.

2. Write Cache: Any data written to the central storage pool is first committed to the local SSD cache. Saving a 100 MB file takes seconds rather than minutes. The HyperQ appliance then commits the data via the WAN to the central storage device without impacting the user.

3. Shared Cache: Because the HyperQ appliance sits in the network at the remote site, the cache is shared across all users at that site. This approach avoids the need to download multiple copies of the same file, which would immediately saturate the available bandwidth.

4. Network Throttling: The HyperQ appliance has a sophisticated network throttling solution built in. The administrator can define off-peak and on-peak time windows and set bandwidth limits for each period. This provides full control over how much bandwidth is dedicated to data replication between sites and how much is reserved for other services such as VoIP. (A sketch of this behavior follows at the end of this section.)

Solving the performance/bandwidth conundrum: The best way to deal with slow performance and limited bandwidth is to hide the performance issue from the user and reduce the amount of data copied across the WAN. The HyperQ's ability to quickly provide data to the user and quickly commit a write creates a great user experience. Because the data transfer is done in the background, the user does not experience annoying delays. The centralized shared cache at the remote site greatly reduces the amount of data that travels across the WAN, leaving more room for other services, such as web-based applications like Salesforce or internet services such as VoIP.
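As promised above, here is a minimal sketch of peak/off-peak replication throttling. It is an illustration of the general technique, not the HyperQ's real configuration surface: the window boundaries, bandwidth limits, and function names are all invented for the example.

```python
import time
from datetime import datetime

# Hypothetical administrator-defined schedule: on-peak hours get a small
# replication budget so VoIP and web traffic keep headroom; off-peak hours
# let replication use most of the link. Values are invented for illustration.
PEAK_HOURS = range(8, 18)               # 08:00-17:59 local time
PEAK_LIMIT_BPS = 2 * 1024 * 1024        # 2 MB/s for replication during the day
OFFPEAK_LIMIT_BPS = 20 * 1024 * 1024    # 20 MB/s overnight

def current_limit_bps(now=None):
    """Return the replication bandwidth cap for the current time window."""
    hour = (now or datetime.now()).hour
    return PEAK_LIMIT_BPS if hour in PEAK_HOURS else OFFPEAK_LIMIT_BPS

def throttled_send(chunks, send):
    """Send chunks over the WAN, sleeping as needed to honor the cap."""
    for chunk in chunks:
        start = time.monotonic()
        send(chunk)
        # Pace the stream: a chunk of len(chunk) bytes should take at least
        # len(chunk) / limit seconds at the current bandwidth cap.
        min_duration = len(chunk) / current_limit_bps()
        elapsed = time.monotonic() - start
        if elapsed < min_duration:
            time.sleep(min_duration - elapsed)
```

Looking up the limit per chunk means the replication stream automatically slows down when the office day begins and speeds back up after hours, without interrupting an in-flight transfer.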

Dramatically reducing cost: All of this functionality is available at a fraction of the cost of WAN acceleration or high-speed WAN services. A subscription to Parsec starts as low as $500 a month. There is no additional cost for labor, and all support and maintenance is included.

A Closer Look: The HyperQ Architecture

Hardware: The HyperQ is delivered in a standard 1U rack-mounted server chassis or as a desktop server. The server is built with cutting-edge processing capability, using the latest CPUs from Intel coupled with the strategic application of award-winning solid state drives (SSDs), to deliver significant performance advantages. The HyperQ also features multiple 1GigE or 10GigE network interfaces to provide optimal network throughput. The HyperQ is inserted into your storage environment between shared storage and the clients. It acts as a router and an optional cloud gateway.

Figure 2: Depiction of the HyperQ in Your Environment

Software: The HyperQ consists of the following software components:

1. A Linux operating system.
2. The Parsec File System, which manages file system expansion and data migration between storage tiers at the sub-file level.
3. The Parsec Cache Manager, which exploits the SSD and HDD storage in the HyperQ for storage acceleration. The cache manager also includes a policy-based prefetch solution that seeds the cache automatically to ensure data is available when needed.
4. The Parsec Migration Engine, which selects files for migration according to a migration policy. Once a file is selected, the migration engine interacts with the Parsec File System to relocate it. A migration policy is typically executed periodically, and can also be executed as a one-time event. (A sketch of this pattern follows at the end of this section.)
5. Diagnostic utilities for the Parsec File System.
6. A web GUI for HyperQ configuration.
7. Diagnostic tools, including reporting and telemetry.

Figure 3: Inside the HyperQ

How to deploy, administer, and monitor the HyperQ: After performing a simple installation sequence, the storage administrator specifies the file systems to be managed by the HyperQ, the prefetch policies for the remote site, and the network throttling requirements. Thereafter, the HyperQ manages acceleration and prefetch automatically, with no disturbance to clients. The storage administrator's job is done in three easy steps:

1. Define managed file systems.
2. Configure prefetch policies.
3. Configure network throttling.
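The white paper does not define what a migration policy looks like, so the following sketch shows one common pattern under stated assumptions: select files by idle time and size on a schedule and hand each one to the engine for relocation. The policy fields and function names are invented for illustration, and real engines (per the component list above) may operate at the sub-file level rather than on whole files as shown here.

```python
import os
import time

# Hypothetical policy format, invented to illustrate periodic rule-based selection.
POLICY = {
    "min_idle_days": 30,                   # untouched this long -> migration candidate
    "min_size_bytes": 10 * 1024 * 1024,    # only consider files over 10 MB
}

def select_for_migration(root, policy=POLICY):
    """Walk a managed file system and yield files the policy selects."""
    cutoff = time.time() - policy["min_idle_days"] * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff and st.st_size >= policy["min_size_bytes"]:
                yield path

# A scheduler (cron, a systemd timer, etc.) would run this periodically and
# pass each selected path to the migration engine for relocation between tiers.
```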

A single dashboard depicting local storage and cloud storage usage allows ongoing monitoring of the HyperQ and NAS hosts. It is simple and efficient, yet effective.

Key Advantages

Shared Cache at Remote Site: With the HyperQ remote office solution, WAN traffic is greatly reduced because all remote users share the same cached copy of a file rather than requesting private copies, which would cause the same file to be copied multiple times.

Policy Based Prefetch of Data: The HyperQ ensures that data is accessible at the remote site when the user needs it, without waiting for slow downloads across the WAN. The HyperQ features a policy engine that monitors a central data store for new relevant data and seeds the local cache on a predefined schedule, so the cache is always hot and users don't need to wait for a download.

Fast Writes: Large files save within seconds and the user can move on. The write via the WAN is performed by the HyperQ appliance without impacting the user.

Network Throttling: The HyperQ has special provisions to preserve bandwidth for purposes other than data replication. Remote offices also need bandwidth to access web-based applications and services as well as VoIP, so storage replication must leave enough bandwidth to ensure smooth operations. The HyperQ provides a very sophisticated solution for fine-tuning that behavior.

Scalability: Scale to your needs in cost and storage requirements, choosing between cost-effective pricing tiers.

Conclusion

The HyperQ offers a new solution to the performance/bandwidth conundrum in remote offices by delivering high performance and bandwidth savings together at reduced cost. By implementing a shared cache and intelligent automatic replication between a central data store and the HyperQ, remote offices can access and share data with unprecedented performance and a great user experience at minimal cost.

Contact Us

Parsec Labs, LLC.
7101 Northland Circle North, Suite 105
Brooklyn Park, MN 55428 USA
1-763-219-8811
www.parseclabs.com | info@parseclabs.com | sales@parseclabs.com