Data Center Performance Insurance


How NFS Caching Guarantees Rapid Response Times During Peak Workloads
November 2010

Saving Millions By Making It Easier And Faster

Every year, slow data centers and application failures cost companies hundreds of millions of dollars. Centralized storage caching applies the well-known concept of caching using high-speed DRAM and flash memory, but adds a new and innovative architecture that offers data center performance insurance.

Data Center Challenge: Surviving Peak Workloads

Typically, a data center's inability to process peak workloads stems from the I/O bottleneck inherent to traditional storage architectures. Facing the pain of slow, sequential data access from mechanical hard disk drives, attempts to solve the problem have ranged from over-provisioning parallel disks to placing cache memory directly in compute servers or storage devices. All of these solutions have been expensive and unable to close the widening server-storage performance gap.[1]

Shortfall Of Existing Solutions

1. Parallelizing disk I/O does not accelerate response time. It still takes milliseconds to access data on a mechanical disk drive, no matter how many of them are available.
2. Traditional cache capacity is very limited in servers or storage systems. Storage experts recommend sizing a disk cache at ten percent of the disk's capacity. Following this rule-of-thumb, a terabyte disk would need 100GB of cache, which is unheard of.
3. Server and storage devices contain closed caches: the cache resource is not usable by any other device.

Disk Drive Performance Shortfall

Using multiple disk drives and striping data across them to increase I/O operations per second (IOPS) can improve throughput but not reduce I/O response times.[2] The root cause is the mechanical process of accessing disk data.

[1] The Server Storage Performance Gap, Whitepaper, Violin Memory; and The IO Performance Gap, StorageIO Group
[2] The Disk Drive Shortfall, Technical Whitepaper, Violin Memory
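The arithmetic behind point 1 above can be sketched with a toy model (a hypothetical illustration with assumed latency figures, not vendor measurements): striping across N drives multiplies aggregate IOPS, but each individual request still pays the full mechanical delay.

```python
# Toy model of disk striping: aggregate IOPS scales with the number of
# drives, but per-request response time (seek + rotational latency) does
# not improve. The latency constants are illustrative assumptions.

SEEK_MS = 4.0        # average seek time, assumed
ROTATION_MS = 3.0    # average rotational latency, assumed

def per_request_ms() -> float:
    """Service time for one random I/O on a mechanical drive."""
    return SEEK_MS + ROTATION_MS

def aggregate_iops(num_drives: int) -> float:
    """IOPS across a stripe set: each drive serves requests independently."""
    return num_drives * (1000.0 / per_request_ms())

for n in (1, 8, 64):
    print(f"{n:3d} drives: {aggregate_iops(n):8.0f} IOPS, "
          f"response time still {per_request_ms():.1f} ms per request")
```

Throughput grows 64-fold with 64 drives, yet every request still waits roughly 7 ms, which is the shortfall the white paper describes.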

Figure 1. Typical Disk Drive Based Storage Device Performance Profile

Moving physical parts (rotating the magnetic platter and repositioning the actuator) implies a significant millisecond delay in responding. As additional activity or application workload increases, subsequent I/O requests stall, forming an I/O request queue. Although I/O queues can be reduced by parallel processing, individual I/O response times stay the same. For typical drive-based storage devices, the I/O response time increases with a growing number of IOPS. When I/O bottlenecks emerge, response time exceeds acceptable Service Level Agreements (SLAs), as shown in Figure 1. To date there has not been any way to insure a specific service level for IOPS.

Caching

Caching is a well-known method that has proven extremely effective in many areas of computer design. Mitigating I/O bottlenecks with local caches that duplicate original values has long been used to accelerate data access. Once data is stored in a local cache, future operations access the cached copy rather than re-fetching it from the mechanical disk drive. Until now, the caching concept has been used primarily in compute servers and storage devices, both of which have strict limitations.
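The read-through pattern just described can be sketched in a few lines. This is a generic illustration: the dictionary-backed cache and the `slow_disk_read` helper are hypothetical stand-ins, not Violin's implementation.

```python
import time

def slow_disk_read(block_id: int) -> bytes:
    """Stand-in for a mechanical-disk read with millisecond latency."""
    time.sleep(0.005)  # simulate ~5 ms of seek + rotation
    return f"data-{block_id}".encode()

class ReadThroughCache:
    """On a miss, fetch from the backing store and keep a copy;
    on a hit, serve the cached copy and skip the disk entirely."""

    def __init__(self):
        self._store = {}   # block id -> cached bytes
        self.hits = 0
        self.misses = 0

    def read(self, block_id: int) -> bytes:
        if block_id in self._store:
            self.hits += 1
            return self._store[block_id]      # fast path: no disk access
        self.misses += 1
        data = slow_disk_read(block_id)       # slow path: pay disk latency once
        self._store[block_id] = data
        return data

cache = ReadThroughCache()
cache.read(42)                    # miss: pays the disk latency
cache.read(42)                    # hit: served from memory
print(cache.hits, cache.misses)   # prints "1 1"
```

The first access is as slow as the disk; every repeat access is a memory lookup, which is the entire value proposition of caching frequently accessed data.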

Server-based Caching Shortfall

Server-based caching uses part of a compute server's main memory to cache data, either within the application or in the storage device driver. The amount of usable memory within a compute server is typically capacity constrained, as the application consumes most of the memory itself.

Figure 2. Server-based Caching

Server-based caching does not scale past the compute server, making it a non-sharable, limited resource, as shown in Figure 2.

Storage Device Caching Shortfall

Storage device caching equips the storage subsystem with memory to cache frequently accessed data. Typically the cache is proprietary and small. For example, 300GB disks commonly contain 16MB of cache but would actually need 30GB to be consistent with the sizing recommendation discussed earlier. A 100TB storage system may need 10TB of cache, but the typical NFS storage system only supports 1TB of relatively slow MLC flash. Storage device caching does not scale past the particular storage subsystem, making it a non-sharable resource, as shown in Figure 3.
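The sizing figures above follow directly from the ten-percent rule-of-thumb. A quick check (the helper function is a hypothetical illustration, using decimal units):

```python
def recommended_cache_bytes(disk_capacity_bytes: float) -> float:
    """Ten percent of disk capacity, per the sizing rule-of-thumb."""
    return 0.10 * disk_capacity_bytes

GB = 10**9
TB = 10**12

assert recommended_cache_bytes(300 * GB) == 30 * GB   # 300GB disk -> 30GB cache
assert recommended_cache_bytes(100 * TB) == 10 * TB   # 100TB system -> 10TB cache

# A 16MB on-disk cache falls short of the 30GB recommendation by ~1875x.
print(recommended_cache_bytes(300 * GB) / (16 * 10**6))
```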

Figure 3. Storage Device Caching

Placing cache in a storage subsystem is also a costly affair that customers are forced to accept because of the proprietary nature of most storage systems.

Solution: Centralized Storage Caching

Centralized storage caching applies the well-known concept of caching to create a central, sharable, and scalable caching resource that works with existing data center architectures. It keeps frequently accessed data in a very large central memory pool instead of relying solely on traditional hard disk drives. For example, the vcache system leverages flash technology to enable 1-15 terabytes of NFS cache. Centralized storage caching enables high-performance data access by avoiding time-consuming disk I/O, and accelerates applications through minimal I/O response times and increased data throughput.

Figure 4. Centralized Storage Caching

Centralized storage caching can be implemented with a sharable and scalable caching system that transparently integrates with existing data center architectures. This means no software to install or hardware to add to existing compute servers or storage subsystems. It can keep frequently accessed data from hundreds of storage systems at hand and service I/O requests from thousands of concurrent clients in parallel with minimal response times.

Consolidating cache resources maximizes their use through sharing and simplifies management as a single scalable resource.

Violin Memory vcache Technologies

The Violin Memory architecture for scalable NFS caching systems, called vcache, is based on a number of indispensable technologies that provide minimal response times and high-performance parallelism for a large cache.

Figure 5. vcache Technology Architecture

Connected Memory Architecture

The Connected Memory Architecture combines DRAM and flash memory into a unified large cache pool. This patent-pending technology is the foundation for scalable caching appliances and is responsible for sharing data across all modules.

Memory Coherence Protocol

The Memory Coherence Protocol ensures constant response times across a large number of DRAM-based cache modules.

Real Time Compressor

The real-time data compressor dramatically increases effective internal network throughput and cache capacity. This patent-pending technology allows the solution to go beyond the traditional physically available limits.

Cache Directory

The Cache Directory is a shared resource, available across all caching modules, that contains the data and intelligence about current cache content. It is used by the policy engine, managed by the cache manager, and accessed by storage clients.
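A cache directory of the kind described above can be sketched as a shared map from block identifiers to the cache module holding each block, with an eviction policy deciding what stays resident. The following is an illustrative LRU sketch only; the actual vcache directory format and policies are proprietary and not public.

```python
from collections import OrderedDict

class CacheDirectory:
    """Maps block IDs to the cache module holding them, evicting the
    least-recently-used entry when full. Hypothetical sketch, not the
    real vcache directory."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._entries = OrderedDict()  # block id -> module index, LRU-ordered

    def lookup(self, block: str):
        if block in self._entries:
            self._entries.move_to_end(block)   # mark as recently used
            return self._entries[block]
        return None                            # miss: not cached anywhere

    def insert(self, block: str, module: int) -> None:
        if block in self._entries:
            self._entries.move_to_end(block)
        self._entries[block] = module
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict the LRU entry

d = CacheDirectory(capacity=2)
d.insert("blk-a", module=0)
d.insert("blk-b", module=1)
d.lookup("blk-a")              # touch blk-a, so blk-b becomes LRU
d.insert("blk-c", module=0)    # directory full: evicts blk-b
print(d.lookup("blk-b"))       # prints "None" (evicted)
print(d.lookup("blk-a"))       # prints "0" (still resident)
```

Keeping this directory shared across all modules is what lets any client locate cached data regardless of which module holds it.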

Cache Manager

The Cache Manager provides a simple cache resource management framework within which cache memory policies can be created or modified.

System Manager

The System Manager provides a simple management framework for cache modules.

Policy Engine

The Policy Engine enforces heuristic or user-defined policies for caching algorithms and event-driven caching.

Application Caching Profiles

The Application Caching Profiles are settings optimized for particular workloads and data center applications, such as databases, operations on a small number of large files, or operations on a large number of small files.

Data Center Performance Insurance

Service level agreements for data center applications commit to an acceptable response time. Typically the acceptable threshold is based on customer requirements specifying measurable objectives. When additional workload hits an I/O-constrained, disk-based storage subsystem, response time increases and performance suffers. The more severe the bottleneck and the higher the peak workload, the faster response time will deteriorate (i.e., increase) beyond acceptable levels. Often, when data center applications drive more than 100K IOPS, the performance SLA will fall short, as shown in the Disk Drive Profile of Figure 6.
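The deterioration described above is what elementary queueing theory predicts: as offered load approaches a disk subsystem's capacity, mean response time grows without bound. A minimal M/M/1 sketch (illustrative numbers and a simplified model, not vendor measurements):

```python
def mm1_response_ms(service_ms: float, offered_iops: float) -> float:
    """M/M/1 mean response time: R = S / (1 - rho), where
    rho = offered load / capacity. Returns infinity at saturation."""
    capacity_iops = 1000.0 / service_ms
    rho = offered_iops / capacity_iops
    if rho >= 1.0:
        return float("inf")      # saturated: queue grows without bound
    return service_ms / (1.0 - rho)

# A hypothetical array of 100 drives at ~7 ms per I/O caps out near
# 14,300 IOPS; response time explodes as total load approaches that.
service_ms, drives = 7.0, 100
for load in (5_000, 10_000, 13_000, 14_000):
    per_drive = load / drives
    print(f"{load:6d} IOPS -> {mm1_response_ms(service_ms, per_drive):6.1f} ms")
```

The curve stays flat at light load and then turns sharply upward near capacity, which is exactly the hockey-stick shape of the Disk Drive Profile in Figures 1 and 6.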

Figure 6. Data Center with Performance Insurance

Centralized storage caching provides protection by guaranteeing minimal response times (below 1ms, even above 100K IOPS). This keeps response times at consistently acceptable levels, even through peak workloads.

Conclusion

Centralized storage caching provides data center performance insurance by eliminating I/O bottlenecks. Its seamless integration with existing infrastructure makes it easy to deploy without disrupting IT or business operations. This central, sharable, and scalable approach efficiently ensures that robust service level agreements are met in the face of escalating data center demands.

Violin Memory accelerates storage and delivers real-time application performance with vcache NFS caching. Deployed in the data center, scalable Violin Memory vcache systems provide transparent acceleration for existing storage infrastructures to speed up applications, eliminate peak-load disruptions, and simplify enterprise configurations.

© 2010 Violin Memory. All rights reserved. All other trademarks and copyrights are property of their respective owners. Information provided in this paper may be subject to change. For more information, visit www.violin-memory.com

Contact Violin
Violin Memory, Inc. USA
2700 Garcia Ave, Suite 100, Mountain View, CA 94043
33 Wood Ave South, 3rd Floor, Iselin, NJ 08830
(888) 9-VIOLIN Ext 10 or (888) 984-6546 Ext 10
Email: sales@violin-memory.com
www.violin-memory.com