Scaling Web Applications on Server-Farms Requires Distributed Caching
A White Paper from ScaleOut Software
Dr. William L. Bain, Founder & CEO

Spurred by the growth of Web-based applications running on server-farms, a new category of data is beginning to play a key role in driving high performance. Called workload data, this information has the unusual combination of properties of being simultaneously mission-critical, frequently accessed, and often short-lived. Workload data can either be generated by the application or represent data cached from recent accesses to a database server. Examples include intermediate business-logic state, Web session-state, and cached data sets. This combination of characteristics has created a fast-growing need for applications to store workload data in a distributed cache that is shared across the server-farm.

A distributed cache holds key workload data in memory for high performance. It also allows simultaneous access from any server in the farm to simplify application design. Distributed caching enables server-farms to meet the new performance, scalability, and availability challenges created by the management of workload data. This white paper examines the need for distributed caching and its importance in building server-farm applications. It describes ScaleOut StateServer, a distributed cache for the .NET platform, and shows how it can be used to easily integrate distributed caching into the design of new applications. The traditional method of storing fast-changing application state in database servers no longer meets the needs of today's server-farms. The growing use of server-farms in mission-critical applications has made distributed caching a key requirement for delivering high performance and scalability.

Growth of Server-Farms

Over the past several years, server-farms have rocketed in popularity because they remove a crippling barrier to the growth of Web-based applications.
By enabling applications to both scale in capacity and provide non-stop availability, server-farms have played a major role in the rapid expansion of Web-based computing. The emergence of hardware and software IP load-balancers in the late 1990s catalyzed the development of server-farms by transparently distributing incoming client requests among the servers of a farm. However, Web-based applications and services were not designed to run on server-farms. As a result, all application data had to be stored in a database server so that the application could retrieve it from any server in the farm. Copyright 2006 by ScaleOut Software, Inc.
This constraint limits the application's performance and scalability, and it imposes an additional load on the database server. In essence, server-farms move the performance bottleneck from the Web server to the database server.

Typical Web Farm Architecture

Distributed caching solves these problems by enabling Web-based applications to maintain memory-based state and avoid repeated round-trips to the database server. For distributed caching to be effective, application state must be uniformly accessible from all servers. This enables the IP load-balancer to distribute client requests to any server in the farm. Distributed caching thereby creates a breakthrough in application performance, and it eliminates the bottleneck at the database server. It also helps maintain the simplicity of application design.

Workload Data

For Web-based applications, workload data represent the application's state information that must be maintained between client requests. For example, e-commerce applications use shopping carts to record each customer's progress in a shopping session. Online banking applications might maintain pro forma account balances and login state to back-end mainframe servers during customer sessions. In addition, Web-based applications typically maintain cached database information, such as category lists, for rapid access in producing output pages. Other types of server-farm and grid applications also make use of workload data that represent the state of a distributed application. For example, scientific modeling and weather forecasting applications keep application state in the form of a distributed grid that is partitioned among the servers. Likewise, multi-player games often distribute the various regions of a game and its players among the servers of a farm.
All of these examples of workload data share a common set of characteristics that are important to their usage on server-farms:

mission-critical: Loss of data can have a significant impact on the overall application; for example, loss of a shopping cart can result in the loss of a sale in an e-commerce transaction.

short-lived: The data are only used for the duration of a session or of the overall application, and once a transaction is committed, they may no longer be needed; for example, after an e-commerce purchase is made, the shopping cart is removed. Note that some workload data cached from a database server may persist beyond the duration of an individual session.

frequently accessed: Workload data are frequently accessed and updated during their relatively short lifespan; for example, the state of an online banking transaction may change rapidly as the user progresses through a session. Unlike cached database data, fast-changing workload data have a much higher ratio of writes to reads.

The characteristics of workload data present a striking contrast to long-lived, line-of-business (LOB) data, such as accounting information, customer profiles, scientific data, census information, and other similar data, which are typically held in database servers. The following table shows how these two types of data tend to differ:

                      LOB Data         Workload Data
Volume                High             Low
Lifetime/turnover     Long/slow        Short/fast
Access patterns       Complex          Simple
Data preservation     Critical         Less critical
Fast access/update    Less important   More important

Comparison of Workload Data to LOB Data

These differences create new challenges for effectively storing workload data on server-farms.
Traditional Approaches for Storing Workload Data

As the need to efficiently manage workload data on server-farms has become increasingly important, Web architects have explored various approaches to this problem, each of which has important shortcomings:

client-side cookies: This approach, which is often used in client/server systems, transparently stores session data on the client system. Since the information must travel back and forth between client and server on each page access, the data size must be kept very small, and security can be a challenge. This approach is only suitable for client/server applications.
in-memory storage on the server: This technique stores workload data in memory on the server. For client/server applications, it requires that session affinity to a particular server be maintained by the load-balancer, and this limits scalability. Many ISPs routinely change IP addresses mid-session, which can cause sessions to be lost. Also, a server cannot be taken down for maintenance until all sessions have drained from it. Two additional limitations of this approach are that in-memory storage is lost if a server fails and that this storage is not accessible to other servers in the farm.

memory-based storage server: This solution uses a dedicated server to hold workload data for a server-farm and solves the issues associated with session affinity by making workload data accessible to all servers. However, it also introduces a single point of failure that can make all session data unavailable farm-wide should the storage server fail. Also, this method of storage does not scale with growth in the server-farm and can easily become a performance bottleneck for larger server-farms.

database server: Because database servers have been optimized over many years for reliably storing and searching large volumes of long-lived, line-of-business data, they typically are not well suited for holding workload data. The repeated reads and writes of workload data tend to impede application performance and slow down the database server. Also, unless the database server is running on a server cluster, it represents a single point of failure for the farm. Because clustering tends to be expensive and complicates management of the farm, it usually is implemented only in very high-end operations.

The limitations of these traditional storage techniques can be summarized in the following table.
Note that none of these techniques simultaneously solves the challenges of providing performance, scalability, and high availability. The table also shows that distributed caching meets all of these challenges; this will be explored in detail in the next section.

Storage method            Performance   Scalability   High availability
Client-side cookies       No            Yes           Yes
In-process memory         Yes           No            No
Memory-based server       Yes           No            No
Database (not clustered)  No            No            No
Database (clustered)      No            No            Yes
Distributed caching       Yes           Yes           Yes

Comparison of Storage Methods on Server-Farms

Because there was no choice in the past, Web architects were forced to choose among the traditional alternatives for storing workload data. However, these methods of
storing workload data are no longer sufficient for serious applications running on server-farms. The growing use of server-farms in mission-critical applications has made the effective handling of workload data increasingly important in maintaining high performance and availability.

Why is each of these techniques insufficient for today's server-farms? The potentially voluminous and sensitive nature of workload data rules out the use of client-side cookies as a general solution; cookies are primarily used only to hold login information. In-process storage is too volatile for mission-critical workload data, and its use impedes the IP load-balancer's ability to scale performance. A stand-alone memory-based server or database server introduces a new bottleneck to performance scaling, and it also creates a single point of failure. Clustered database servers avoid the single point of failure, but they do not remove the performance bottleneck. The key limiting factor for scalability on server-farms is the performance bottleneck for workload data storage.

The Next Generation: Distributed Caching

The recent recognition of workload data as being uniquely demanding has required the development of new technology. To maximize performance and scalability, workload data should be kept as close as possible to the application that uses it, ideally in memory on the server-farm. This can be accomplished using a distributed, in-memory cache readily accessible to applications on all servers. Distributed caching boosts performance and eliminates the bottleneck of repeatedly saving and retrieving workload data in a database server. It also minimizes overall cost by avoiding the use of expensive clustered database resources. Interestingly, this approach has been successfully used for many years to maximize performance in high-performance scientific computing.
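The basic pattern described above, reading workload data from the in-memory cache and paying the database round-trip only on a miss, can be sketched in a few lines. This is an illustrative sketch in Python (the product itself targets .NET); the names `cache`, `db_fetch`, and `get_session` are hypothetical stand-ins, not any product's API:

```python
cache = {}  # stands in for the farm-wide, in-memory distributed cache

def db_fetch(session_id):
    # Placeholder for an expensive round-trip to the database server.
    return {"cart": [], "user": session_id}

def get_session(session_id):
    """Return session state, hitting the database only on a cache miss."""
    state = cache.get(session_id)
    if state is None:
        state = db_fetch(session_id)   # one round-trip on first access
        cache[session_id] = state      # later requests are served from memory
    return state
```

After the first access, repeated reads and updates for a session never touch the database, which is what removes the bottleneck described above.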
However, keeping workload data in a distributed, in-memory cache creates three important new challenges that the cache designer must solve for this approach to be viable:

scalable access: To avoid any bottlenecks to scaling performance as the server-farm grows, workload data must be automatically distributed, or partitioned, across the server-farm for parallel access. This allows applications running on all servers to simultaneously access different stored data. Partitioned data must be dynamically load-balanced among the servers to avoid creating hot spots in the farm over time.

uniform accessibility: Workload data must be uniformly accessible to all servers in the farm. This allows an application to access workload data that may be stored on a different server, and it enables the load-balancer to direct client requests to any server. By doing this, all workload data can be readily accessed independent of where it is stored on the server-farm.

high availability: Storing workload data in a distributed, in-memory cache also requires that the data be able to survive the failure of a server. The cache must copy, or replicate, workload data to additional servers in order to avoid data loss if a server becomes unavailable. However, to maintain scalability, workload data
must not be replicated to all servers; otherwise, storage requirements would increase too quickly as the server-farm and its workload grow.

To be effective in managing workload data on a server-farm, the distributed, in-memory cache must incorporate and integrate these key mechanisms: load-balanced data partitioning, uniform accessibility, and replicated storage. Together, these mechanisms, along with other capabilities such as optimized internal buffering and storage strategies, enable applications to quickly store and access workload data, and they help maintain uninterrupted service after failures occur or as servers are added to the farm. Implementing these mechanisms requires that the distributed cache coordinate its actions across all servers using an efficient error detection and recovery mechanism that monitors the health of servers within the farm and heals the distributed cache when a failure occurs. By doing so, the storage architecture can present a very simple and compelling view to applications, one that hides the complexity involved in handling these distributed computing issues. In fact, applications need not be aware that they are running on a server-farm and that their workload data has been intelligently distributed across the farm to scale performance.

Another advantage of a distributed, in-memory cache is that applications can keep workload data in its natural, object-oriented form and avoid converting it to a relational form for storage in a database server. A distributed, in-memory cache typically stores workload data as an opaque, binary stream of data that has been automatically serialized from the objects that the application manages.
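As a rough illustration of this serialization step (in Python rather than .NET, using the standard `pickle` module purely as a stand-in for a cache's own serializer), an application object is converted to an opaque byte stream for storage and restored on any server, with no hand-written conversion code:

```python
import pickle

# A hypothetical session object in its natural, object-oriented form.
cart = {"customer": "alice", "items": ["book", "lamp"]}

blob = pickle.dumps(cart)      # serialize to an opaque binary stream for the cache
restored = pickle.loads(blob)  # deserialize on whichever server reads it back

# The restored object is an equal, independent copy of the original.
```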
No additional application code is necessary to convert an application's objects into the form needed for out-of-process cache storage.

ScaleOut StateServer

ScaleOut StateServer from ScaleOut Software provides distributed caching for server-farms. This product runs as a software service on a Web or application server-farm and implements the cache within the farm's random access memory (RAM). It can be used to create, read, update, and remove opaque workload data objects based on an identifying key. Stored objects either have been serialized from datasets or record sets that were previously accessed from the LOB database, or are generated as business-logic objects. To ensure scalable performance and high availability, ScaleOut StateServer incorporates the three key mechanisms of data partitioning, uniform accessibility, and replicated data storage. It delivers scalable throughput by partitioning and dynamically load-balancing workload data across the servers within a farm, as illustrated in the following diagram. It works seamlessly with an IP load-balancer and enables every server to access any workload data object stored in the cache. Using patent-pending technology, it efficiently replicates all data across selected additional servers to ensure high availability, and it requires less than ten seconds to fail over after a server failure (versus one or more minutes for a clustered DBMS).
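The combination of key-based partitioning and limited replication described above can be sketched as a simple hash-based mapping from keys to servers. This is a minimal sketch, not ScaleOut StateServer's actual algorithm; the server names, hash function, and replica count are illustrative assumptions:

```python
import hashlib

SERVERS = ["server-a", "server-b", "server-c", "server-d"]  # illustrative farm
REPLICAS = 2  # replicate to a fixed number of servers, never to all of them

def owners(key, servers=SERVERS, replicas=REPLICAS):
    """Map a key to its primary partition server plus backup replica servers."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    primary = h % len(servers)  # deterministic: every server computes the same owner
    return [servers[(primary + i) % len(servers)] for i in range(replicas)]
```

Because every server computes the same mapping, any server can locate any object (uniform accessibility), and the fixed replica count keeps storage growth proportional to the data rather than to the farm size.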
When a new server is added to the farm, ScaleOut StateServer automatically integrates the server into the distributed cache, and a portion of the load is migrated to it. The cache automatically self-heals after a server failure to restore full redundancy. The product employs a peer-to-peer architecture that eliminates the use of a master server and enables the distributed cache to recover from multiple server failures.

Data Partitioning in ScaleOut StateServer

The distributed, in-memory cache provides much faster response time than a database server by eliminating the need for all servers to repeatedly access a shared database. For example, consider the scenario of using a distributed cache for an e-commerce Web farm's session-state objects. Although the aggregate Web server load grows with the size of the farm (for example, 500 hits per second per server), the back-end database server handles a fixed maximum request rate (850 requests per second in this example). If the farm doubles in size, the database server can only deliver half the original throughput to each Web server. In addition, each database update request may require a millisecond or more to write to disk and confirm completion, and the database server's throughput decreases as the database table grows to accommodate many user sessions. As the Web server-farm grows in capacity to meet demand, the database server must be migrated to faster and more expensive multiprocessor hardware to increase its throughput rate.
Web Farm Scenario with a Conventional DBMS Server (database server limited to 850 req/s)

Compare this scenario to the use of ScaleOut StateServer hosted directly on the Web server-farm. The distributed cache's throughput grows as the sum of the throughput of each storage partition (800 requests per second in this example), so that overall throughput and application response time per Web server remain constant as the farm grows in capacity. Unlike a database server, the distributed cache's throughput increases to match the growing workload on the server-farm. Further, each update request completes more quickly since no disk access is required to commit the update.

Web Farm Scenario with ScaleOut StateServer (one 800 req/s cache partition per server)
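The arithmetic behind these two scenarios can be captured in a few lines, using the paper's example figures of 850 req/s for the fixed-capacity database server and 800 req/s per cache partition:

```python
DB_CAPACITY = 850         # req/s: fixed, regardless of farm size (paper's example)
PARTITION_CAPACITY = 800  # req/s per cache partition, one partition per server

def db_share_per_server(farm_size):
    """Database throughput available to each Web server: shrinks as the farm grows."""
    return DB_CAPACITY / farm_size

def cache_share_per_server(farm_size):
    """Distributed-cache throughput per Web server: capacity grows with the farm."""
    return PARTITION_CAPACITY * farm_size / farm_size  # constant per server
```

Doubling the farm halves what the database can deliver to each server, while the per-server share of the distributed cache stays constant at 800 req/s.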
Performance Tests

To measure the performance advantage of a distributed, in-memory cache, such as ScaleOut StateServer (SOSS), over a database server (DBMS) for storing workload data, a .NET test application was used to compare response times during repeated accesses to these two storage mechanisms. This test measured the average response time for a repeated sequence of reads and updates to a pool of stored objects. The tests were run while varying the following parameters: object size (10B to 1MB), number of stored objects (2K to 5.6M), and number of servers (1 to 28). (Note that for a Web site storing session-state, the number of stored objects corresponds to the number of concurrent users on the site.) The server-farms consisted of IBM BladeCenter servers connected by Gigabit Ethernet; DBMS accesses were made to an IBM xSeries 366 server.

The test results clearly demonstrate the performance advantages of using a distributed cache for storing fast-changing workload data. The first chart shows the average response time for accessing 200 objects per host on a farm with one to 28 servers. SOSS's response time remains low as the farm and its storage load grow, while DBMS response time grows with additional load. SOSS's scalable throughput delivers performance gains ranging from 2.3 times for 6 hosts to 4.5 times for 28 hosts.

Average Response Time (msecs), 1KB Objects, 200 Objects/Server: SOSS vs. SQL

The performance advantages of using a distributed cache increase as more objects are stored. The second chart shows the average response time for stores with 50K, 100K, and 200K objects on a 10-server farm and for 560K objects on a 28-server farm. SOSS's response time remains low as more objects are added, while the DBMS's response time grows significantly.
SOSS's performance gain, which ranges from 4.6 to 16.7 times, results from the distributed cache's ability to spread the workload for accessing and updating objects across many servers. In contrast, for each access request, the DBMS must search a single database table that grows as more objects are stored. In tests with 10KB objects, SOSS lowered average response time by up to 25.3 times in comparison to the DBMS. This enables online applications such as very large Web sites to maintain fast response times while handling workload data for very large numbers of concurrent users.

Average Response Time (msecs), 1KB Objects, by Number of Stored Objects: SOSS vs. SQL

Summary

As companies move more and more key business applications to the Web, server-farms have become increasingly critical in maintaining the scalability and high availability of these applications. Also, as applications manage increasing volumes of workload data, effective storage techniques must be employed to avoid bottlenecks to scaling or loss of
mission-critical data. Traditional approaches to storing workload data, in particular the use of a database server, do not meet today's requirements for delivering high performance on server-farms.

Recent breakthroughs in workload data storage have eliminated the need to make a trade-off between performance, scalability, and high availability. Distributed, in-memory caching hosted directly on the server-farm offers the opportunity to simultaneously meet all of these requirements and also to reduce the load on expensive database resources. However, this new storage architecture presents challenges of its own. By integrating the techniques of data partitioning, uniform accessibility, and data replication, distributed caching can be used to dramatically scale the performance of server-farm applications while maintaining or even simplifying the application's view of the storage architecture.

ScaleOut StateServer, a software product for distributed caching from ScaleOut Software designed for .NET server-farms, incorporates patent-pending technology for scaling performance and replicating data. These features enable it to deliver highly scalable performance while maintaining high availability for today's server-farms. Measurements have shown that ScaleOut StateServer's performance quickly outpaces that of a database server as the load on a server-farm grows. By using a distributed, in-memory cache such as ScaleOut StateServer to provide workload data storage, application architects can be assured of a server-farm that will meet all the needs of their latest Web-based applications.

About ScaleOut Software: ScaleOut Software was founded in 2003 by William L. Bain. Bill has a Ph.D. (1978) from Rice University and has worked at Bell Labs Research, Intel, and Microsoft. Bill founded and ran three start-up companies prior to joining Microsoft.
In the most recent company (Valence Research, Inc.), he developed a distributed Web load-balancing software solution that was acquired by Microsoft and is now called Network Load Balancing within the Windows Server operating system.

SCALEOUT SOFTWARE, INC.
Headquarters: NE 8th Street, Suite 900, Bellevue, WA
Sales and Support: SW Koll Parkway, Suite J, Beaverton, OR
CitusDB Architecture for Real-Time Big Data CitusDB Highlights Empowers real-time Big Data using PostgreSQL Scales out PostgreSQL to support up to hundreds of terabytes of data Fast parallel processing
Chapter 18: Database System Architectures! Centralized Systems! Client--Server Systems! Parallel Systems! Distributed Systems! Network Types 18.1 Centralized Systems! Run on a single computer system and
Evaluation Report: Accelerating SQL Server Database Performance with the Lenovo Storage S3200 SAN Array Evaluation report prepared under contract with Lenovo Executive Summary Even with the price of flash
White Paper Intel I/O Acceleration Technology Accelerating High-Speed Networking with Intel I/O Acceleration Technology The emergence of multi-gigabit Ethernet allows data centers to adapt to the increasing
Using Oracle Real Application Clusters (RAC) DataDirect Connect for ODBC Introduction In today's e-business on-demand environment, more companies are turning to a Grid computing infrastructure for distributed
Elastic Application Platform for Market Data Real-Time Analytics Can you deliver real-time pricing, on high-speed market data, for real-time critical for E-Commerce decisions? Market Data Analytics applications
IS IN-MEMORY COMPUTING MAKING THE MOVE TO PRIME TIME? EMC and Intel work with multiple in-memory solutions to make your databases fly Thanks to cheaper random access memory (RAM) and improved technology,
EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009
ESG Lab Summary Hyper-V R2 SP1 Application Workload Performance Virtualizing SQL Server, SharePoint, and Exchange with Confidence This report presents a summary of Microsoft-commissioned testing of the
Microsoft SQL Server 2014 in a Flash Combine Violin s enterprise-class storage arrays with the ease and flexibility of Windows Storage Server in an integrated solution so you can achieve higher performance
Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide An Oracle White Paper July 2011 1 Disclaimer The following is intended to outline our general product direction.
SCHOONER WHITE PAPER Top 10 Reasons why MySQL Experts Switch to SchoonerSQL - Solving the common problems users face with MySQL About Schooner Information Technology Schooner Information Technology provides
ORACLE DATABASE 10G ENTERPRISE EDITION OVERVIEW Oracle Database 10g Enterprise Edition is ideal for enterprises that ENTERPRISE EDITION For enterprises of any size For databases up to 8 Exabytes in size.
HyperQ Storage Tiering White Paper An Easy Way to Deal with Data Growth Parsec Labs, LLC. 7101 Northland Circle North, Suite 105 Brooklyn Park, MN 55428 USA 1-763-219-8811 www.parseclabs.com firstname.lastname@example.org
MAS 200 MAS 200 for SQL Server Introduction and Overview March 2005 1 TABLE OF CONTENTS Introduction... 3 Business Applications and Appropriate Technology... 3 Industry Standard...3 Rapid Deployment...4
Microsoft Windows Server in a Flash Combine Violin s enterprise-class storage with the ease and flexibility of Windows Storage Server in an integrated solution so you can achieve higher performance and
Industry Trends and Technology Perspective Solutions Brief NetApp MultiStore Virtual Storage Server A look at MultiStore and how it addresses various requirements including storage consolidation, security,
The Essential Guide to SQL Server Virtualization S p o n s o r e d b y Virtualization in the Enterprise Today most organizations understand the importance of implementing virtualization. Virtualization
89 Fifth Avenue, 7th Floor New York, NY 10003 www.theedison.com @EdisonGroupInc 212.367.7400 IBM Spectrum Scale vs EMC Isilon for IBM Spectrum Protect Workloads A Competitive Test and Evaluation Report
White Paper Backup and Recovery for SAP Environments using EMC Avamar 7 Abstract This white paper highlights how IT environments deploying SAP can benefit from efficient backup with an EMC Avamar solution.
Parallel Replication for MySQL in 5 Minutes or Less Featuring Tungsten Replicator Robert Hodges, CEO, Continuent About Continuent / Continuent is the leading provider of data replication and clustering
Reference Architecture EMC PERFORMANCE OPTIMIZATION FOR MICROSOFT FAST SEARCH SERVER 2010 FOR SHAREPOINT Optimize scalability and performance of FAST Search Server 2010 for SharePoint Validate virtualization
Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging In some markets and scenarios where competitive advantage is all about speed, speed is measured in micro- and even nano-seconds.
Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest
openbench Labs Executive Briefing: April 19, 2013 Condusiv s Server Boosts Performance of SQL Server 2012 by 55% Optimizing I/O for Increased Throughput and Reduced Latency on Physical Servers 01 Executive
VERITAS Business Solutions for DB2 V E R I T A S W H I T E P A P E R Table of Contents............................................................. 1 VERITAS Database Edition for DB2............................................................
Realizing the True Potential of Software-Defined Storage Who should read this paper Technology leaders, architects, and application owners who are looking at transforming their organization s storage infrastructure
WHITEPAPER How Flash Storage is Changing the Game Table of contents What you will learn 2 Why flash 3 When to consider flash 4 Server-side flash: Read-only acceleration 4 Hybrid flash arrays: Getting the
Technical white paper HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Scale-up your Microsoft SQL Server environment to new heights Table of contents Executive summary... 2 Introduction...
Parallels Cloud Server White Paper An Introduction to Operating System Virtualization and Parallels Cloud Server www.parallels.com Table of Contents Introduction... 3 Hardware Virtualization... 3 Operating
INCREASING EFFICIENCY WITH EASY AND COMPREHENSIVE STORAGE MANAGEMENT UNPRECEDENTED OBSERVABILITY, COST-SAVING PERFORMANCE ACCELERATION, AND SUPERIOR DATA PROTECTION KEY FEATURES Unprecedented observability