Condusiv's V-locity Server Boosts Performance of SQL Server 2012 by 55%

openbench Labs Executive Briefing: April 19, 2013

Condusiv's V-locity Server Boosts Performance of SQL Server 2012 by 55%
Optimizing I/O for Increased Throughput and Reduced Latency on Physical Servers

Executive Overview

V-locity Server increased the SQL Server 2012 transaction processing rate by an average of 55% and improved response time by 56%.

Why Read This Document?

For this briefing, openbench Labs tested the ability of V-locity Server to optimize I/O in a SQL Server environment. Using SQL Server 2012, we tested a mix of a high volume of light-weight SQL select transaction processing (TP) queries, combined with heavy-weight background update queries. For the SQL Server benchmark testing, openbench Labs simulated 1 to 40 daemon processes (1 daemon generates the equivalent of 70 normally-queued user processes) issuing queries non-stop. When a real application user interacts with SQL Server, there is lag between issued queries. In our test scenario, however, the daemon processes issued queries without lag; that is, with no think-time, type-time, or pause-time between query activity.

In a series of tests, openbench Labs measured the ability of V-locity Server's IntelliMemory to offload I/O on read operations through dynamic caching, in order to boost throughput and reduce latency. In addition, we examined the ability of IntelliWrite technology to prevent unnecessary split I/Os, using its intelligence to extend current database files and create new log files as single contiguous collections of logical blocks.

In a test of SQL Server query processing, openbench Labs benchmark findings revealed that, on a server running SQL Server, V-locity enabled higher transactions per second (TPS) rates and improved response time by reducing I/O processing on storage devices. What's more, in a SAN- or NAS-based storage environment, V-locity Server reduced I/O stress on multiple systems sharing storage resources. Overall, V-locity Server can improve scalability by reducing average response time and enabling SQL Server to support more users.

Snapshot of Findings

1) In SQL Server 2012 processing tests involving 1 to 40 daemon processes, V-locity Server improved transaction performance by an average of 55%, before the I/O subsystem of our server began to physically limit the disk IOPS rate.

2) By improving the average response time of transactions by 56%, V-locity Server enabled SQL Server to scale enough to handle the transactions of 37% more daemon processes than the database could sustain without V-locity Server.

3) V-locity Server prevented 96% of the log file fragments that would otherwise have been created during a typical series of stress tests, contributing to the increase in transaction processing rate and the improved response time.

Executive Briefing: Jack Fegreus, April 19, 2013

Condusiv's V-locity Server Boosts Performance of SQL Server 2012 by 55%
Optimizing I/O for Increased Throughput and Reduced Latency on Physical Servers

With the release of SQL Server 2012, Microsoft is positioning SQL Server as a database for use with both large structured and unstructured (big data) applications. Whether in traditional data warehouse scenarios or as a link to unstructured data platforms such as Hadoop, sites with large databases containing large numbers of rows are now a key target, which makes SQL Server 2012 a good way for CIOs to initiate big data pilot projects.

Under Test: V-locity Server I/O Acceleration

Condusiv Technologies V-locity Server:

1) V-locity's read optimization (IntelliMemory) technology reduces disk I/O requests by predictively caching active data within the server to increase local IOPS performance, and reduces overhead on shared storage devices by reducing physical read I/O requests.

2) V-locity's write optimization (IntelliWrite) technology provides continuous I/O write optimization, using dynamic intelligence when creating or extending files so that they behave sequentially, eliminating split and other unnecessary I/Os for greater sequential throughput on both writes and subsequent reads.

3) By optimizing I/O processing on a server running V-locity, less I/O traffic is directed at an underlying SAN- or NAS-based storage system. Moreover, with IntelliWrite and IntelliMemory, both TP throughput and average response time were more predictable and scalable, as the TP rate increased 55% and response time improved by 56% over all test loads of 1 to 40 daemon processes on SQL Server 2012.

4) V-locity's advanced I/O optimization technology is compatible with all advanced storage features, such as replication, de-duplication, thin provisioning, and snapshots.

To provide a test case and example of a large mission-critical application that could not be fully cached by SQL Server 2012, and would be highly dependent on disk access rates for essential performance, openbench Labs created a 30GB instance of the ANSI AS3AP benchmark database and populated it with 32.5 million rows of data.

For IT operations, optimizing the I/O of transaction processing applications is complicated by the difference between the rate at which CPU and memory performance have advanced and the rate at which disk access times have improved. Attempts to deal with these issues often create CPU bottlenecks as processing stalls while waiting for the delivery of data. With I/O access time highly dependent on disk drive mechanics, IT frequently turns to costly hardware solutions involving solid-state drives (SSDs). Such hardware solutions, however, suffer from dependence on a hardware platform, under-utilization of resources, lack of flexibility with changing workloads, and high costs.

In this analysis, openbench Labs examines the ability of Condusiv Technologies' V-locity Server to maximize I/O performance in a high-TP SQL Server environment. By efficiently optimizing the way data is read from and written to disk on Windows systems, V-locity Server optimized the processing of database transactions sent from another application that was used to generate multiple SQL Server client requests.

Running a specific collection of SQL transactions for up to 40 daemon processes (1 daemon generating the equivalent of 70 normally-queued user processes) with randomly-generated values, such as indices and row numbers, we initiated transaction loads for variously-sized groups of users. Over multiple load tests, we measured an average increase in SQL Server transactions of 55% with V-locity Server.

Removing Barriers to I/O Performance

To optimize I/O, V-locity Server has been designed to resolve two very important issues:

1) It eliminates nearly all unnecessary I/O operations at the source when writing a file, which eliminates all unnecessary I/O operations on subsequent reads.

2) It caches frequently-accessed data to keep read requests from traveling the full distance to storage and back.

IntelliWrite Technology

V-locity Server solves the important issue of unnecessary I/O generation with IntelliWrite technology. IntelliWrite prevents the Windows OS from storing files as disjointed block sets in its logical block space representation of a logical storage volume. In a SAN- or NAS-based environment, the problem of unnecessary I/O is compounded as multiple systems share the same set of storage resources, which results in highly random I/O behavior being pushed down to the disk subsystem. The generation of unnecessary I/Os not only slows the originating system, it also slows other systems sharing the same storage arrays. To solve this, IntelliWrite adds more intelligence to the way the Windows OS preallocates file space in order to continuously restructure writes in a coherent manner, store files as contiguous sets of blocks, and prevent performance penalties. Moreover, when a file is accessed and modified in the future, IntelliWrite will automatically restructure that file for optimal I/O performance.

IntelliMemory Technology

Optimizing writes and eliminating unnecessary I/O operations does not resolve all important data access issues, especially when reading data. To provide a full spectrum of I/O optimization, V-locity Server implements IntelliMemory, a highly efficient data caching solution that leverages available server memory to provide faster access to data and dramatically improved throughput. Using sophisticated pattern analysis to determine which blocks will be placed in cache, IntelliMemory offloads a significant portion of I/O read operations from logical disks. More importantly, by offloading physical I/O, V-locity Server improves performance on all systems sharing storage resources via a SAN or NAS.
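Though it was not part of the openbench Labs test harness, one way to observe this kind of read offload from the database side is SQL Server's per-file I/O statistics. The following T-SQL is a minimal sketch; the database name as3ap is a hypothetical stand-in for the benchmark instance:

  -- Per-file read counts and average read latency for the benchmark database.
  -- N'as3ap' is a hypothetical name for the 30GB AS3AP benchmark instance.
  SELECT DB_NAME(vfs.database_id) AS database_name,
         mf.type_desc             AS file_type,
         vfs.num_of_reads,
         vfs.io_stall_read_ms,
         CAST(vfs.io_stall_read_ms AS float)
             / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms
  FROM   sys.dm_io_virtual_file_stats(DB_ID(N'as3ap'), NULL) AS vfs
  JOIN   sys.master_files AS mf
         ON  mf.database_id = vfs.database_id
         AND mf.file_id     = vfs.file_id;

Comparing num_of_reads and avg_read_latency_ms between runs with and without a server-side cache active indicates how many physical reads were absorbed by server memory rather than by the logical disk.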

SQL Stress Test

The Test

To assess the performance capabilities of V-locity Server, we configured a test scenario using three servers. The following is an overview of the test environment and process:

1) A Dell PowerEdge 1950 server, with a quad-core CPU and 8GB RAM, was used to run Windows Server 2012, V-locity Server, and SQL Server 2012.

2) We set up an independent logical disk for the ANSI AS3AP benchmark database. For our tests, we configured the database with 32.5 million rows, which translated into a 30GB database. With 8GB of RAM on the server, the entire database could not fit within the SQL Server cache. More importantly, SQL Server claimed 3.5GB of RAM but utilized less than 2GB as a workspace; as a result, a significant amount of I/O was directed at the disk subsystem.

3) We configured SQL Server to use a maximum of 3GB of server memory.

4) To generate both a batch and an interactive transaction processing load of SQL queries, we configured two servers, each with 2 CPUs, running Windows Server 2008 R2 and a SQL load generator, which used a random number generator to modify scripted shell transactions directed at the target database server.

5) We generated queries for each daemon user and each batch process by inserting random data into fixed templates, such that each TP query accessed one row and each batch processing task accessed at least 100 rows at a time (see the sketch after this list).

6) We launched all batch transactions in stress mode, without pausing for user think time or keyboard typing time.
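As an illustration of steps 3 and 5, the memory cap corresponds to SQL Server's standard sp_configure setting, and the two query shapes can be sketched as follows. The table and column names are hypothetical stand-ins rather than the actual AS3AP schema, and the parameters are supplied by the load generator:

  -- Step 3: cap SQL Server's memory at 3GB (3072 MB).
  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'max server memory (MB)', 3072;
  RECONFIGURE;

  -- Step 5: light-weight TP query touching a single row per transaction.
  -- (dbo.updates, key_col, and the @ parameters are hypothetical.)
  SELECT col_signed, col_float
  FROM   dbo.updates
  WHERE  key_col = @random_key;

  -- Step 5: heavy-weight batch update touching at least 100 rows at a time.
  UPDATE dbo.updates
  SET    col_signed = col_signed + 1
  WHERE  key_col BETWEEN @range_start AND @range_start + 99;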

[Charts: V-locity Server TP Performance. SQL Server OLTP load against the ANSI AS3AP benchmark database schema, plotting SQL transactions per second (TPS) and average transaction response time (ms) against the number of daemon processes (0 to 60), with and without V-locity Server running.]

V-locity Server did not change the internal scalability of our SQL benchmark, but rather created a more responsive environment for the benchmark to leverage. In particular, V-locity Server provided our benchmark with a significantly lower initial response time (26 ms vs. 55 ms) from which the benchmark could extend TP loads. While adding daemons increased transaction response time at the same rate with V-locity Server active, TP throughput increased more rapidly and more daemons could run. As a result, we were able to complete 39% more work at peak query execution rates.

The Results

Our goal was to assess the ability of V-locity Server to create an environment in which our SQL Server 2012 benchmark would be able to reach maximum performance and scalability. To achieve our goal, we first created a set of jobs that represented a fixed level of batch production processing. In particular, three simultaneous batch jobs independently accessed and updated a minimum of 100 database rows at a time as rapidly as possible.

To our base level of production processing, we added a sequence of TP daemon processes to represent interactive users, who would typically be creating and placing online orders. To maximize application stress on our system, we introduced no pauses for queuing time and forced each TP daemon to continuously launch simple queries that accessed one row of data per transaction. To test scalability, we added five TP daemons at a time, until the average query response time for each daemon exceeded 100 ms. As a result, the number of queries processed and the average query response time measured on each daemon load cycle provided a measure of the ability of V-locity Server to create an optimal environment in which to run an application driven by a SQL Server 2012 database.

We began by running our stress tests on SQL Server 2012 with the default memory configuration, which in theory allowed SQL Server to utilize all of the memory provisioned in our Dell PowerEdge server.

On this test, we measured the amount of memory claimed by SQL Server 2012 and the amount of memory that SQL Server 2012 actually utilized. In all of these tests, SQL Server 2012 conservatively claimed about 3GB of memory and utilized less than 2GB of the 8GB that were physically installed. Next, we reset the maximum memory for SQL Server from its default value to 3GB. When we made this change, overall TP throughput improved by 5% to 10% on nearly all of our test loads. As a result, we imposed a 3GB memory limit for all further testing.

The Results: A Closer Look

The following table highlights the test results:

V-locity Server Performance: SQL Server Stress, 1 to 55 Daemon Loads Applied to a 30GB Database

  Load         TPS With  TPS Without  TPS Delta  Improvement  Resp. With  Resp. Without  Resp. Delta  Improvement
  1 daemon*    38        18           20         108%         26 ms       55 ms          29 ms        112%
  5 daemons    141       79           62         78%          35 ms       62 ms          27 ms        77%
  10 daemons   248       161          87         54%          40 ms       62 ms          22 ms        55%
  15 daemons   348       187          161        86%          43 ms       80 ms          37 ms        86%
  20 daemons   364       253          111        44%          54 ms       78 ms          24 ms        44%
  25 daemons   410       283          127        45%          60 ms       88 ms          28 ms        47%
  30 daemons   454       342          112        33%          66 ms       87 ms          21 ms        32%
  35 daemons   484       381          102        27%          72 ms       91 ms          19 ms        26%
  40 daemons   492       402**        90         22%          81 ms       97 ms**        18 ms        22%
  45 daemons   513       --           --         --           87 ms       --             --           --
  50 daemons   527       --           --         --           91 ms       --             --           --
  55 daemons   559***    --           --         --           99 ms***    --             --           --

V-locity Server Database I/O Caching (IntelliMemory)

                With V-locity   Without V-locity
  Disk Reads    60,466 (58%)    N.A.
  Cache Reads   31,568 (42%)    N.A.
  I/O Latency   1.2 ms          6 ms

V-locity Server Database Log File Fragment Prevention (IntelliWrite)

                       With V-locity   Without V-locity   Prevention
  Log File Fragments   768 prevented   803 created        96%

* One daemon generates the equivalent of 70 normally-queued user processes.
** Without V-locity Server running, we reached the benchmark's maximum response time at 40 daemon processes.
*** With V-locity Server running, we were able to reach 55 daemon processes.

When we ran SQL Server with V-locity Server, IntelliMemory came into play for reads not cached by SQL Server. In particular, IntelliMemory cached 42% of the physical reads associated with running SQL Server. By lowering the number of physical I/Os, V-locity improved the overall IOPS performance of the server's entire I/O subsystem, which included faster writes to log files, which in turn reduced log file wait time.
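The log file wait time noted above can be observed in SQL Server's cumulative wait statistics. As an illustration (not a query from the original briefing), WRITELOG and data-page I/O waits can be sampled before and after a load cycle and the deltas compared between runs:

  -- Cumulative waits for log writes (WRITELOG) and data-page reads/writes.
  -- Sample before and after a load cycle; compare the deltas with and
  -- without the I/O optimization software active.
  SELECT wait_type,
         waiting_tasks_count,
         wait_time_ms,
         wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
  FROM   sys.dm_os_wait_stats
  WHERE  wait_type IN (N'WRITELOG', N'PAGEIOLATCH_SH', N'PAGEIOLATCH_EX');

A lower average WRITELOG wait between runs would be consistent with the contiguous log files that IntelliWrite produces.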

For workloads of 1 to 40 daemons (approximately 70 normally-queued users per daemon), V-locity Server improved SQL Server TP performance by an average of 55%. Similarly, V-locity Server improved transaction response time by an average of 56%. With the average transaction response time on the initial single-user test improved by more than 100% (26 ms vs. 55 ms) in a V-locity Server environment, we were able to complete tests on which the average query response time was less than 100 ms for workloads that included up to 55 daemons; without V-locity Server, the system maxed out at 45 daemons. What's more, we were able to scale workloads and process close to the maximum number of transactions that our physical server was capable of handling while keeping the average transaction response time under 100 ms.

Bottom Line

On a server running SQL Server 2012, V-locity Server created an environment that enabled up to 55% higher TPS rates, improved transaction response time by 56%, and enabled SQL Server to process 39% more transactions at peak transaction rates. As a result, IT has a powerful tool to maximize the ROI associated with any business application initiative driven by SQL Server at the back end.

Westborough, Mass.-based openbench Labs was founded in 2005 by Dr. Jack Fegreus. openbench Labs is a trusted IT industry source, providing hands-on evaluation and certification of Information Technology products and services. openbench Labs enjoys a unique position in the information technology sector. As the premier independent test lab and provider of third-party validation services, OBL has worked with virtually every major vendor and evaluated the most important products and technologies to appear over the past decade.

© 2013 openbench Labs. All Rights Reserved. V-locity, IntelliWrite, InvisiTasking and IntelliMemory are trademarks or registered trademarks of Condusiv Technologies Corporation. All other trademarks are the property of their respective owners.