Violin Memory 7300 Flash Storage Platform Supports Multiple Primary Storage Workloads

Web Server, SQL Server OLTP, Exchange Jetstress, and SharePoint Workloads Can Run Simultaneously on One Violin Memory 7300 FSP

Evaluation report sponsored by Violin Memory

Introduction

Today's datacenter is responsible for delivering multiple application services. In a typical datacenter, all-flash arrays (AFAs) are often deployed for only one or two workloads. While all-flash storage arrays like the Violin Memory 7300 Flash Storage Platform may provide a large number of I/O operations per second (IOPS) to the datacenter, customers have been reluctant to stress these devices because the IOPS numbers quoted on product datasheets are not representative of how real-world workloads behave. In many cases, real-world workloads involve larger block sizes than the 4K blocks typically used to obtain the datasheet IOPS numbers, and these workloads often vary the block size as the work progresses. Compared to synthetic workloads, these real-world workloads have variable and sometimes unpredictable effects on the storage array.

Violin Memory commissioned Demartek to test the Violin Memory 7300 FSP by running four workloads against it at the same time. Demartek ran web servers, Microsoft SQL Server (MSSQL) OLTP, Microsoft Exchange Jetstress, and Microsoft SharePoint on the storage volumes provided by the Violin Memory 7300 FSP.

The flash in the Violin Memory 7300 FSP takes the form of the Violin Intelligent Memory Module (VIMM), a key component of the Flash Fabric Architecture that is designed to perform more work in parallel than a storage array built around the typical SSD form factor. This parallelism helps make Violin Memory storage systems capable of supporting multiple concurrent workloads.
Key Findings

In our tests, the combination of real-world workloads achieved approximately 300,000 IOPS, consisting of larger application I/O block sizes, mostly ranging from 8K bytes to 64K bytes; some of the application servers were saturated by their workloads. As a check, we also performed a synthetic 4K block size read-only Iometer test with the same storage configuration and achieved close to one million IOPS with the synthetic workload.
We observed latencies in the low hundreds of microseconds at the storage system while all four workloads were running. Latencies observed at the host VMs routinely spiked slightly higher than 1 ms; however, overall latencies did not exceed 1.4 ms, showing that the Violin Memory 7300 FSP provided consistent performance without the increase in latency frequently observed with mixed workloads.

The Violin Memory 7300 FSP can deliver sub-millisecond latencies in a mixed-workload environment. However, slightly higher latencies were observed at the host server, especially from within the virtual machines. We observed an additional 250µs to 600µs or more of latency due to the storage area network (SAN) infrastructure and the physical-machine and virtual-machine host software stacks. These additional latencies, independent of the storage system, emphasize the need for streamlined host I/O configurations and flat storage networks. As the workload levels increased, the observed latency also increased.

The I/O profiles of the combined four real-world workloads were messy, with variable read-write ratios and average block sizes ranging from 8K to 64K. During the 4-workload test, the MSSQL OLTP workload contributed the largest proportion of the IOPS and throughput.
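The latency accounting above can be sketched as simple arithmetic: what the VM sees is roughly what the array delivers plus what the SAN and host stacks add. The sketch below uses representative values from this section (a 300µs array latency is an assumed stand-in for "low hundreds of microseconds"); it is an illustration, not a measurement model.

```python
# Rough model of where VM-observed latency comes from: latency measured
# at the array plus overhead added by the SAN, the physical host, and the
# VM software stack. Values (in microseconds) are representative figures
# taken from this report, not measurements of any specific run.

STORAGE_LATENCY_US = 300          # "low hundreds of microseconds" at the array
STACK_OVERHEAD_US = (250, 600)    # extra latency added outside the storage

def vm_latency_range_us(storage_us, overhead_range_us):
    """End-to-end latency seen inside the VM = array latency + stack overhead."""
    low, high = overhead_range_us
    return storage_us + low, storage_us + high

low, high = vm_latency_range_us(STORAGE_LATENCY_US, STACK_OVERHEAD_US)
print(f"expected VM-observed latency: {low / 1000:.2f}-{high / 1000:.2f} ms")
# -> expected VM-observed latency: 0.55-0.90 ms
```

This simple sum is consistent with the sub-1 ms host latencies observed for single workloads, and it shows why trimming the infrastructure overhead matters once the array itself is this fast.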
Test Setup

Servers and VMs

Two identical servers (DMRTK-SRVR-M and DMRTK-SRVR-N) were used to host the application workloads:

- Supermicro X9DRE-TF+/X9DR7-TF+ motherboards
- 2x Intel Xeon E v2 processors, 3.00 GHz (20 total cores, 40 total threads)
- 256 GB RAM
- Windows Server 2012 R2 with Hyper-V

Workload-generating VMs were distributed between the hypervisor servers. The first server hosted an MSSQL OLTP VM and a SharePoint VM. The second server hosted four web server VMs, one MSSQL OLTP VM, and one Exchange Server Jetstress VM. All guest operating systems (OS) were Microsoft Windows Server 2012 R2.

Thirty-six (36) LUNs were configured on the Violin Memory 7300 FSP for use by the four workloads. A single Fibre Channel switch was used for these tests; the best practice in a production datacenter would be to use redundant Fibre Channel switches. Default round-robin MPIO settings were used on all FC adapters.
The four IIS web server VMs were deployed with memory limited to 460 MB each to encourage more I/O to the LUNs instead of to the servers' cache. The web servers each had access to 20 processor cores, and each had a different IP address but identical webpages. The two MSSQL OLTP VMs were each allocated 32 GB of memory and 20 processor cores, but MSSQL was limited to 8 GB of memory to encourage more I/O. The Exchange VM had access to 8 GB of memory and 4 processor cores. The SharePoint VM had access to 32 GB of memory and 20 processor cores.

Pass-Through vs. NPIV

In preliminary tests, the LUNs were passed through to the workload-generator VMs. When passed through, the hypervisor could collect metrics on the LUNs that could be more reliable than those collected at the VM, due to issues that sometimes occur with VM clocks. LUN performance metrics were collected from both the hypervisor and the VM. These two sets of collected metrics were equivalent except for queue length and
latency. The VMs recorded higher latencies and queue depths than the hypervisor as the IOPS approached 100K. Hyper-V pass-through was suspected of causing the bottleneck, so the VMs were reconfigured to use N-Port ID Virtualization (NPIV). A maximum of four Fibre Channel (FC) adapters is allowed per VM in Hyper-V, so the hypervisor's physical FC ports were split evenly between 4 virtual SANs, and each VM's FC adapter was connected to a different virtual SAN. When the VMs used NPIV to access the Violin Memory 7300 FSP LUNs, the latencies and queue depths observed in the VMs came in line with what had been seen previously from the hypervisor. The use of NPIV removes the hypervisor from the I/O path on those ports, which improves performance at the VM, but also means that the hypervisor cannot measure the performance on those ports.

Workloads

Prior to running the 4-workload test, we performed a coordinated synthetic Iometer run using all VMs against all available logical drives mapped to each VM, using a 4K block size read-only workload. Each VM in turn ran against the storage LUNs mapped to it for five minutes. At the end, all VMs ran against all LUNs. The VMs used were optimized for their real-world workloads, not for synthetic workloads. Consequently, this synthetic test was not an optimal way to measure the full capability of the Violin Memory system; rather, it served to validate the test setup.

After the Iometer test, the 4-workload test was run for one hour, starting with the web server workload and adding new workloads incrementally every 15 minutes:

00:00  Web server (four instances): NeoLoad traffic generator started
15:00  SQL Server OLTP (two instances): OLTP transaction generator started
30:00  Exchange Server Jetstress started
45:00  Microsoft SharePoint/NeoLoad traffic generator started
60:00  Test completion
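The staggered schedule above is straightforward to automate. As a minimal sketch, the snippet below uses Python's standard sched module to fire each workload start at its offset; start_fn stands in for whatever actually launches a workload and is a hypothetical placeholder, not part of the harness Demartek used.

```python
import sched
import time

# (minutes from test start, workload started), per the schedule above
SCHEDULE = [
    (0,  "Web server (four instances, NeoLoad)"),
    (15, "SQL Server OLTP (two instances)"),
    (30, "Exchange Server Jetstress"),
    (45, "SharePoint/NeoLoad"),
]

def run_schedule(schedule, start_fn, seconds_per_minute=60.0):
    """Call start_fn(name) for each workload at its offset from test start."""
    s = sched.scheduler(time.monotonic, time.sleep)
    for offset_min, name in schedule:
        s.enter(offset_min * seconds_per_minute, 1, start_fn, (name,))
    s.run()  # blocks until the last workload has been started

# Dry run with 1 ms "minutes", just to demonstrate the start order:
order = []
run_schedule(SCHEDULE, order.append, seconds_per_minute=0.001)
print(order)
```

Scaling seconds_per_minute back to 60.0 would reproduce the one-hour, 15-minute-interval cadence used in the test.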
To test the web server and SharePoint workloads, Neotys NeoLoad was installed on the DMRTK-SRVR-M hypervisor. Each of the four web servers was assigned ten users to connect to it and download random webpages and images. Ten virtual users were also created to connect to SharePoint. The SharePoint users would log in to SharePoint to change display themes, browse the data store, and download or upload working files. The SharePoint virtual users were programmed to run more slowly than the web server users in order to mimic a SharePoint user accessing a version-controlled
document and then working on the document, rather than continuing to access more of the SharePoint database.

The MSSQL OLTP VMs each had their own transactional database, and each VM had 540 users executing transactions against its database. The Exchange VM used Jetstress to execute a workload against six Exchange databases of 500 mailboxes each, with a mailbox quota of 2,000 and a target of 0.2 IOPS per mailbox.

Even Distribution Across Controllers

The Violin Memory 7300 FSP gives each controller a separate pool from which to allocate LUNs. To make sure that each workload exercised both controllers, data was spread between the two controllers using two methods.

For the following data:

- SQL OLTP databases
- SQL tempdb databases
- Exchange database and logs
- SharePoint database and logs

two LUNs, one from each controller, were obtained and separately formatted into two separate logical drives. The data was split into separate files, and these files were evenly divided between the two logical drives.

For the following data:

- Web server data
- SQL tempdb logs
- SQL OLTP logs
- Hypervisor LUNs where the VMs were deployed

two LUNs, one from each controller, were striped in the OS to obtain one logical drive on which to put the data.
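The first method above amounts to alternating a workload's data files across two logical drives, one backed by each controller's pool. A minimal sketch of that placement follows; the drive letters and file names are hypothetical examples, not the actual test layout.

```python
# One logical drive per controller pool; drive letters are hypothetical.
DRIVES = ("E:\\", "F:\\")

def place_evenly(files, drives=DRIVES):
    """Alternate files across the drives so both controllers share the I/O."""
    return {name: drives[i % len(drives)] for i, name in enumerate(files)}

# e.g. a SQL OLTP database split into four data files:
layout = place_evenly(["oltp1.mdf", "oltp2.ndf", "oltp3.ndf", "oltp4.ndf"])
# -> two files land on each controller's logical drive
```

The second method, striping two LUNs into a single OS logical drive, achieves the same balance below the file system instead of above it, which suits data that lives in one file set (such as web content) rather than in splittable database files.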
Results

IOPS - Iometer Test vs. Real-World Workload

The synthetic Iometer test on the left side of the graph can be compared to the real-world tests on the right side. The Iometer test phases are represented on the graph as follows, from left to right:

1) Exchange VM (the leftmost purple bar): almost 400,000 IOPS from one VM running against multiple LUNs
2) SQL 1 VM (the yellow/orange/brown bar): almost 700,000 IOPS from one VM running against multiple LUNs
3) SharePoint VM (the grey bar): approx. 550,000 IOPS from one VM running against multiple LUNs
4) SQL 2 VM (the pink/red/rust bar): approx. 550,000 IOPS from one VM running against multiple LUNs
5) Web server 1 VM (the blue bar): over 100,000 IOPS, all from running against a single logical drive
6) Web server 2 VM (the teal bar): over 100,000 IOPS, all from running against a single logical drive
7) Web server 3 VM (the bright green bar): over 100,000 IOPS, all from running against a single logical drive
8) Web server 4 VM (the green/yellow bar): over 100,000 IOPS, all from running against a single logical drive
9) All 8 VMs running concurrently (the tall multicolor bar): almost 1,000,000 IOPS without tuning/optimizing the workload

The Iometer test shows that some VMs were capable of generating more IOPS than others. Part of this difference is due to some VMs having access to more memory or processor cores than others whose memory was deliberately limited to drive more I/O to the LUNs when running their real-world workloads. More importantly, some VMs had only one logical drive to work against, while multiple logical drives would be a more optimal configuration for the Iometer synthetic workload tests. This can be seen in the web server tests.

It should be noted that any limitation in IOPS for the individual web servers is a product of the Windows operating system and setup, not a limitation of the storage system, because we attempted to use a typical configuration and then maximize the IOPS of that configuration. The web server test did not involve any e-commerce, POST requests, database accesses, or the like; it was a read-only test that served webpages containing text and pictures. While not optimal, using a single logical drive for the web server data is a typical minimal IIS configuration and appropriate for a simple web server. To make sure that both storage pools were utilized, that logical drive is comprised of two LUNs, one from each storage pool.
Also, while again not an optimal configuration, in order to maximize IOPS and simulate a larger web server, web server VM memory was limited to reduce the caching of webpages in web server memory and push more I/O to the storage. A good balance between OS memory requirements and limited web server caching was easier to achieve on a simple web server. To scale up the web server workload, multiple web server VMs were deployed in lieu of making one big web server.

The web server tests are represented by bars of a single color because Iometer was running against only a single logical drive. The other tests are represented by multicolored bars, each color representing I/O from a different logical drive. It can be observed that a single logical drive's worth of IOPS, represented as a single color from a
multi-color band, is similar in size to the single logical drive's bar of I/O generated by the web server VMs.

The Iometer test was performed to check our setup and make sure it was capable of generating a satisfactory amount of I/O. The test verified that each VM was capable of generating more IOPS than was necessary to support the real-world workload that was to be run on it. For example, by comparing the height of the blue/green of the web servers in the synthetic test to the thin sliver of blue/green of the web servers seen at the base of the multi-workload test graph (on page 7), we can verify that our test setup and the Violin Memory 7300 FSP are capable of more IOPS than the real-world workload generates. Our setup is sufficient and not hindering performance. No reconfiguration of the VMs or servers was done between the synthetic and real-world tests, so we can also conclude that the real-world workload creates performance constraints that are not present in a simple synthetic small-block-size test. Similar comparisons can be made between the synthetic and real-world contributions of the SQL OLTP, Exchange, and SharePoint VM LUNs. In addition, when all VMs ran Iometer at the same time, almost one million IOPS were generated on the Violin Memory system, showing the test setup and system are capable of sustaining an IOPS level well above what the real-world tests generate.
Each workload has a peak IOPS that it generated, determined by read-write ratios, block sizes, number of data LUNs, effects of caching in memory, and likely other factors that we have not considered. That IOPS number did not come near the totals generated by the 4K read-only synthetic tests, which proves that more IOPS are not necessary to support a single real-world workload that has IOPS-limiting factors inherent to the workload itself. Rather, the value of having a high number of IOPS is the ability to support multiple different real-world workloads on a single all-flash storage array. Hence, a high number of IOPS offers greater levels of consolidation across the datacenter.

An example of this is the SQL OLTP workload, which was responsible for the majority of the IOPS on the system during the real-world tests. As we will see, the typical block sizes for SQL OLTP are the smallest and the closest to the 4K read-only size used by the synthetic test that achieved close to one million IOPS. Additionally, the SQL OLTP test was over 97% read, which is close to the 100% read used by the synthetic Iometer test. It is not a coincidence that the real-world workload whose I/O profile is closest to the synthetic 4K read-only I/O profile produced the most IOPS. It may be that the greater the difference between a workload's I/O profile and the synthetic I/O profile, the more the workload itself limits IOPS.
Real-World I/O Characteristics

The 4-workload test I/O was a messy mix of various block sizes and a combination of read/write accesses, as opposed to the 4K read-only workloads typically used to generate large numbers of IOPS on flash systems.

There are a few interesting things to note on the graph above. Even without the SharePoint workload running, the SharePoint VM in its idle state issued large-block writes at five-minute intervals. The actual number of writes was small, varying between 15 and 25 writes in all but one instance. The block sizes for these writes ranged between 24,914 bytes and 154,828 bytes, with an average of 111,562 bytes. Crawling was turned off during this time, and no reads were observed, so this activity is most likely due to logging or keep-alives. Once SharePoint traffic generation started, read accesses were seen and the write accesses changed in character.

Another interesting block size appeared when Exchange Jetstress began. The Jetstress VM issued reads of 222,788 bytes during the less than one minute it took to prepare for testing. After this, the I/O sizes were very regular. We observed a similar spike at the start of the SQL Server workloads.
Also of note is the short ramp-up period, during which the web server workload block sizes were slightly lower than they were during the rest of the test. Ignoring spikes, we can approximate the I/O profile for each workload once it had finished ramping up:

- Web server: 20KB reads, 100% read; contributed 1.61% of workload IOPS
- SQL OLTP: 8KB reads, 8KB writes, 97% read; contributed 95.66% of workload IOPS
- Exchange: 32.5K writes, 35K reads, 53% read; contributed 2.18% of workload IOPS
- SharePoint: 32K and 64K reads, large writes, with writes exceeding reads by a factor greater than 100; contributed almost nothing to workload IOPS

Our four workloads combined averaged 96% read with a 9K block size. None of these workloads generated an I/O profile matching the 4K read-only synthetic workload, which highlights the difference between real-world workloads and synthetic ones.
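The combined 96% read / 9K figure follows from weighting each workload's profile by its share of total IOPS. A back-of-the-envelope check is below; SharePoint's negligible contribution is omitted, and the Exchange block size is taken as roughly the midpoint of its 32.5K writes and 35K reads (an assumption for illustration).

```python
# (share of total IOPS, % read, approx. block size in KB), from the list above
PROFILES = [
    ("Web server", 0.0161, 100, 20.0),
    ("SQL OLTP",   0.9566,  97,  8.0),
    ("Exchange",   0.0218,  53, 33.75),  # midpoint of 32.5K writes / 35K reads
]

# IOPS-weighted averages of read percentage and block size
read_pct = sum(share * pct_read for _, share, pct_read, _ in PROFILES)
block_kb = sum(share * kb for _, share, _, kb in PROFILES)
print(f"combined profile: ~{read_pct:.0f}% read, ~{block_kb:.0f}K block size")
# -> combined profile: ~96% read, ~9K block size
```

The weighting makes clear why SQL OLTP, at nearly 96% of the IOPS, dominates the blended profile.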
Real-World I/O Throughput

Although the larger-block-size workloads did not contribute much to the overall IOPS of these tests, they produced a larger throughput for their small number of IOPS. The Exchange workload in particular produced a large ribbon of throughput at the top of the graph and contributed a significantly greater portion of the overall bandwidth than of the overall IOPS. The SharePoint workload was small enough overall that it did not make a significant impact on system throughput, despite the large block size of the SharePoint I/O. The SharePoint workload was programmed with sparser server accesses to mimic downloading and then working on a downloaded document; this programming contributed to the negligible throughput generated by SharePoint. It is interesting to note that despite its smaller block sizes, the OLTP workload still dominated the throughput.

Overall, each workload contributed to the throughput as follows:

- Web server: 3.26% of throughput
- SQL OLTP: 80.03% of throughput
- Exchange: 16.71% of throughput
- SharePoint: negligible percentage of throughput
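The pattern above follows from throughput being IOPS multiplied by block size, so a large-block workload moves far more bytes per I/O than its IOPS share suggests. A quick illustration using round numbers derived from this report (the IOPS figures are assumed approximations based on the ~300,000 total and the percentages listed earlier, not measured values):

```python
def throughput_mb_s(iops, block_kb):
    """Bandwidth implied by an IOPS figure at a given average block size."""
    return iops * block_kb / 1024.0

# SQL OLTP-like I/O: many small blocks (~95.66% of ~300K IOPS at 8KB)
sql = throughput_mb_s(287_000, 8.0)
# Exchange-like I/O: few IOPS (~2.18%) but ~34KB blocks
exchange = throughput_mb_s(6_500, 33.75)

print(f"SQL OLTP ~{sql:,.0f} MB/s, Exchange ~{exchange:,.0f} MB/s")
```

Even in this rough sketch, Exchange's ~2% share of the IOPS translates into a several-times-larger share of the bytes moved, which is why it shows up as a wide ribbon on the throughput graph while barely registering on the IOPS graph.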
Real-World I/O Latencies

Latencies were measured at the host VMs and at the storage. All host metrics were taken using Perfmon, and the storage metrics were taken using Violin Memory's Symphony management software, which can export performance data to a CSV or PDF file. Latencies could not be measured at the hypervisor due to the use of NPIV. The latencies measured at the storage were lower than those measured at the VMs due to the impact of the SAN infrastructure and the host software stacks.

Latency Representation

We can see that the difference between latency measured at the storage and latency measured at the VM host started at around 250µs with the web server test and jumped up
to around half a millisecond minimum as the MSSQL OLTP workload started. Both latency at the storage and latency at the VM host increased as each workload was added. All workload latencies were less than 1 ms as measured by the storage system. Single-workload latencies were below 1 ms at the host VM as well. Latencies of 1.2 to 1.4 ms at the host VM were seen when multiple workloads of different types were added.
Summary and Conclusion

Violin Memory 7300 FSP Supports Multiple Primary Storage Workloads Concurrently

It is not enough to say that an all-flash array can deliver sub-millisecond latencies and one million or more IOPS for a single, synthetic, read-only 4K block size workload. Multiple real-world workloads, with messy I/O profiles that include broad ranges of block sizes, varying read/write ratios, and non-uniform I/O patterns, will really show the mettle of a storage array. Our testing shows that deploying flash as primary storage supporting multiple mixed workloads on a single array is not just theoretically possible, but a viable option for enterprise datacenters.

Effective storage solutions for the enterprise datacenter require predictable performance that delivers consistent results as storage needs grow over time. Unleashing the full potential of flash to meet these needs requires storage solutions to achieve the highest level of integration and optimization, maximizing the value and flexibility of customers' investments in flash storage.

The Violin Memory 7300 FSP sustained sub-millisecond performance at the storage array while supporting four different real-world workloads. Instead of purchasing multiple storage arrays, one for each type of workload running in a datacenter, a single Violin Memory 7300 FSP could be purchased to do the work of four storage arrays.

The most current version of this report is available on the Demartek website.

Intel and Xeon are registered trademarks of Intel Corporation in the United States and/or other countries. Microsoft, SharePoint and Windows are registered trademarks of Microsoft Corporation. Violin Memory is a registered trademark and Flash Fabric Architecture, Flash Storage Platform and Symphony are trademarks of Violin Memory, Inc. Demartek is a registered trademark of Demartek, LLC. All other trademarks are the property of their respective owners.
VMware vsphere 6 and Oracle Database Scalability Study Scaling Monster Virtual Machines TECHNICAL WHITE PAPER Table of Contents Executive Summary... 3 Introduction... 3 Test Environment... 3 Virtual Machine
Best Practices Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008 Installation and Configuration Guide 2010 LSI Corporation August 13, 2010
Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_WP_ 20121112 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges... 3 SSD
Maximizing VMware ESX Performance Through Defragmentation of Guest Systems Presented by July, 2010 Table of Contents EXECUTIVE OVERVIEW 3 TEST EQUIPMENT AND METHODS 4 TESTING OVERVIEW 5 Fragmentation in
Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published March, 2011 EMC believes the information in
Load DynamiX Storage Performance Validation: Fundamental to your Change Management Process By Claude Bouffard Director SSG-NOW Labs, Senior Analyst Deni Connor, Founding Analyst SSG-NOW February 2015 L
Design and Sizing Examples: Microsoft Exchange Solutions on VMware Page 1 of 19 Contents 1. Introduction... 3 1.1. Overview... 3 1.2. Benefits of Running Exchange Server 2007 on VMware Infrastructure 3...
Sitecore Experience Platform 8.1 Performance White Paper Rev: March 11, 2016 Sitecore Experience Platform 8.1 Performance White Paper Sitecore Experience Platform 8.1 Table of contents Table of contents...
Muse Server Sizing 18 June 2012 Document Version 0.0.1.9 Muse 188.8.131.52 Notice No part of this publication may be reproduced stored in a retrieval system, or transmitted, in any form or by any means, without
Technical White Paper Report Technical Report Benchmarking Cassandra on Violin Accelerating Cassandra Performance and Reducing Read Latency With Violin Memory Flash-based Storage Arrays Version 1.0 Abstract
White Paper EMC XtremSF: Delivering Next Generation Storage Performance for SQL Server Abstract This white paper addresses the challenges currently facing business executives to store and process the growing
WHITE PAPER Improve Business Productivity and User Experience with a SanDisk Powered SQL Server 2014 In-Memory OLTP Database 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table of Contents Executive
Optimizing Cloud Performance Using Veloxum Testing Report on experiments run to show Veloxum s optimization software effects on Terremark s vcloud infrastructure Contents Introduction... 3 Veloxum Overview...
Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems Applied Technology Abstract This white paper investigates configuration and replication choices for Oracle Database deployment with EMC
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information
openbench Labs Executive Briefing: April 19, 2013 Condusiv s Server Boosts Performance of SQL Server 2012 by 55% Optimizing I/O for Increased Throughput and Reduced Latency on Physical Servers 01 Executive
VMware Best Practice and Integration Guide Dot Hill Systems Introduction 1 INTRODUCTION Today s Data Centers are embracing Server Virtualization as a means to optimize hardware resources, energy resources,