Enterprise Storage Options for High Performance




Test Validation
Enterprise Storage Options for High Performance
Author: Russ Fellows
October 28, 2014
Enabling you to make the best technology decisions
© 2014 Evaluator Group, Inc. All rights reserved.

Table of Contents

Executive Summary
Evaluation Process
Enterprise Storage
Validation Overview
    Test Objectives
    Test Cases
Evaluation of HGST FlashMAX
    Test Workload
Test Findings
    Overview
    Test 1 - VMmark Workload with Thick Provisioning
        Test
        Results and Issues
        Relevance in Enterprises
    Test 2 - VMmark Workload with Thin Provisioning
        Test
        Results and Issues
        Relevance in Enterprises
Analysis of Results
HGST FlashMAX Features
Evaluation Summary
    Performance
    Issues and Concerns
    Final Observations
Appendix A - Configuration Overview
Appendix B - VMmark Comparison References
Appendix C - HGST FlashMax Performance
Appendix D - IOmark-VM Overview

Executive Summary

Solid state storage has dramatically changed the way enterprise application owners and IT managers evaluate storage products. Until recently, server-based flash storage carried significant trade-offs compared to SAN-attached storage: high availability (HA) options were limited or unavailable, and storage capacity was dedicated to individual servers. However, server-based flash storage can deliver very high price/performance, often exceeding external storage. As a result, IT decision makers have been forced to choose between highly available, shared-access storage and high-performance server-based flash storage. The majority of these decisions have favored external shared storage, since its overall value and features are better suited to most enterprise applications.

With the advent of new software designed to provide shared access and high availability to server-based flash storage cards, IT professionals now have another option. For enterprises that demand high performance along with shared access, pooled capacity and high availability, the HGST FlashMAX hardware and software products are a strong option to consider.

In this lab evaluation, we test HGST FlashMAX cards with HGST Virident Space software to verify their ability to provide high availability and pooled access to storage across multiple servers using server-based PCIe flash storage cards. A real-world application set known as VMmark is used as the test workload, enabling IT users to compare the performance results against other storage systems running the same workloads.

Evaluation Process

Testing occurred in September 2014 and focused on performance when running enterprise application workloads, along with other enterprise storage characteristics. The test cases were designed to recreate actual enterprise use cases in order to produce an accurate assessment of the configurations used in enterprise environments.
The following aspects were the primary evaluation criteria:

o Performance of HGST FlashMAX with Share running mixed enterprise application workloads
o Ability to provide pooled, HA access to storage across all server nodes

The storage utilized was a high-speed, server-side SSD flash pool with 10 Gb Ethernet attachment between servers. This provided a test bed that removed most hardware performance bottlenecks, reducing the impact of the servers, eliminating the SAN, and maximizing the impact of the storage system on the performance results.

Evaluator Group performed the testing in the HGST labs, using HGST equipment. All test preparation and testing was performed exclusively by Evaluator Group without assistance or involvement of HGST personnel. This report details the testing process, equipment and other findings.

Enterprise Storage

Defining the required elements that must be present in enterprise-class storage is somewhat subjective. Additionally, features and capabilities tend to increase over time; products today include more features than they did several years ago. At a minimum, the following features are required for enterprise storage:

o High availability of each individual component
o Fail-over capabilities, or features that enable continued operations during a component failure
o No single points of failure, enabling continuing operations
o Serviceability, meaning it is possible to replace or service failed components without taking the entire system offline

Additional features and capabilities are often desirable for enterprise storage and typically include:

o Point-in-time data protection, known as snapshots
o Local data copy capabilities, known as clones, mirroring or local replication
o Remote data copy capabilities, known as remote replication

One of the limitations with PCIe-based storage and other non-volatile memory technologies has been the lack of enterprise storage capabilities. In particular, shared host access is inherently difficult with storage designs that are physically located within a server enclosure.

Evaluator Group Comments: It is important to utilize storage that delivers the reliability and availability characteristics necessary for the application it supports. To date, PCIe-based storage has provided high performance, but with limited enterprise-class features. Historically, PCIe and other server-based storage have had single points of failure, are not easily serviced and do not provide shared access; these limitations all restrict their use as enterprise-class storage devices.

Validation Overview

Test Objectives

The testing was primarily quantitative in nature, measuring the performance of the HGST Virident Space software solution. A qualitative measure was performed on the ability to access the storage and validate the enterprise availability features of the tested solution. This report highlights the results obtained from our testing. Evaluator Group commentary provides context and a narrative assessment of the results as experienced by Evaluator Group personnel. The results of the tests are outlined in the remainder of this report. Configuration details for all the tests, including hardware and software environments, are provided in the appendices.

Test Cases

The testing occurred at the HGST labs, using HGST equipment. All test setup and test executions were performed exclusively by Evaluator Group personnel, with assistance from HGST only when configuration issues occurred. The HGST cards and HGST Virident Space software used for testing were supplied by HGST for this evaluation.

1. Test a clustered HGST configuration to determine performance
   a. Utilize VMmark application workloads generated using IOmark-VM
   b. Find the maximum workload as limited by either performance or capacity
   c. Capacity will utilize thick provisioned storage capacity only
2. Performance-test the HGST cluster running VMmark workloads generated using IOmark-VM
   a. Utilize VMmark application workloads generated using IOmark-VM
   b. Find the maximum workload as limited by either performance or capacity
   c. Capacity will utilize thin provisioned storage capacity
3. Validate the ability to access the HGST shared, pooled capacity from any node

Evaluation of HGST FlashMAX

Several test scenarios were run, as detailed in the following section. For all tests, a similar environment was utilized, with multiple virtual machines (VMs) running application workloads. The VMs ran under the KVM hypervisor included with RedHat Enterprise Linux version 6.4. Each virtual machine accessed multiple logical volumes. All logical volumes resided on a single volume pool, with one pool contribution from each physical server. The HGST Share software was used to create a shared, highly available pool of 8.8 TB raw (4.4 TB usable) space that spanned all four nodes. The Linux LVM logical volume manager was used to create volume groups and logical volumes on the HA pool. These volumes were then allocated to the individual virtual machines running the application workloads.
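The pool arithmetic described above can be sketched as a quick check. This is a minimal illustration only; the 2-way redundancy factor is inferred from the report's 8.8 TB raw / 4.4 TB usable figures, not stated explicitly by HGST.

```python
# Capacity of the shared HA pool described in the report.
# Assumption: usable capacity is half of raw, consistent with the
# 8.8 TB raw / 4.4 TB usable figures (i.e., 2-way redundancy).
NODES = 4
CONTRIBUTED_PER_NODE_TB = 2.2   # per-node contribution shown in Figure 1
REDUNDANCY_FACTOR = 2           # inferred 2-way mirroring across nodes

raw_tb = NODES * CONTRIBUTED_PER_NODE_TB
usable_tb = raw_tb / REDUNDANCY_FACTOR

print(f"raw: {raw_tb} TB, usable: {usable_tb} TB")
```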

A logical diagram of the storage allocation and pooling is shown below in Figure 1.

[Figure 1: Logical Storage Configuration — VMs on each host access HA LUNs and HA volumes backed by a single shared HA pool of 8.8 TB, built from four HGST FlashMAX contributions of 2.2 TB each.]

Test Workload

The workload used to test the cluster consisted of multiple server application instances, running the application portion of the VMmark workload. The workloads were run using the IOmark-VM [1] storage benchmark tools, which are able to generate multiple VMmark application and hypervisor workloads. An overview of the logical configuration is shown above in Figure 1 and detailed more extensively in the appendices.

[1] IOmark-VM is an application workload tool and storage benchmark; see Appendix D for details.

Test Findings

Overview

In the Analysis of Results section below, we compare the HGST results to other published results. First, in order to put the performance achieved by the HGST FlashMAX PCIe flash storage with HGST Virident Space software in context, we present a listing of several currently published VMmark results. Table 1 below shows selected published results for reference.

VMmark Tiles   Server                                            Storage Utilized
50             Fujitsu PRIMEQUEST 2800E (2 x 8-way partitions)   6 x Fusion ioDrive PCIe cards, plus 3 x RAID controllers and 33 SSD disks total
24             HP BL660c Gen8                                    1 x HP 3PAR 7450 (4 controllers w/ 72 SSDs)
16             Cisco UCS B260 M4                                 4 x UCS Invicta Storage Nodes (all-flash array)
14             HP DL380p Gen8                                    6 x Fusion ioN PCIe flash cards
10             Dell R720                                         8 x Micron PCIe flash cards

Table 1: Selected VMmark Results (Source: VMware [2])

As shown in the published results above, several configurations utilized PCIe flash cards as storage media. However, those results utilized a minimum of 6 cards, with others using 8 cards or a combination of PCIe cards and SSD drives. External, SAN-attached storage systems were also used, as seen in the configurations using UCS Invicta storage and the HP 3PAR 7450 all-flash array.

Evaluator Group comments: The server infrastructure required to drive application workloads beyond 28 tiles is significant, requiring substantial CPU and memory resources. The storage performance requirements are also substantial, as evidenced by the storage utilized for the reported results.

Analyzing all published VMmark results and plotting their values makes it evident that the results have a specific distribution. The distribution follows a log-normal shape, with a significant number of results close to zero and progressively fewer results as scores increase. Figure 2 below shows the frequency of results at each number of tiles — a frequency distribution, or histogram, of published VMmark scores.

[2] Published VMmark results are available at VMware's website; for details see Appendix B.
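The log-normal shape described above can be illustrated with a short sketch. This is illustrative only: the parameters below are invented for demonstration and are not fit to the actual published VMmark data.

```python
# Illustrative sketch: a log-normal distribution concentrates most of its
# mass at small values with a long right tail, matching the report's
# description of published VMmark tile counts. Parameters are made up.
import random
import statistics

random.seed(42)
# random.lognormvariate(mu, sigma) draws exp(Normal(mu, sigma))
samples = [random.lognormvariate(2.5, 0.6) for _ in range(500)]

mean = statistics.mean(samples)
median = statistics.median(samples)
below_mean = sum(s < mean for s in samples) / len(samples)

# The long right tail pulls the mean above the median, so most
# results sit below the mean — the shape seen in Figure 2.
print(f"mean={mean:.1f} tiles, median={median:.1f} tiles")
print(f"{below_mean:.0%} of results fall below the mean")
```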

[Figure 2: Frequency of VMmark Scores — histogram of published results; y-axis: frequency (0-45), x-axis: VMmark tiles (10 to 62)]

Evaluator Group comments: As shown in Figure 2, the majority of reported results are below 20 VMmark tiles, with scores distributed along a log-normal distribution.

Test 1 - VMmark Workload with Thick Provisioning

This test utilized the IOmark-VM benchmark to run a VMmark application workload across multiple HGST FlashMAX cards with HGST Virident Space. The HGST Virident Space software enabled the capacity of all nodes to be combined into a highly available pool, as depicted previously in Figure 1. The storage capacity was completely allocated with traditional, thick provisioned volumes.

Test

The goal of this test was to measure the performance of the HGST Virident Space storage pool while running applications. As stated, the workload utilized was the application portion of VMmark, generated using the IOmark-VM tool. Response time measurements are provided for each I/O request, with an overall average value along with response time values for each application set. The hypervisor portion of the benchmark was not run, due to the requirement of a VMware hypervisor and vCenter controller to perform the hypervisor workload tests. The workload was shared, with each physical host running VMmark / IOmark-VM workloads against a specific set of HA volumes from the shared pool.

Results and Issues

Across all four hosts, a total of 28 tiles were run for this test. The performance of the storage was not a limiting factor at this workload level; rather, capacity was the limiting factor when thick provisioned storage was utilized. With a total of 4.4 TB usable, 28 was the maximum number of full IOmark-VM / VMmark workload tiles able to run at this capacity. The performance showed average response times of approximately 1.32 ms, with a standard deviation of 1.13 ms; over 70% of response times were less than 2.5 ms at this workload.

Relevance in Enterprises

A workload of 28 VMmark tiles is the 15th best out of 84 total published results, while requiring significantly less complex and costly storage than the top results.

Evaluator Group comments: A performance of 28 tiles signifies a very strong result. In our testing, HGST FlashMAX storage provided sufficient storage performance and capacity to meet the application demands, with better power, space and cooling efficiencies than other storage configurations that provided this performance level.

Test 2 - VMmark Workload with Thin Provisioning

This test utilized the IOmark-VM benchmark to run a VMmark application workload across multiple HGST FlashMAX cards with HGST Virident Space. The HGST Virident Space software enabled the capacity of all nodes to be combined into a highly available pool, as depicted previously in Figure 1. The storage capacity was allocated using thin provisioned volumes in the LVM2 volume manager in order to overprovision storage capacity.

Test

The goal of this test was to measure the performance of the HGST Virident Space storage pool while running applications. As stated, the workload utilized was the application portion of VMmark, generated using the IOmark-VM tool. Response time measurements are provided for each I/O request, with an overall average value along with response time values for each application set. The hypervisor portion of the benchmark was not run, due to the requirement of a VMware hypervisor and vCenter controller to perform the hypervisor workload tests. The workload was shared, with each physical host running VMmark / IOmark-VM workloads against a specific set of HA volumes from the shared pool.

Results and Issues

Across all four hosts, a total of 48 tiles were run for this test. The performance of the storage was not a limiting factor at this workload level. The performance showed average response times of approximately 2.26 ms, with a standard deviation of 2.03 ms; over 70% of response times were less than 4.3 ms at this workload.

Relevance in Enterprises

A workload of 48 VMmark tiles is the 4th best out of 84 total published results, while requiring significantly less complex and costly storage than the top three results.

Evaluator Group comments: A performance of 48 tiles represents a very strong result. The top three results did not use enterprise-class storage configurations; they used highly customized storage with multiple SSD drives, RAID controllers and 6 or more PCIe flash cards in external servers. By comparison, the 4 HGST FlashMAX storage cards provided sufficient storage performance to meet the application demands, with better power, space and cooling efficiencies. The workloads used are the application portion of VMmark, driven by the IOmark-VM storage benchmark tool. Because the VMmark hypervisor workloads are dependent upon a VMware hypervisor, the non-application portion of the workload was not run on this configuration.
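The effect of thin provisioning can be sketched with rough arithmetic. This is a hedged estimate, not a figure from the report: it assumes the approximately 8.5 TB cited later for the 28-tile workload reflects provisioned capacity, and scales it linearly per tile.

```python
# Rough overprovisioning arithmetic for the thin provisioned test.
# Assumption (inferred, not stated in the report): ~8.5 TB of
# provisioned capacity corresponds to 28 tiles.
TB_PER_TILE = 8.5 / 28          # ~0.30 TB provisioned per tile (assumed)
USABLE_TB = 4.4                 # usable pool capacity from the report

provisioned_tb = 48 * TB_PER_TILE      # capacity the 48 tiles provision
ratio = provisioned_tb / USABLE_TB     # implied overprovisioning factor

print(f"48 tiles provision ~{provisioned_tb:.1f} TB on {USABLE_TB} TB usable "
      f"(~{ratio:.1f}x overprovisioned)")
```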

Analysis of Results

For comparison, the following chart shows the cumulative percentage of published scores that are less than or equal to any given score. The HGST results are denoted with red markers, at their respective 28-tile and 48-tile scores.

[Figure 3: HGST FlashMAX Performance vs. Others (HGST in red) — cumulative percentage of scores at or below a given tile count: 48% at 10 tiles, 83% at 28, 90% at 30, 96% at 40, 96% at 48, 100% at 62]

Figure 3 above shows that for a score of 28 VMmark tiles, the HGST results are better than 83% of reported results. Similarly, the 48-tile score indicates that 96% of reported results were less than the HGST solution.

The HGST FlashMAX cluster of four cards, running in four different servers with an HA pool enabled by HGST Virident Space, provided low response times even as relatively high workloads were added to the system. The response times when running a 28-tile VMmark application workload averaged 1.32 milliseconds; a 48-tile application workload had a higher average response time of 2.26 ms.

Evaluator Group Comments: The testing was limited by storage capacity rather than performance. Reported VMmark results typically utilize thick provisioned storage. The 28-tile workload utilized approximately 8.5 TB. The 48-tile workload utilized thin provisioning to achieve higher capacity utilization, allowing for additional tiles.
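The percentile claims above can be cross-checked with a short calculation. This is a sketch: the ranks (15th and 4th of 84) come from the "Relevance in Enterprises" sections, and ties between published scores are ignored.

```python
# Percentile rank of the HGST results among published VMmark scores.
# Ranks (15th and 4th of 84) are taken from the report; ties ignored.
TOTAL_RESULTS = 84

def percent_beaten(rank: int, total: int = TOTAL_RESULTS) -> float:
    """Fraction of published results scoring below a result at `rank`."""
    return (total - rank) / total

print(f"28 tiles (rank 15): better than {percent_beaten(15):.0%}")  # ~82%
print(f"48 tiles (rank 4):  better than {percent_beaten(4):.0%}")   # ~95%
```

These land within about a point of the 83% and 96% read off Figure 3; the small difference comes down to whether ties and the result itself are counted in the cumulative curve.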

HGST FlashMAX Features

The HGST FlashMAX PCIe cards provide high-performance, non-volatile storage. The additional HGST software features that add pooled, highly available storage provide significant value beyond the raw performance of PCIe-based flash storage.

Evaluator Group comments: The HGST Virident Enterprise suite of products is a significant value for customers evaluating enterprise storage options. Without this software, the FlashMAX PCIe cards provide high-performance storage, but without pooling or high availability. Combining the FlashMAX cards with Virident software provides enterprise reliability along with the high performance that PCIe access to flash storage can deliver.

The HGST FlashMAX cards with Virident Enterprise software provide a number of highly available storage options. Specifically, the HGST Virident Enterprise software products for FlashMAX cards include:

o HGST Virident HA
o HGST Virident Space
o HGST Virident ClusterCache

Supported environments and applications include a wide variety of operating systems, hypervisors, databases and applications. For a complete list of all supported OSs, hypervisors and applications, please refer to the HGST website. [3]

As outlined previously, enterprise storage has several characteristics, starting with high availability through no single points of failure. By definition, a single PCIe card is a single point of failure. Products that cannot leverage multiple PCIe cards across multiple server platforms do not provide enterprise storage.

[3] http://www.hgst.com/software/hgst-virident-solutions

Evaluation Summary

This evaluation was designed primarily to test the performance capabilities of HGST FlashMAX cards while using HGST Virident Space software to enable HA and pooled storage access. Overall, the HGST storage solution provided very high performance for a number of enterprise applications running in a virtualized environment. The added HA and pooled access features significantly expand the use cases for PCIe flash storage beyond caching devices, or other environments where outages or stranded storage access are not a concern.

Performance

Performance of real-world workloads was the primary focus of the testing. Beyond performance, two other factors are important considerations: cost and enterprise availability. Cost is always a consideration, with some environments being much more price sensitive than others; while cost is not always the most important component in a decision, it is often quite important. The other element with a strong influence is the enterprise reliability and availability of a storage solution. Without enterprise-level features, which include support, most organizations will not consider a storage solution regardless of price and performance. As shown earlier, many of the highest performing storage configurations are non-standard solutions that are unlikely to be supported and do not provide enterprise-level availability. With these considerations, the 4-card HGST FlashMAX configuration with HGST Virident Space is currently the highest performing solution once cost, enterprise-class availability and support are taken into account.

Issues and Concerns

The HA capabilities were not exercised during performance testing. It was verified that the entire storage pool created by the Virident Space software was available to each host. The cold-failover process was verified and is documented in Appendix A.

For use in any environment where high availability is required, users are encouraged to test the implementation thoroughly before deployment. Linux, and RedHat in particular, provides a wide range of options for virtualization, volume management, clustering and high availability. While most of these tools work quite well, they often lack integration or coordination. Thus, clients intending to utilize these tools for enterprise applications are advised to research and test each tool and capability thoroughly.

A limitation of the Space software is its current lack of support for guest OSs other than Linux. HGST has announced planned support for Windows, Hyper-V and VMware hypervisors. The addition of these environments will remove the biggest limitation that exists presently.

Final Observations

Many applications can benefit from high-performing storage, particularly database and messaging applications, and nearly any application running in a virtual environment. Until recently, IT organizations were faced with making trade-offs to attain high performance, particularly when considering storage dedicated to one or two physical servers. Although server-based storage has space, simplicity and cost advantages, the lack of shared access and limited HA capabilities make it untenable for many applications.

The combination of HGST FlashMAX PCIe flash cards and HGST Virident Space software enables shared, pooled access to storage across multiple nodes and alleviates one of the biggest obstacles to using PCIe-based flash storage: the loss of access to storage upon a host failure. With high performance, enterprise HA capabilities and good price/performance results, the HGST FlashMAX and HGST Virident Space solution should be a consideration for users looking to accelerate their applications on Linux.

Appendix A - Configuration Overview

Application Environment, OS and Hypervisors

o The operating system utilized was RedHat version 6.4 (kernel version 2.6.32), which includes the KVM, QEMU and LibVirt tools
o KVM hypervisor included with RedHat Enterprise 6.4
o Guest VMs running application workloads utilized Ubuntu 12.04 x64

HGST FlashMAX and Virident Software

o HGST Virident Space version 2.0 software was utilized
o 4 HGST FlashMAX II PCIe cards (1 per host, each with 4.8 TB capacity)

Servers

A total of 5 servers were used: 4 for workloads and one for monitoring.

o Cisco UCS C240 M3 performance servers (physical hosts #1 - #4): 16 CPU cores, 64 GB RAM, 2 x 10 Gb NIC connections to LAN
o Monitoring server (physical host #5): 4 CPU cores, 32 GB RAM, dual 1 Gb NICs

Networking

o LAN: a 10 Gb network was used between the UCS C240 hosts as the HA mechanism for the HGST FlashMAX cards with Space software

Storage

o A total of 4 HGST FlashMAX PCIe cards, as previously documented

Cluster and HA Support

Clustering at the application, host OS or hypervisor level was not utilized during testing. Using shared storage in a Red Hat OS or KVM cluster requires the cluster logical volume manager daemon (clvmd) or the High Availability Logical Volume Management agents (HA-LVM). Shared access to the same logical volumes or underlying volume groups will result in data corruption without these facilities.

During testing, we utilized local LVM2 access to a single physical device per host. In this way, data was not shared while hosts were active. It is possible to deactivate a volume group and then activate it on another host in case of failure. However, for true HA or shared access, the capabilities specified above must be utilized.

Access for a Failed Node

The RedHat Linux procedure to access a shared volume from another host when not using HA or clustering is as follows:

o Rescan for SCSI devices on the new system (if not previously visible)
o Run an LVM pvscan so LVM recognizes the new devices (if not previously visible)
o If using the device-mapper-multipath software, run multipath -r to rescan SCSI devices
o Run vgimport on the new host to add the volume group to this host
o Enable the volume group on the new host with vgchange -ay
o Mount the logical volumes in the volume group on the new host
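The recovery steps above can be sketched as an ordered command sequence. This is a hypothetical helper for illustration only: the volume group name, logical volume name and mountpoint are placeholders, and the commands are printed as a dry run rather than executed.

```python
# Sketch of the failed-node recovery sequence from Appendix A.
# "vg_data", "lv_app" and "/mnt/app" are placeholder names; on a real
# system these commands would be run as root, not printed.
def recovery_commands(vg: str) -> list[str]:
    return [
        # Rescan all SCSI hosts so the device becomes visible
        "for h in /sys/class/scsi_host/host*; do echo '- - -' > $h/scan; done",
        "pvscan",                 # let LVM discover the newly visible devices
        "multipath -r",           # only if device-mapper-multipath is in use
        f"vgimport {vg}",         # import the volume group on the new host
        f"vgchange -ay {vg}",     # activate its logical volumes
        f"mount /dev/{vg}/lv_app /mnt/app",  # placeholder LV and mountpoint
    ]

for cmd in recovery_commands("vg_data"):
    print(cmd)
```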

Appendix B - VMmark Comparison References
Several VMmark results were provided in Table 1. The specific details of the equipment infrastructure, including the storage used, may be seen in the published reports, listed as follows:
Fujitsu PRIMEQUEST 2800E @ 50 tiles: http://www.vmware.com/a/assets/vmmark/pdf/2014-04-01-Fujitsu-PRIMEQUEST2800E-50.pdf
Dell R620 @ 30 tiles: http://www.vmware.com/a/assets/vmmark/pdf/2014-09-30-Dell-R620.pdf
HP BL660c Gen8 @ 24 tiles: http://www.vmware.com/a/assets/vmmark/pdf/2014-04-15-HP-ProLiantBL660cG8.pdf
Cisco UCS B260 M4 @ 16 tiles: http://www.vmware.com/a/assets/vmmark/pdf/2014-03-04-Cisco-B260M4.pdf
HP ProLiant DL380 Gen8 @ 14 tiles: http://www.vmware.com/a/assets/vmmark/pdf/2013-09-10-HP-ProLiantDL380pG8.pdf
Dell R720 @ 10 tiles: http://www.vmware.com/a/assets/vmmark/pdf/2013-10-01-Dell-R720.pdf

Appendix C - HGST FlashMAX Performance
Figure 4 below shows the performance of the HGST FlashMAX, as measured by response time when running two different workload levels. As shown, response times remain substantially below the levels required by IOmark-VM, as outlined in the next section, Appendix D.

Response Time (ms)   Avg. Resp.   Read Resp.   Write Resp.   Std. Dev.
28 Tiles             1.32         0.70         1.43          1.13
48 Tiles             2.26         1.22         2.43          2.03

Figure 4: HGST FlashMAX response times for IOmark-VM workloads

Appendix D - IOmark-VM Overview
The ability to recreate a known workload is important when comparing a system against potential alternatives. Establishing a reference or benchmark workload enables system vendors, as well as resellers and IT users, to compare several systems using a known workload. Specifically, the IOmark-VM benchmark recreates a storage workload that typically occurs in a virtual infrastructure environment. The workload is non-synthetic and recreates several applications that are commonly found in virtualized server environments. IOmark-VM uses the identical workloads generated by the VMmark benchmark. However, the workload is more easily scaled, is repeatable, and is designed to test storage exclusively, rather than CPU, memory and other server operations.

[Figure 1 depicts a single application set running on a hypervisor over the storage system under test: a mail server and a standby server (Windows 2008), an Olio web server and Olio database (64-bit Linux), and three DVD Store web servers plus a DVD Store database (64-bit Linux).]

Figure 1: IOmark-VM Conceptual Overview

IOmark-VM Measurements and Use
Datacenters running applications in a virtual infrastructure contain multiple workloads running on a virtualization platform. Often multiple physical servers share the resources of a single storage system, which provides primary storage for both the virtual machine OS and applications. Several benchmarks have been developed that focus on the server aspects of infrastructure, including the CPU, memory and I/O bandwidth capabilities. However, there has been no corresponding development of standardized workloads designed to drive storage for these application environments. By establishing a set of standard applications and capturing their I/O streams, it is possible to recreate application-based storage workloads for these complex environments. IOmark-VM is designed around these concepts and, as such, is the first benchmark designed to accurately generate application workloads for storage systems, enabling direct comparison of storage system configurations and their ability to support a specific number of applications.

Additionally, IOmark-VM recognizes that administrative functions common in virtual infrastructures may have a significant impact on storage. For this reason, several hypervisor-based functions are part of the IOmark-VM workload. These additional operations include: cloning a virtual machine, booting a VM and updating software, and migrating a virtual machine from one storage volume to another.

How IOmark-VM Operates
IOmark-VM uses the concept of workload replay. I/O streams are captured from actual running applications and then replayed so that the exact sequence of I/O commands is issued. This creates a workload that is indistinguishable from an actual workload to the system under test, while being reproducible and requiring fewer resources.
Additionally, the test environment is less expensive and faster to create, since actual applications are not required. Because CPU and memory are not consumed running applications, a much higher I/O workload may be generated with a given set of server resources than is possible using native applications. This ratio is typically 10:1, but may vary. In Figure 1 on the previous page, a single set of applications is depicted running on a single physical host in a virtual infrastructure. To scale up the workload on a storage system, additional application sets may be added to the same, or to other, physical hosts. The only limitation to the scale of the test is the physical infrastructure supporting the workload: sufficient CPU, memory and I/O capability must be available to run additional workload sets.
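The replay concept can be illustrated with a minimal sketch. This is not the actual IOmark-VM tool; the trace record format and function name below are hypothetical, chosen only to show how captured I/Os can be reissued in order while preserving their relative timing.

```python
import io
import time

# Each trace record: (offset_seconds, operation, byte_offset, length).
# This record format is hypothetical, for illustration only.
TRACE = [
    (0.00, "write", 0,      4096),
    (0.01, "read",  8192,   4096),
    (0.03, "write", 655360, 8192),
]

def replay(trace, device, speedup=1.0):
    """Reissue captured I/Os in order, preserving relative timing.

    Returns per-I/O service times (seconds), so response-time
    percentiles can be computed afterwards.
    """
    latencies = []
    start = time.monotonic()
    for t, op, offset, length in trace:
        # Wait until this record's scheduled issue time.
        delay = t / speedup - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        issued = time.monotonic()
        device.seek(offset)
        if op == "read":
            device.read(length)
        else:
            device.write(b"\0" * length)
        latencies.append(time.monotonic() - issued)
    return latencies

# Example: replay against an in-memory stand-in for a block device.
lat = replay(TRACE, io.BytesIO(bytearray(1 << 20)))
print(len(lat))  # one service time per trace record
```

Because the replay engine only issues I/O, not application logic, many such streams can run concurrently on one host, which is the basis for the roughly 10:1 consolidation described above.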

Unlike artificial workload generation tools, IOmark-VM recreates accurate read vs. write and random vs. sequential I/O requests. IOmark-VM also recreates accurate access patterns, enabling storage cache algorithms to work properly. Finally, IOmark-VM maintains an accurate ratio of performance to capacity as workloads are scaled, ensuring that storage performance is measured with respect to storage capacity and producing results applicable to IT users.

Benchmark Application Workload Set
A concept used for testing multiple applications is that of application sets, also known as tiles. A set of 8 applications is run together, along with several common hypervisor infrastructure operations. To scale the workload up and place a higher load on the storage system, additional application sets are run. For official benchmark results, application sets are always run together, along with a defined set of infrastructure operations. The specific applications comprising a workload set are detailed below in Table 8.

Application                    Guest OS                                               Storage Capacity / Instance
Microsoft Exchange 2007        Microsoft Windows Server 2008, Enterprise, 64-bit      80 GB
Olio Database                  SuSE Linux Enterprise Server 11, 64-bit                14 GB
Olio Web Server                SuSE Linux Enterprise 11, 64-bit                       80 GB
Idle Windows Server            Microsoft Windows Server 2003 SP2 Enterprise, 32-bit   10 GB
DVD Store Database             SuSE Linux Enterprise 11, 64-bit                       45 GB
DVD Store Web Server 1         SuSE Linux Enterprise 11, 64-bit                       10 GB
DVD Store Web Server 2         SuSE Linux Enterprise 11, 64-bit                       10 GB
DVD Store Web Server 3         SuSE Linux Enterprise 11, 64-bit                       10 GB
Hypervisor Clone & Deploy      N/A - VMware vCenter required                          15 GB
Hypervisor Storage Migration   N/A - VMware vCenter required                          30 GB
                                                                                      Total = 305 GB

Table 8: IOmark-VM Application Overview

The total capacity required for each set of applications is approximately 305 GB. Each additional workload set requires an additional 305 GB of capacity.

Workload Details
The Olio application consists of a database server and a web client running on different virtual machines with a pre-loaded data set. For more details on Olio see: http://incubator.apache.org/olio/
The DVD Store application consists of a single database server along with three web clients, each running on a different virtual machine, using a predefined workload and data set. For more details on the publicly available DVD Store database application see: http://linux.dell.com/dvdstore/
The Exchange server is a Microsoft messaging and email server. Only the server portion of Exchange is recreated in this workload set; the client workloads contribute to the I/O only indirectly, through their requests to the messaging server.
In an official test, there are two hypervisor workloads that are performed in VMware virtual infrastructure environments and require a VMware vCenter server to perform the operations. Due to the use of the KVM hypervisor, this was not an official VMmark or IOmark-VM benchmark, and the hypervisor workloads were not run.

Understanding Results
IOmark-VM produces results indicating the response time of a storage system under a particular workload. Based on established criteria, these results in turn dictate how many total virtual machine sets are supported by a specific storage configuration, and at what average response time. The report is audited for accuracy and issued by Evaluator Group, Inc., an independent storage analyst firm.

Benchmark Criteria
IOmark has established the benchmark criteria for the IOmark-VM workload.
The performance requirements are established as follows:
For all application workloads:
   o Workloads are scaled in sets of 8 workloads
   o 70% of response times for I/Os must not exceed 30 ms
   o All storage must reside on the storage system under test
   o The replay must complete within 1 hour and 15 seconds for each 1 hour workload
For hypervisor operations (note: not run in this test):
   o Each set of 21 workloads must run 1 instance of the following workloads:
   o Clone, deploy, boot, software upgrade, VM deletion
   o Storage migration (aka Storage vMotion) between storage volumes
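A pass/fail check against the two quantitative criteria above can be sketched as follows. This is an illustrative sketch, not the official IOmark validation tool, and the function name and parameters are hypothetical.

```python
def meets_iomark_vm_criteria(latencies_ms, replay_seconds,
                             workload_hours=1.0):
    """Check the two quantitative IOmark-VM criteria described above.

    latencies_ms   -- per-I/O response times in milliseconds
    replay_seconds -- wall-clock time the replay actually took
    Returns True only if 70% of response times do not exceed 30 ms and
    the replay finished within 1 hour and 15 seconds per 1 hour of
    captured workload.
    """
    within_30ms = sum(1 for lat in latencies_ms if lat <= 30.0)
    latency_ok = within_30ms / len(latencies_ms) >= 0.70
    # Time budget: 3,600 s plus a 15 s allowance per workload hour.
    time_budget = workload_hours * (3600.0 + 15.0)
    timing_ok = replay_seconds <= time_budget
    return latency_ok and timing_ok

# Example: 80% of I/Os at or under 30 ms, replay finished in 3,600 s.
print(meets_iomark_vm_criteria([5, 10, 20, 25, 40], 3600))  # True
```

The replay-time criterion is what keeps results honest: a storage system that cannot keep up causes the replay to stretch beyond the budget, failing the run even if individual latencies look acceptable.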

More Information about IOmark-VM
For more information about the IOmark benchmark, a theory of operations guide, published results and more, visit the official website (http://www.iomark.org). Some content is restricted to registered users, so please register on the site to obtain all available information and the latest results.

About Evaluator Group
Evaluator Group Inc. is dedicated to helping IT professionals and vendors create and implement strategies that make the most of the value of their storage and digital information. Evaluator Group services deliver in-depth, unbiased analysis of storage architectures, infrastructures and management for IT professionals. Since 1997 Evaluator Group has provided services for thousands of end users and vendor professionals through product and market evaluations, competitive analysis and education. www.evaluatorgroup.com Follow us on Twitter @evaluator_group

Copyright 2014 Evaluator Group, Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written consent of Evaluator Group Inc. The information contained in this document is subject to change without notice. Evaluator Group assumes no responsibility for errors or omissions. Evaluator Group makes no expressed or implied warranties in this document relating to the use or operation of the products described herein. In no event shall Evaluator Group be liable for any indirect, special, consequential or incidental damages arising out of or associated with any aspect of this publication, even if advised of the possibility of such damages. The Evaluator Series is a trademark of Evaluator Group, Inc. All other trademarks are the property of their respective companies. This document was developed with HGST funding.
Although the document may utilize publicly available material from various vendors, including HGST, VMware and others, it does not necessarily reflect the positions of such vendors on the issues addressed in this document. Virident and FlashMax are registered trademarks of HGST, Inc. and its affiliates in the United States and/or other countries. All other trademarks are the property of their respective owners.