Promise Pegasus R6 Thunderbolt RAID Performance




Introduction

This document summarises the results of some performance testing I carried out on a Pegasus R6 Thunderbolt RAID drive from Promise Technology, Inc.

Test Environment

The unit I used was a Pegasus R6 Thunderbolt RAID fitted with six drives, each of 1TB capacity (Hitachi HDS72101). I hosted the RAID array on an iMac with a 3.4GHz Intel Core i7 CPU and 16GB of RAM, running OS X 10.7.4.

Method

I originally intended to use the Blackmagic Disk Speed Test software (available for free from Apple's App Store) to run the tests, but its results were very inconsistent at the levels of performance this RAID array achieves, so I devised my own method using simple command-line tools to test sustained throughput. Note that I am not particularly interested in random I/O and did not test it.

I configured the RAID into many different combinations using the supplied utility, then mounted the resulting array and tested it using the following method:

1. Mount the disk as an HFS+ journalled filesystem using Disk Utility.
2. Use the dd command-line tool to write 100,000 blocks of 1MB each (100GB) onto the Pegasus, in the following manner: dd if=/dev/zero of=pegasusfile.junk bs=1024k count=100000
3. Use the OS X purge command to flush the disk cache.
4. Use dd to read the file back from the Pegasus, discarding it to /dev/null, and time the operation, like this: dd if=pegasusfile.junk of=/dev/null bs=1024k

Using a large (100GB) file averages things out over 3-4 minutes, ensuring the results are consistent and that caching effects are mitigated, since my iMac has rather a lot of RAM installed. If you are unfamiliar with these Unix commands, you can start a Terminal and type man dd or man purge to see their manual pages.

Here is an example transcript from part of the testing:

bash-3.2$ df -lh
Filesystem     Size   Used   Avail  Capacity  Mounted on
/dev/disk0s2   931Gi  157Gi  774Gi    17%     /
/dev/disk1s2   3.6Ti  1.0Gi  3.6Ti     1%     /Volumes/5DriveRAID5
bash-3.2$ pwd
/volumes/5driveraid5
bash-3.2$ dd if=/dev/zero of=junk bs=1024k count=100000
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 176.827257 secs (592994552 bytes/sec)
bash-3.2$ purge
bash-3.2$ dd if=junk of=/dev/null bs=1024k
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 200.161851 secs (523864061 bytes/sec)
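The write/flush/read sequence lends itself to a small script. This is a minimal sketch, not the script I used: the file name and the scaled-down default size are my own choices, and purge is only attempted where it exists (it is macOS-specific).

```shell
#!/bin/sh
# Sketch of the test sequence, scaled down by default.
# The real runs used count=100000 (100GB) on the mounted Pegasus volume.
TESTFILE="${1:-pegasusfile.junk}"
COUNT="${2:-100}"   # number of 1MB blocks; 100 here, 100000 in the real tests

# Write phase -- dd prints the throughput on stderr when it finishes
dd if=/dev/zero of="$TESTFILE" bs=1024k count="$COUNT" 2>&1 | tail -n 1

# Flush the disk cache between the write and the read (macOS only)
command -v purge >/dev/null 2>&1 && purge

# Read phase -- read the file back, discard it, and time the operation
dd if="$TESTFILE" of=/dev/null bs=1024k 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

Run it with no arguments for a quick 100MB sanity check, or pass a path on the test volume and 100000 as the second argument to reproduce the full 100GB test.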

The performance numbers achieved are visible in parentheses above: the first one listed is for writing to the disk, the second for reading from it. To confirm my results, I ran OS X's built-in Activity Monitor (to be found in Applications/Utilities) and monitored the "Data read/sec" and "Data written/sec" fields at the bottom of its window.

Results

Here is a table showing the results of the various combinations I found interesting; it is not an exhaustive test of all possible configurations the Pegasus is capable of.

Drives  RAID level  Capacity (efficiency)  Write  Read
2       RAID0       2TB (100%)             368    365
3       RAID0       3TB (100%)             526    480
3       RAID5       2TB (66%)              351    358
4       RAID0       4TB (100%)             624    507
5       RAID5       4TB (80%)              592    523
6       RAID0       6TB (100%)             626    504
6       RAID1E      3TB (50%)              199    477
6       RAID5       5TB (83%)              600    504
6       RAID6       4TB (66%)              539    517
6       RAID50      4TB (66%)              568    510

Note: speeds are shown in millions of bytes per second (MB/s), as reported by dd.

How to interpret the table: find the row matching the number of drives in your array and the RAID level you want to use, then read out the three pieces of information:
a. The available capacity and its percentage space efficiency
b. The sustained write rate in MB/s
c. The sustained read rate in MB/s
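The capacity percentages in the table follow directly from how each RAID level spends drives on redundancy. A quick sketch in shell arithmetic (assuming six 1TB drives, as fitted to the R6; the RAID50 case assumes two 3-drive RAID5 sub-arrays, each losing one drive to parity):

```shell
# Usable capacity for n 1TB drives at each RAID level in the table.
# RAID0 keeps all n; RAID1E mirrors (n/2 usable); RAID5 loses one drive
# to parity; RAID6 loses two; RAID50 loses one per RAID5 sub-array.
n=6
echo "RAID0:  $((n))TB ($((100 * n / n))%)"
echo "RAID1E: $((n / 2))TB ($((100 * (n / 2) / n))%)"
echo "RAID5:  $((n - 1))TB ($((100 * (n - 1) / n))%)"
echo "RAID6:  $((n - 2))TB ($((100 * (n - 2) / n))%)"
echo "RAID50: $((n - 2))TB ($((100 * (n - 2) / n))%)"
```

The integer percentages this prints (100, 50, 83, 66, 66) are exactly the efficiencies shown in the 6-drive rows of the table.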

Analysis

Looking at the 2-drive RAID0 configuration, it seems you can get around 180 MB/s from each spindle. The 3-drive RAID5 refines that statement: you get around 180 MB/s from each spindle that is delivering useful data (i.e. not parity data). Looking further down the table, reads saturate the Pegasus somewhere internally (probably the controller) at around 500 MB/s, and writes saturate it at somewhere around 600-630 MB/s.

Being fairly inquisitive, I wondered whether the Pegasus could deliver twice the performance with two RAID sets running, and whether there is any way to get even more speed out of it than the maximum of roughly 630 MB/s when writing and 500 MB/s when reading. The two obvious possibilities were:

1. Create two separate 3-drive RAID0 arrays on the Pegasus and stripe them together with OS X's software RAID, in the hope of achieving a write performance of 1,052 MB/s (2 x 526 MB/s) and a read performance of 960 MB/s (2 x 480 MB/s).

2. Create a 4-drive RAID0 array and a 2-drive RAID0 array on the Pegasus and stripe them together with OS X's software RAID, in the hope of achieving a write performance of 992 MB/s (368 + 624) and a read performance of 872 MB/s (365 + 507). Yes, this configuration wastes 2TB of space because the stripes are different sizes; I know that, but I was purely interested in the maximum sustained throughput at whatever cost.

Effectively, both of these configurations are 6-drive RAID0 setups, with the striping done partly in the Pegasus and partly in OS X.
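The roughly-180 MB/s-per-spindle figure quoted above can be checked against the write column of the results table; this is only an approximation, since it ignores controller and filesystem overheads:

```shell
# Write throughput per data-carrying spindle, from the results table (MB/s).
# RAID0 stripes data across all drives; a 3-drive RAID5 stripe has only
# 2 data spindles per stripe (the third drive's worth holds parity).
w_raid0_2=368; w_raid0_3=526; w_raid5_3=351
echo "2-drive RAID0: $((w_raid0_2 / 2)) MB/s per spindle"
echo "3-drive RAID0: $((w_raid0_3 / 3)) MB/s per spindle"
echo "3-drive RAID5: $((w_raid5_3 / 2)) MB/s per data spindle"
```

All three come out between 175 and 184 MB/s, consistent with the claim that each data spindle contributes around 180 MB/s until the controller saturates.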
Well, in the event the following were the very best performances I could achieve:

Configuration                                                      Write (MB/s)  Read (MB/s)
2-way OS X software stripe across two 3-drive RAID0 arrays         636           563
2-way OS X software stripe across a 4-drive and a 2-drive RAID0    638           542

So you can get marginally more performance doing something crazy like this, but you would have to be very desperate for those last few percent to choose it over the obvious 5- or 6-drive RAID5 or RAID6.
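For reference, OS X's software stripe can be created from the command line with diskutil's appleRAID verb rather than through Disk Utility's GUI. The sketch below only echoes the command, since running it is destructive; the set name, and the device identifiers under which the Pegasus arrays appear (disk2 and disk3 here), are assumptions you would need to check against diskutil list.

```shell
# Dry run: print the command that would stripe two Pegasus-exported RAID0
# sets together using OS X software RAID. Device names are assumptions.
CMD="diskutil appleRAID create stripe PegasusStripe JHFS+ disk2 disk3"
echo "$CMD"   # remove the echo (and run with admin rights) to actually create it
```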

Conclusion

I note that the performance difference between 4-drive RAID0 and 6-drive RAID0 is negligible; in other words, adding two extra spindles in going from 4 drives to 6 doesn't increase throughput, so I suspect something (the controller?) is reaching saturation inside the array. This appears to be confirmed by the fact that no combination exceeded 626 MB/s writing or 523 MB/s reading.

I also noted that, as with nearly all disks, performance is significantly better at the outer edge of the platters than towards the centre (nearer the spindle). The inner and outer parts of the disk turn at the same number of revolutions per second, but at the outer edge a greater length of track passes underneath the head in the same time because the radius is greater there. To test this, I made a 5TB array and created a 100GB first partition in Disk Utility, followed by a 4,800GB partition, then a final 100GB partition at the inner edge of the disks. The array sustained 550+ MB/s in the first 100GB partition but barely 300 MB/s in the final one. So if you are using this drive and want to place things in the very fastest area, put them at the outer edge of the disk, i.e. the first partition in Disk Utility.

In summary, the Pegasus Thunderbolt RAID is blazingly fast, the match of many SSDs in sustained sequential throughput, though probably not in random I/O. As regards the best configuration, that will depend on your intended usage, whether databases, video/photo editing or whatever, but it is easy to see why Promise Technology ships the R6 configured as a 6-drive RAID5: it is almost as fast as any other RAID level, offers high space efficiency and gives a good amount of resilience to disk failure.
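An alternative way to probe the inner/outer zone difference, without repartitioning the array, is to read fixed-size regions straight from the raw device with dd's skip option. Shown as a dry run because the device name, and the block offset of the inner zone on a 5TB array, are assumptions for illustration:

```shell
# Dry run: print dd commands that would read ~1GB from the outer edge and
# ~1GB near the inner edge of the raw array device (skip counts 1MB blocks;
# 5TB is ~5,000,000 such blocks). Device name assumed -- check diskutil list.
DEV=/dev/rdisk1
OUTER="dd if=$DEV of=/dev/null bs=1024k count=1000"
INNER="dd if=$DEV of=/dev/null bs=1024k count=1000 skip=4999000"
echo "$OUTER"
echo "$INNER"   # drop the echoes to actually measure each zone
```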