napp-it ZFS Storage principles for Video Performance




napp-it ZFS Storage principles for Video Performance

Server: Dual Xeon, 40 GB RAM, Intel X540, MTU 9000 (jumbo frames)
Server OS: Oracle Solaris 11.3 (SMB 2.1); OmniOS 151017 bloody (SMB 2.1)
Client OS: MacPro OSX 10.10 with Sanlink2; Windows 8.1 Pro with Intel X540-T

Questions:
- SMB1 vs SMB2 vs NFS
- MTU 1500 vs MTU 9000
- NVMe vs SSD vs disks
- Mirror vs Raid-Z
- Single user vs multiple users
- Single pool vs multiple pools
- Solaris 11.3 vs OmniOS beta

Published: December 2015, Günther Alka, CC-BY-SA, see http://creativecommons.org/licenses/by-sa/2.0

Client hardware: Apple MacPro, OSX 10.10, 4-core 3.7 GHz, 12 GB RAM
Network: 10GBase-T via Sanlink2
Local disk: Apple NVMe, quite similar to a Samsung 950 Pro M.2 (write 994 MB/s, read 961 MB/s)
Switch: HP 5900, 48 x 10GBase-T
Storage server: Dual Xeon, 40 GB RAM, Intel X540 10GBase-T; jumbo frames enabled in all tests
Server OS (round 1): Oracle Solaris 11.3, as it offers SMB 2.1
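For reference, jumbo frames on the Solaris/illumos server side are set per datalink with dladm; a minimal sketch, assuming the Intel X540 shows up as ixgbe0:

    # list datalinks and their current MTU
    dladm show-linkprop -p mtu

    # raise the MTU of the 10 GbE link to 9000 (link name ixgbe0 is an assumption)
    dladm set-linkprop -p mtu=9000 ixgbe0

The IP interface on top of the link may need to be re-plumbed before the new MTU takes effect, and the switch (here the HP 5900) must allow jumbo frames as well.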

SMB1, 10 GbE (Solaris)
Pool Layout: 3 x mirror of 2 TB HGST 7200 rpm
- SMB1, MTU 1500, 10 GbE (connect to cifs://)
- SMB1, MTU 9000, 10 GbE (connect to cifs://)
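A pool of three mirror vdevs like the one tested would be created roughly like this on Solaris/illumos (the disk ids are placeholders, not the ones from this setup):

    # stripe of three 2-way mirrors: iops scale with the number of vdevs
    zpool create tank mirror c1t0d0 c1t1d0 \
                      mirror c1t2d0 c1t3d0 \
                      mirror c1t4d0 c1t5d0
    zpool status tank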

SMB 2.1, 10 GbE (Solaris)
Pool Layout: 3 x mirror of 2 TB HGST 7200 rpm
- SMB 2.1, MTU 1500, 10 GbE (connect to smb://)
- SMB 2.1, MTU 9000, 10 GbE (connect to smb://)
SMB 2.1 gives a boost over SMB1 and a second boost when using jumbo frames.

SMB 2.1, 10 GbE
Pool Layout: 7 x Raid-0 of 2 TB HGST 7200 rpm
Pool Layout: 10 x Raid-Z2 of 2 TB HGST 7200 rpm
- SMB 2.1, MTU 9000, 7 x HGST 2 TB Raid-0
- SMB 2.1, MTU 9000, 10 x HGST 2 TB Raid-Z2, single vdev
The Raid-0 pool shows no advantage despite its much higher iops rate (7 x the iops of a single disk vs. 1 x a single disk for the Z2 vdev).
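For comparison, the two layouts from this test differ mainly in iops, not in sequential throughput; a sketch with placeholder disk ids:

    # 7-disk stripe ("Raid-0"): no redundancy, iops scale with the disk count
    zpool create stripe7 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0

    # 10-disk Raid-Z2 as a single vdev: two disks of redundancy,
    # but roughly the iops of one member disk
    zpool create z2pool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
                               c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0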

Pool Layout: 10 x Raid-Z2 of 2 TB HGST 7200 rpm (Solaris)
- Windows 8.1 Pro, 10 GbE, MTU 1500
- Windows 8.1 Pro, 10 GbE, MTU 9000
On Windows, MTU 1500 is weak on reads; with MTU 9000, read values are much higher. The hardware is a 6-core Xeon with 32 GB RAM, an Intel X540-T1 and a local SanDisk Extreme Pro SSD.

Pool Layout: 10 x Raid-Z2 of 2 TB HGST 7200 rpm
- OSX 10.9, MTU 1500
- OSX 10.9, MTU 9000
There seems to be a problem with OSX 10.9 and MTU 9000, as reads stay low even with MTU 9000. An update from 10.9 to 10.11 left the results unchanged; a clean install of OSX 10.11 fixes this problem.
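All MTU 9000 tests assume jumbo frames on the client side as well; as a reference, they can be enabled like this (the interface names en0 and "Ethernet" are assumptions for these machines):

    # OSX: set MTU 9000 on the 10 GbE interface
    sudo networksetup -setMTU en0 9000

    # Windows 8.1 (elevated prompt): set MTU 9000 on the X540 interface
    netsh interface ipv4 set subinterface "Ethernet" mtu=9000 store=persistent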

SMB 2.1, 10 GbE, MTU 9000 (Solaris)
Pool Layout: single NVMe Intel P750
Pool Layout: 3 x Intel S3610 1.6 TB, Raid-0
- SMB 2.1, MTU 9000, 1 x NVMe Intel P750-400
- SMB 2.1, MTU 9000, 3 x Intel S3610 1.6 TB Raid-0

SMB 2.1, 10 GbE, MTU 9000
- Single user on a 10-disk Raid-Z2 (HGST 2 TB)
- Six concurrent clients on the same Raid-Z2
Performance degradation is noticeable with a Raid-Z2 pool of 10 spindles: write values for 6 concurrent users jump between 130 and 200 MB/s, while reads stay at 300 MB/s and better.

SMB 2.1, 10 GbE, MTU 9000 (Solaris)
- Single user on a basic pool of 1 NVMe Intel P750
- Six concurrent clients on the same basic pool
The performance degradation with 6 users on writes is lower due to the higher iops of the NVMe. Read degradation is quite similar to the spindle-based Z2 pool; the ZFS RAM cache seems to even this out.

SMB 2.1, 10 GbE, MTU 9000, multi-pool test vs Z2
- Single user on the 1-disk NVMe Intel P750 pool, running concurrently with 5 users on the Raid-Z2 pool
- Two pools accessed at the same time
The NVMe pool runs at nearly full speed concurrently to the Z2 pool with its 5 users.
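This multi-pool behaviour is what motivates the "one pool per user" recommendation in the summary; a sketch of per-user pools with their own SMB shares, using illumos syntax and made-up pool/filesystem names:

    # a dedicated pool and share for one editing client (disk ids are placeholders)
    zpool create video1 mirror c5t0d0 c5t1d0
    zfs create -o sharesmb=on video1/projects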

Round 2: SMB2 on OmniOS bloody 151017 beta (December 2015)
During these tests, the new OmniOS with SMB 2.1 support arrived in a beta state. I installed the OmniOS 151017 beta on the same hardware with the same settings. After the first tests it behaves differently from Solaris: as before, some configurations perform well and others very badly, but some configurations that were fine with Solaris are bad with OmniOS and vice versa.

MacPro, OSX 10.10.5, Raid-Z2, MTU 9000, SMB 2.1:
- Values on Solaris were 500 MB/s write and 800 MB/s read
- Values on OmniOS were 520 MB/s write and 101 MB/s read

For further tests I updated all MacPros to OSX 10.11.2 and Windows on the MacPros to the newest Sanlink2 driver. After this, I saw consistently high write and read performance with OSX and good values on Windows.

Pool Layout: 10 x Raid-Z2 of 2 TB HGST 7200 rpm: good values, slightly slower than Solaris 11.3, with 501 MB/s write and 836 MB/s read.
Pool Layout: 1 x basic Intel P750 NVMe: similar to Solaris, with 605 MB/s write and 830 MB/s read.
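On the OmniOS beta, the protocol level of the kernel SMB server can be inspected and set with sharectl; a hedged sketch, as the property name and its default may differ in this beta:

    # show the SMB server properties
    sharectl get smb

    # allow SMB 2.1 (max_protocol is the illumos property name; assumption for 151017)
    sharectl set -p max_protocol=2.1 smb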

MacPro, OSX 10.10.5, Raid-Z2, MTU 9000, SMB 2.1 (OmniOS)
Concurrent access: 5 concurrent users/MacPros on the Raid-Z2 pool; 1 user on another NVMe pool
Same as on Solaris: with many users working on the same pool, especially the write values are quite low. Another user working on a different pool gets better values, so the pool is more of a limiting factor than the network.

MacPro with Win 8.1, Raid-Z2, MTU 9000, 1 user
Same MacPro with Windows 8.1: older Promise Sanlink2 driver (early 2015) vs. the newest Sanlink2 driver (Nov 2015). A newer driver can double read values; this is similar to OSX, where another OS release can have the same effect. In general, Windows is slightly slower than OSX with default settings. Solaris and OmniOS perform similarly.

Summary

SMB1 vs SMB 2.1:
You must use SMB2+ on OSX, as the performance of SMB1 on OSX is lousy.

MTU 1500 vs MTU 9000 (jumbo frames):
You must use jumbo frames for performance; the improvement is about 30-50%.

SMB2 vs NFS:
I have had stability problems with NFS on OSX, as disconnects happen, as well as very bad NFS performance. Even with jumbo frames, the best that I got without any tweaks was 70 MB/s write and 150 MB/s read. Together with the missing authentication and authorisation of NFS, this is not an option.

Raid-0 vs mirror vs Raid-Z:
For a single-user test, differences are minimal as long as your pool is capable of a sequential performance > 1 GB/s. In a multi-user/multi-thread environment there is a difference, as the iops performance of Raid-Z is lower.

NVMe vs SSD vs spindle-based pools:
For a single-user test, differences are minimal as long as your pool is capable of a sequential performance > 1 GB/s. In a multi-user/multi-thread environment there is a difference, as the iops of SSDs are much higher than those of spindles. A single NVMe has nearly no advantage over 3 x 6G SSDs.

Solaris 11.3 vs OmniOS 151017 beta:
On OSX, Solaris is slightly faster than the OmniOS beta; on Windows, results are similar. We will see where the final release lands.

Windows 8.1 vs OSX 10.11:
I had not expected this result: out of the box, OSX is slightly faster on 10G than Windows (same MacPro hardware). Windows results on a 6-core Xeon PC workstation (X540-T1) are similar to the results on the MacPro with Windows.

In general, 10G performance depends on basic settings like jumbo frames and SMB2. Beside that, driver quality and general OS settings are essential. On some configs, 10G performance is similar to a 1G network. While 1G networks reach about 100 MB/s, 10G is in the best case 6-8x as fast, on Windows 3-6x. I am under the impression that OS defaults are more optimized for 1G than for 10G performance; maybe some further tweaking can help.

Conclusion:
- Use SMB2+ and jumbo frames.
- Care about OS version and drivers.
- Solaris is out of the box slightly faster than the current OmniOS beta. Both are a huge step forward compared to their predecessors, especially on OSX.
- Prefer a single pool per user. Any pool layout is ok with > 1 GB/s sequential performance; I would go Raid-Z2, but a 6G SSD mirror per user is also ok. For multiple users with lower video-quality demands, such an average spindle pool is ok as well. For multiple users or multiple threads/processes and a higher video quality, a high-iops pool of SSDs is an option.
- There is room for tweaking, especially on Windows. Let's hope that OS defaults will care more about 10G.