Performance Analysis and Testing of Storage Area Network



Transcription:

Performance Analysis and Testing of Storage Area Network
Yao-Long Zhu, Shu-Yu Zhu and Hui Xiong
Data Storage Institute, Singapore
Email: dsizhuyl@dsi.nus.edu.sg
http://www.dsi.nus.edu.sg

Motivations
- What should we do to optimize a storage system that is connected directly to the storage network instead of directly to a server?
- How do IP storage, Fibre Channel, and InfiniBand compare?
- Do we need new algorithms to replace the RAID technology introduced in the 1980s?
- How can we evaluate and analyze the performance of a Storage Area Network easily and quickly?
Modeling and simulation is a faster way to study these questions than implementation and testing.
[Diagram: block-level storage connects FC-SANs over FC/DWDM and FCIP, while file-level IP storage connects clients on LANs, through routers and a WAN, to servers, NAS, and iSCSI IP storage systems.]

Key Points
- Build a queuing network model for total SAN performance analysis from the perspective of networking
- Analyze the effects of the I/O workload on SAN performance
- Analyze the effects of the disk cache and the Fork/Join model
- Analyze the scheduling algorithms
- Compare the theoretical and experimental results

Modeling of SAN
[Diagram: clients connect through an IP switch to servers, which connect through an FC switch to a RAID controller and its disks.]

Queuing Network Model
From the network's point of view, the SAN is a chain of service centers, each with its own I/O queue: the client hosts (λ_host1, λ_host2), the IP switch, the servers, the FC-SW network (λ_fc-sw), the disk array controller and cache (λ_da), the FC-AL network (λ_fc-al), and each disk, itself modeled as a disk controller and cache center (λ_disk) followed by the head-disk assembly (λ_hda).
Performance metrics: I/O response time, throughput (MB/s), and IOPS.
[Diagram: queuing network model of the SAN, with an arrival rate λ and an I/O queue at each service node.]
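
This structure lends itself to a simple analytical sketch. The snippet below is a minimal illustration, not the authors' actual model: it treats each service center as an M/M/1 queue in tandem, sums the per-node response times, and flags the node with the highest utilization as the bottleneck. All node names and service rates are hypothetical placeholders.

```python
# Minimal sketch of the slide's queuing-network idea: the SAN as a chain of
# service centers. Service rates below are illustrative assumptions only.

def mm1_response_time(lam: float, mu: float) -> float:
    """Response time of an M/M/1 queue: T = 1 / (mu - lambda)."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (mu - lam)

def san_response_time(iops: float, service_rates: dict[str, float]) -> float:
    """End-to-end response time of a tandem network: the same request stream
    flows through every node, so total time is the sum of per-node times."""
    return sum(mm1_response_time(iops, mu) for mu in service_rates.values())

def utilizations(iops: float, service_rates: dict[str, float]) -> dict[str, float]:
    """Per-node utilization rho = lambda / mu; the node with the highest
    utilization is the system bottleneck."""
    return {name: iops / mu for name, mu in service_rates.items()}

# Hypothetical service rates (requests/ms) for each node in the chain.
rates = {"server": 12.0, "fc_switch": 10.0, "dacc": 8.0, "disk_array": 6.0}

for iops in (1.0, 3.0, 5.0):
    rho = utilizations(iops, rates)
    bottleneck = max(rho, key=rho.get)
    print(f"load={iops}/ms  T={san_response_time(iops, rates):.3f} ms  "
          f"bottleneck={bottleneck} (rho={rho[bottleneck]:.2f})")
```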

SAN Performance for Big Sequential I/O
I/O workload: sequential 1 MB reads. System configuration: 1 server + 1 DACC + 5 disks.
Legend: HDA = head-disk assembly (mechanical part); DACC = disk array controller and cache; FC-SW = FC fabric switch.
[Figure: system response time varies with system throughput (MB/s) for sequential 1 MB I/O.]
[Figure: utilizations of the service nodes vary with system request rate (IOPS) for sequential 1 MB I/O.]
FC = Bottleneck!

SAN Performance for Big Random I/O
The disk response time is the largest portion (40%~60%) of the whole response-time distribution.
Performance for random 1 MB I/O: 5 hard disks: 66 MB/s; 15 hard disks: 79 MB/s; 25 hard disks: 82 MB/s.
[Figure: system response time varies with system throughput (MB/s) for random 1 MB I/O.]
[Figure: utilizations of the service nodes vary with system request rate (IOPS) for random 1 MB I/O.]
Physical Drive = Bottleneck!

SAN Performance for Small Sequential I/O
Single user, sequential 4 KB I/O: Server = Bottleneck! (45 IOPS)
2 users, sequential 4 KB I/O: Disk Array Controller = Bottleneck! (56 IOPS)
[Figure: system response time varies with throughput (MB/s) for sequential 4 KB I/O, single user and 2 users.]
[Figure: utilizations of the service nodes vary with system request rate (IOPS) for sequential 4 KB I/O, single user and 2 users.]

SAN Performance for Small Random I/O
Performance for random 4 KB I/O: 5 hard disks: 420 IOPS; 15 hard disks: 1260 IOPS; 25 hard disks: 2100 IOPS.
[Figure: system response time varies with throughput (MB/s) for random 4 KB I/O, 5 HDDs and 15 HDDs.]
[Figure: utilizations of the service nodes vary with system request rate (IOPS) for random 4 KB I/O, 5 HDDs.]
Physical Drive = Bottleneck!

Empirical Model Comparison
[Figure: theoretical vs. experimental SAN throughput (MB/s) vs. I/O size (1 KB to 1024 KB) for a sequential I/O workload.]
The tests are based on multiple 733 MHz Pentium hosts with a 64-bit, 66 MHz PCI bus, QLogic QLA 22A HBAs, a 1 Gb Brocade Silkworm 2400 FC switch, and a self-developed virtual FC disk. IOMeter is used as the benchmark tool.
Both the test and the theoretical results show that the FC network is the system bottleneck for big I/O sizes (>32 KB), and that the storage-system controller overhead is the system limitation for small I/O sizes (<16 KB).

Lesson #1: Disk Cache / Controller (DCC) Performance
- Small I/O: the DCC overhead is larger than the physical-drive overhead.
- Large I/O: the physical-drive overhead dominates.
- A single disk's cache MAY be the bottleneck.
- Disk array cache performance depends largely on the disk stripe size.
[Diagram: hard disk model: requests (λ_disk) pass through the disk controller and cache center and then (λ_hda) the head-disk assembly.]
[Figure: bandwidths of the disk unit and the DCC vary with I/O size (1 KB to 128 KB) for a single hard disk.]
[Figure: response time varies with I/O size when the request rate is 2 IOPS per hard disk.]
[Figure: disk cache throughput (MB/s) varies with I/O size (1 KB to 1024 KB).]
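
This lesson reduces to a two-term service-time model: a fixed per-request controller overhead plus a size-proportional media transfer time. A minimal sketch follows; the overhead and bandwidth constants are assumptions chosen only to show the crossover, not measured values from the paper.

```python
# Illustrative two-term model of per-request disk service time:
#   service_time = controller_overhead + io_size / media_bandwidth
# Constants are assumptions for illustration, not measured values.

CONTROLLER_OVERHEAD_MS = 0.5   # fixed DCC overhead per request (assumed)
MEDIA_BANDWIDTH_MB_S = 40.0    # sustained media transfer rate (assumed)

def service_time_ms(io_size_kb: float) -> float:
    transfer_ms = (io_size_kb / 1024.0) / MEDIA_BANDWIDTH_MB_S * 1000.0
    return CONTROLLER_OVERHEAD_MS + transfer_ms

# Small I/O is dominated by controller overhead, large I/O by transfer time.
for kb in (1, 4, 16, 64, 256, 1024):
    total = service_time_ms(kb)
    overhead_share = CONTROLLER_OVERHEAD_MS / total
    print(f"{kb:5d} KB: {total:6.2f} ms  controller share = {overhead_share:5.1%}")
```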

Lesson #2: Disk Array and Fork/Join Model
In the Fork/Join model, each request j arriving at the disk array controller and cache (λ_da) forks over the FC-AL network (λ_fc-al) into sub-requests for disks 1 to k (λ_disk each); request j leaves only after all of its sub-requests have completed.
The response time is:

T_R(λ) = T_S(k) + δ_k · μ_disk² · ρ_disk · E[T_S,disk²] / (2 · (μ_disk − λ_disk))

where ρ_disk = λ_disk / μ_disk; the second term is the M/G/1 queue waiting time scaled by the Fork/Join factor δ_k.
[Figure: Fork/Join access-time ratio and queue-waiting-time ratio vary with disk number.]
[Figure: response-time ratio varies with disk number and system utilization.]
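
A direct transcription of this approximation into Python follows. The δ_k values are hypothetical (the slide only shows the ratio growing with the number of disks), and the exponential-service second moment is an assumption for the example.

```python
# Fork/Join response-time approximation from the slide:
#   T_R(lambda) = T_S(k) + delta_k * mu^2 * rho * E[S^2] / (2 * (mu - lambda))
# The second term equals the M/G/1 queue waiting time scaled by delta_k.

def fork_join_response_time(ts_k: float, delta_k: float, lam: float,
                            mu: float, es2: float) -> float:
    """ts_k: service time with k disks (ms); lam, mu: per-disk arrival and
    service rates (1/ms); es2: second moment of per-disk service time (ms^2)."""
    if lam >= mu:
        raise ValueError("per-disk queue is unstable")
    rho = lam / mu
    wait = (mu ** 2 * rho * es2) / (2.0 * (mu - lam))
    return ts_k + delta_k * wait

# Example with assumed numbers: exponential service, so E[S^2] = 2 / mu^2.
mu = 0.2                       # per-disk service rate: one request per 5 ms
es2 = 2.0 / mu ** 2
for k, delta_k in ((1, 1.0), (5, 1.3), (25, 1.5)):   # hypothetical delta_k
    t = fork_join_response_time(ts_k=5.0, delta_k=delta_k, lam=0.1, mu=mu, es2=es2)
    print(f"k={k:2d}  T_R = {t:.2f} ms")
```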

Lesson #3: Algorithms
FAA: Fairness Access Algorithm; RCF: Read Command First; FLF: FL-port First; CF: Commands First.
[Figure: average data waiting time (ms) varies with utilization for FAA, RCF, FLF, and CF.]
[Figure: average command waiting time (ms) varies with utilization for FAA, RCF, FLF, and CF.]
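
The slide does not define these algorithms beyond their names, but the qualitative effect of a policy like Read Command First can be illustrated with the standard two-class non-preemptive priority M/G/1 waiting-time formulas. The workload split and service parameters below are assumptions, not the paper's data.

```python
# Two-class non-preemptive priority M/G/1 queue, illustrating why a policy
# such as Read Command First lowers the favored class's waiting time:
#   W0      = sum_i lambda_i * E[S_i^2] / 2          (mean residual service)
#   Wq_high = W0 / (1 - rho_high)
#   Wq_low  = W0 / ((1 - rho_high) * (1 - rho_high - rho_low))

def priority_waits(lam_hi, lam_lo, es_hi, es2_hi, es_lo, es2_lo):
    rho_hi, rho_lo = lam_hi * es_hi, lam_lo * es_lo
    if rho_hi + rho_lo >= 1.0:
        raise ValueError("queue is unstable")
    w0 = (lam_hi * es2_hi + lam_lo * es2_lo) / 2.0
    wq_hi = w0 / (1.0 - rho_hi)
    wq_lo = w0 / ((1.0 - rho_hi) * (1.0 - rho_hi - rho_lo))
    return wq_hi, wq_lo

# Assumed workload: reads prioritized over writes, exponential 1 ms services
# (so E[S^2] = 2 ms^2), arrivals split evenly across the two classes.
for util in (0.2, 0.5, 0.8):
    lam = util / 2.0
    wq_read, wq_write = priority_waits(lam, lam, 1.0, 2.0, 1.0, 2.0)
    print(f"utilization={util:.1f}  Wq(read)={wq_read:.2f} ms  "
          f"Wq(write)={wq_write:.2f} ms")
```

At low utilization the two classes wait about equally; as utilization grows, the read (high-priority) wait stays small while the deprioritized class's wait grows sharply, which is the trade-off the slide's waiting-time curves compare across FAA, RCF, FLF, and CF.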

Lastly
- The queuing network model has been shown to be a useful tool for analyzing overall SAN system performance.
- The model was also used to analyze our advanced SAN storage technology, DA 2.
- We missed the proceedings publication; however, this paper will be made available on the conference website.
More questions? Visit our poster session. Thank you for your attention!