
Qsan Document - White Paper
Performance Monitor Case Studies
Version 1.0, November 2014

Copyright

Copyright © 2004-2014 Qsan Technology, Inc. All rights reserved. No part of this document may be reproduced or transmitted without written permission from Qsan Technology, Inc.

Trademarks

All products and trade names used in this manual are trademarks or registered trademarks of their respective companies.

Qsan Technology, Inc.
4F., No.103, Ruihu St., Neihu Dist., Taipei City 114, Taiwan (R.O.C.)
Tel: +886-2-7720-2118
Fax: +886-2-7720-0295
Email: Sales@QsanTechnology.com
Website: www.qsantechnology.com

Introduction

This document is intended to help users who encounter performance issues while using Qsan products. Customers may notice slow performance in routine operations after the I/O load on a RAID group of the SAN storage increases. With the performance monitor it is now easy to figure out where the bottleneck is. This document walks through several cases and shows how to determine which component is the most likely cause of the slow performance.

Usage

The new feature, the performance monitor, is located in the left menu of the web UI of the Qsan SAN storage system. It supports monitoring of the installed hard drives, the iSCSI traffic, and the Fibre Channel traffic.

1. Select the Disk, iSCSI, or Fibre Channel (on FC series models) tab according to what you would like to monitor.
2. Choose whether to monitor the hard drives installed in the local unit (the head unit, e.g. an AegisSAN LX F600Q-D316) or in a JBOD unit.
3. Select the hard drives that you want to monitor.

4. If, for example, Slot 1 and Slot 2 are selected, two charts are displayed.
5. Each chart shows the Throughput of the selected hard drive on the left side and the Latency on the right side.
6. The green line represents the Throughput of the hard drive, while the amber line represents the Latency.
7. The numbers displayed below the chart indicate the exact throughput the hard drive is delivering.

The iSCSI and Fibre Channel monitoring pages work in the same way.

1. You can monitor both controllers in the same tab.
2. The TX value (green line) represents the transmitted iSCSI traffic, i.e. the read commands generated by the connected host(s)/server(s). For example, a server connected to the iSCSI volume copies some files out of the volume onto its local drive.
3. The RX value (amber line) represents the received iSCSI traffic, i.e. the write commands generated by the connected host(s)/server(s). For example, a server connected to the iSCSI volume copies some files into the volume from its local drive.

Environment

As mentioned at the beginning of this document, we will walk through some examples of real scenarios that users may encounter. The test environment in our laboratory is as follows.

Host: Intel Xeon CPU E5620, 2.40GHz, 16GB RAM
Host OS: Windows 2008 R2, 64-bit
Storage: AegisSAN Q500-P10-D316 (dual controller)
Controller Firmware: V1.2.0 (20140731_1100)
HDD: WD2003FYYS-02W0B0, 2TB, 7200 RPM, 64MB cache (x4)
RAID Level: RAID 5
Virtual Disk: 1000GB

Diagram

Preparation

Before we start, it is useful to know how much throughput a single HDD (the model used in this document) can deliver. We first created a RAID 0 virtual disk with only one HDD, attached a LUN, connected to it from the server, and tested the maximum throughput via IOmeter.

Read commands triggered from the server: around 150MB/s.
Write commands triggered from the server: around 100MB/s.
100% random read commands with 4K block size: around 150 IOPS.
100% random write commands with 4K block size: around 100 IOPS.
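These baselines drive the arithmetic in all of the following cases. As a minimal sketch (hypothetical Python, not part of the product), the expected throughput of a RAID 5 group can be estimated from the single-drive figures, using this paper's rule of thumb that an N-drive RAID 5 group delivers roughly (N-1) times a single drive:

    # Rough RAID 5 throughput estimate from the single-drive baselines above.
    # Assumption: an N-drive RAID 5 group delivers about (N - 1) times the
    # single-drive figure, matching the rule of thumb used in this paper.

    SINGLE_DRIVE_READ_MBS = 150   # MB/s, measured above via IOmeter
    SINGLE_DRIVE_WRITE_MBS = 100  # MB/s, measured above via IOmeter

    def raid5_estimate(drives):
        """Return (expected read MB/s, expected write MB/s) for a RAID 5 group."""
        data_drives = drives - 1  # one drive's worth of capacity holds parity
        return (data_drives * SINGLE_DRIVE_READ_MBS,
                data_drives * SINGLE_DRIVE_WRITE_MBS)

    print(raid5_estimate(4))  # (450, 300) -> the 450MB/s read figure in Case 1
    print(raid5_estimate(3))  # (300, 200) -> the figures used in Case 2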

Case Studies

Case 1 - The maximum throughput of a single iSCSI port has been reached

Now we run the first test case: triggering read commands via IOmeter from the server over one single iSCSI connection. In theory, based on the result of the pre-test above, a RAID 5 group created from 4 hard drives should deliver up to 450MB/s of read performance. However, in the chart below we can observe that the maximum throughput of the single iSCSI connection (1GbE) has reached 903.97Mbps, which is around 112MB/s.

In the Disk throughput monitor we can also watch the values currently delivered by each hard drive in the RAID group: around 28MB/s each, around 112MB/s in total.

So the questions users may raise in this case are along these lines: "I got limited/slow performance after the I/O load on the iSCSI volume increased from the server. I have only one iSCSI connection logged in, but I am not sure whether that is normal. Where should I start to sort out the problem?"

For these or similar questions, you may suggest that users check the following:

1. Whether the maximum performance of that single iSCSI port has been reached.
2. Whether any specific hard drive in the RAID group shows a latency value much higher than the others; it might be the one slowing down the overall performance. Suggest that the user replace the problematic hard drive and check again.
3. Add iSCSI connections one by one between the SAN storage and the server, and configure MPIO to improve performance accordingly.

The result in the following charts is the maximum throughput that the 4 hard drives can theoretically provide. Check the values next to TX: 900Mbps each, so 3600Mbps in total, which is around 450MB/s. The throughput of each hard drive is 113MB/s, 452MB/s in total. The sketch below reproduces this bottleneck arithmetic.
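To make the bottleneck explicit: the delivered read throughput is bounded by whichever is smaller, the total iSCSI link capacity or the RAID group's capability. A minimal sketch, assuming the paper's rough figures (a 1GbE port carries at most ~125MB/s before protocol overhead, and the RAID 5 estimate from the Preparation section):

    # Bottleneck model: delivered read throughput is limited by whichever
    # is smaller - total iSCSI link capacity or RAID group capability.
    # Figures are the rough numbers used in this paper, not exact values.

    GBE_LINK_MBS = 1000 / 8  # 1GbE carries at most ~125MB/s, less after overhead

    def expected_read_mbs(iscsi_ports, drives, per_drive_mbs=150.0):
        link_limit = iscsi_ports * GBE_LINK_MBS
        raid_limit = (drives - 1) * per_drive_mbs  # RAID 5 rule of thumb
        return min(link_limit, raid_limit)

    print(expected_read_mbs(1, 4))  # 125.0 -> the single 1GbE port is the
                                    #   bottleneck (~112MB/s observed in Case 1)
    print(expected_read_mbs(4, 4))  # 450.0 -> with 4 ports the drives become
                                    #   the bottleneck, matching the charts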

Note that we also tried adding one more iSCSI connection (5 connections in total) in this case. The throughput increased to around 500MB/s, and the throughput of the hard drives came close to 150MB/s each. Theoretically the performance should not exceed 450MB/s because of the parity in the RAID 5 group, but our system has a read-ahead feature that enhances sequential read performance, so you may see higher read figures.

Case 2 - The maximum throughput of the hard drives in the RAID group has been reached

In this case we removed one of the hard drives and re-created the RAID 5 group with the remaining 3. According to the baseline test above, we should get around 300MB/s for read and 200MB/s for write in this scenario. Let's see the results. On the iSCSI side it is about 700Mbps * 4 ports = around 350MB/s.

On the disk side, 112640KB/s + 114688KB/s + 119304KB/s = around 346MB/s, close to 350MB/s.

In this case we can clearly see that although there are 4 iSCSI connections (so the maximum should be around 440MB/s), only 350MB/s is delivered. Users might ask: "Why did I get less than 400MB/s while 4 iSCSI connections were logged in?" You may advise users to check the following; a sketch of the arithmetic follows the list.

1. Whether each iSCSI portal delivers a similar performance value.
2. Whether all hard drives (in the same RAID group) have reached their maximum throughput (in this case, around 112MB/s ~ 120MB/s each).
3. If the maximum throughput of the hard drives has been reached, try re-creating the RAID group with more drives; otherwise, this is the performance limit of the currently installed hard drives.
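The unit juggling in this case (iSCSI ports reported in Mbps, disks in KB/s) is easy to get wrong, so here is a small sketch reproducing the two sums with the figures quoted above:

    # Reproduce the Case 2 arithmetic: sum the per-port and per-disk
    # readings and convert both to MB/s. Figures are those quoted above.

    port_tx_mbps = [700, 700, 700, 700]  # iSCSI TX per 1GbE port, in Mbps
    disk_kbs = [112640, 114688, 119304]  # per-drive throughput, in KB/s

    iscsi_total_mbs = sum(port_tx_mbps) / 8  # Mbps -> MB/s
    disk_total_mbs = sum(disk_kbs) / 1000    # KB/s -> MB/s (decimal, as above)

    print(iscsi_total_mbs)      # 350.0 -> matches the iSCSI-side figure
    print(int(disk_total_mbs))  # 346   -> matches the disk-side figure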

Case 3 - Less performance from the hard drives and iSCSI portals: check the latency

This case inherits the environment from the previous one; we use the same RAID configuration, a RAID 5 virtual disk created from only 3 hard drives. It shows that even when you get less performance from the hard drives and iSCSI portals, in some situations this is a normal and reasonable result.

We simulate a scenario via IOmeter by sending random write commands to the logged-in iSCSI volume. Imagine that many virtual machines (implemented with, for example, VMware vSphere or Hyper-V) are stored on the volume and generating intensive, small I/O. Such I/O can cause slow performance and high latency because the hard drives respond to the incoming commands too slowly. Here we use a 100% random write I/O pattern via IOmeter with a 512B block size to simulate this behavior. The latency on each hard drive grows to over 30ms; in reality, users may see even higher values. The throughput delivered on the iSCSI portals is correspondingly low.

In such a scenario, users should understand that the incoming I/O to this virtual disk is very intensive and heavy. You may consider the following; the sketch after this list shows why such low figures are expected:

1. Separate the I/O into different time frames to decrease the load during peak hours.
2. Move some jobs to a different RAID group.
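The low portal throughput here is expected rather than a fault: with tiny random writes, throughput is simply IOPS times block size, and average latency grows with the number of outstanding commands (Little's law). A small illustrative sketch; only the 512B block size comes from this test, while the IOPS and queue-depth figures are hypothetical:

    # Why small random writes show low MB/s and high latency.
    # Only the 512B block size is taken from this test; the IOPS and
    # queue-depth figures below are hypothetical illustrations.

    BLOCK_SIZE_B = 512

    def throughput_mbs(iops):
        """Throughput in MB/s implied by an IOPS figure at this block size."""
        return iops * BLOCK_SIZE_B / 1_000_000

    def avg_latency_ms(outstanding_ios, iops):
        """Little's law: average latency = outstanding I/Os / IOPS."""
        return outstanding_ios / iops * 1000

    # Even 5000 random-write IOPS is only ~2.6MB/s at 512B...
    print(round(throughput_mbs(5000), 1))      # 2.6
    # ...and 32 outstanding I/Os at 1000 IOPS already means 32ms of latency,
    # in line with the >30ms per-drive latency observed in this case.
    print(round(avg_latency_ms(32, 1000), 1))  # 32.0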

Case 4 - Less performance: check the latency (drive failure)

Here is an abnormal case that users may encounter in daily work. We included one problematic hard drive (with a S.M.A.R.T. error) in this test to simulate the failure of one of the hard drives. We created a RAID 5 virtual disk with 4 hard drives, the problematic one included. You will see slow performance in both the disk throughput and the iSCSI portals; please refer to the following charts.

You may see that the iSCSI performance is unstable and slow on each portal, and that the problematic drive (in Slot 2) delivers lower throughput than the others. Sometimes you may also see that the latency of the problematic drive is noticeably higher than the others.
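This is exactly the pattern the latency charts expose: one drive standing out from its RAID-group peers. A minimal sketch of the same comparison in code; the sample values are illustrative, not actual measurements from this test:

    # Flag a drive whose average latency stands out from its RAID-group peers.
    # Sample values are illustrative, not measurements from this test.

    from statistics import median

    latency_ms = {"Slot 1": 12.0, "Slot 2": 85.0, "Slot 3": 11.5, "Slot 4": 13.0}

    baseline = median(latency_ms.values())
    suspects = [slot for slot, ms in latency_ms.items() if ms > 3 * baseline]
    print(suspects)  # ['Slot 2'] -> candidate for replacement (see steps below)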

Based on these observations, you should suggest that users:

1. Replace the problematic hard drive with a new one, run the daily jobs again, and check the status after the rebuild of the virtual disk is complete.
2. If the same behavior still occurs, replace the installed MUX board (if any), then try again.
3. If the identical problem still happens, move the hard drive (with its MUX board) to another disk slot in the chassis and try again to narrow down the issue.

After operations 2 and 3 above, it is suggested to download the debug_info file (go to Maintenance\System Information in the web interface and click the Download System Information button) and report the situation to the Qsan support team (Support@QsanTechnology.com) for further assistance.

Summary

These case studies demonstrate the performance monitor function. Keep in mind that the performance monitor is intended for debugging and verification purposes only. It is not recommended to keep the monitoring objects enabled all the time: the storage system spends extra CPU and memory resources on this function, which may affect the responsiveness of the web interface as well as the overall performance/throughput.

Applies To

AegisSAN Q500: FW 1.3.0
AegisSAN LX: FW 3.4.0