1) Disk performance

When factoring in disk performance, one of the larger impacts on a VM comes from the type of virtual disk you choose for your VMs in Hyper-V Manager/SCVMM, such as fixed versus dynamic. A great article explaining the performance impact of the various virtual disk options is available from Microsoft HERE; check it out. On the physical side, there are certain fundamental truths of drive performance:

- A 15,000 RPM SAS drive is faster in every way than a 5,400 RPM IDE drive.
- A RAID 10 array provides much better random read/write performance than a RAID 5 array. RAID 5 pays a write penalty of roughly four I/Os per random write versus two for RAID 10, so twelve spindles at 150 IOPS each yield on the order of 450 random write IOPS in RAID 5 but about 900 in RAID 10.
- The more spindles you have in an array, the faster it generally is.
- An array with less activity will be more responsive than a heavily utilized one.

In reality, budget and availability will likely play a very big role in what disk subsystem you use; the fastest option may not be the best use of your IT budget, or SAS may not be able to provide the storage capacity you need. In this cluster, we have connected our nodes to a Dell MD3000i iSCSI SAN. iSCSI introduces a few more performance factors to consider, such as network bandwidth, parameters like jumbo frames, and contention from competing network traffic. As with any complex device, I strongly recommend reading any and all performance tuning documentation the manufacturer offers for your unit; Dell has a couple of documents for this unit alone. HERE is an excellent Dell tuning document for this unit, and I would suggest starting with it. It is pretty deep, and you could spend a long time sorting out the best configuration for your needs.

The MD3000i we used is fully populated with the maximum of 15 SATA drives. SATA drives are the lowest-performing option available for this SAN, but they provide a massive amount of storage at a very reasonable cost, which in all reality fits the needs of this unit very well. The SAN does have an optional second RAID storage module on the backplane, which offers twice as many connections out to our iSCSI network; however, by design this particular SAN processes I/O asymmetrically, meaning that even though we have two modules and four iSCSI ports, access to any specific virtual disk is always through just the two iSCSI ports on one of the two modules. To address our constraints as best as possible, the SAN was configured based on the information gathered by following the Dell tuning guides. We are using numerous drives configured in a RAID 10, providing the best possible overall performance at the disk level for our given needs. This was the highest-cost configuration in terms of disks, however, especially compared to a RAID 5 option, but the storage capacity after configuring the array was still well above
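If you want to see the fixed-versus-dynamic difference for yourself before committing, you can create one of each from an elevated prompt with diskpart (built into Windows Server) and run your preferred I/O test against them. A minimal sketch, assuming a D:\VHDs folder exists; the paths are placeholders and sizes are in MB:

    DISKPART> create vdisk file="D:\VHDs\test-fixed.vhd" maximum=40960 type=fixed
    DISKPART> create vdisk file="D:\VHDs\test-dynamic.vhd" maximum=40960 type=expandable

Note that the fixed disk takes far longer to create, since the full 40 GB file is allocated up front; that is the same trade-off that buys you steadier write performance once the VM is running.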

my forecasted needs. The constraint of asymmetrical processing was offset in two ways. The first was by configuring the SAN and all hosts to actively use both iSCSI ports on each RAID controller module. The second was by creating two virtual disks on the SAN and assigning each a different module as its owner.

a. The first area related to our cluster that we can monitor and adjust is the load carried by each of the RAID storage modules, since each owns one of the two CSV disks. There are several different ways to do this; even PerfMon will provide a fair bit of valid and useful information relating to disk performance (a quick host-side sketch follows).
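If PerfMon is all you need for that host-side view, typeperf (built into Windows) can log the core physical disk counters to CSV without building a Data Collector Set. A minimal sketch; the interval, sample count, and output path are placeholders to adjust:

    typeperf "\PhysicalDisk(_Total)\Disk Transfers/sec" "\PhysicalDisk(_Total)\Avg. Disk sec/Read" "\PhysicalDisk(_Total)\Avg. Disk sec/Write" -si 15 -sc 240 -f CSV -o c:\perflogs\disk-baseline.csv

That samples every 15 seconds for an hour; run it on each node and compare the host-side latencies against what the SAN-side statistics below report.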

Here are three ways to view statistical information on your MD3000i SAN:

i. From the iSCSI tab's View iSCSI Statistics in the MDSM utility
1. Open the Dell MDSM utility.
2. Click on the iSCSI tab.
3. Click the View iSCSI Statistics link at the bottom of the page.
4. From this page you can view a large variety of statistics, set baseline statistics, and save the information to your local workstation in CSV format.

A quick look at the byte count columns alone shows that RAID Controller Module 1 has both transmitted and received over twice as many bytes during the two-week period since the baseline was set. This module is the owner of CSV Volume1. Moving some of the disk-intensive VMs from Volume1 to Volume2, which is owned by Module 0, could help balance these numbers out, but it is always best to take a sampling of several different statistics to get a clearer picture before making any significant changes. A unique event could have caused a large byte count on one module over the other, and factors like backups can influence these statistics as well. From the gathered statistics, find and track the values that matter most to your server needs; these vary greatly by server role, such as transmitted bytes for a read-only website or received bytes for an archive file server, while for SQL, overall IOPS are king.

ii. The MDSM Support tab is another location within the utility where you can get excellent performance information.
1. Open the Dell MDSM utility.
2. Click on the Support tab.
3. Click the Gather Support Information link.
4. Click Save Support Information.
5. Choose a name and location for the file and click Start.
6. This process gathers a large amount of information and will take some time.
7. Once complete, you will have a zip file.
8. Inside the zip file are two files of interest for performance monitoring:
a. performancestatistics.csv
b. statecapturedata.txt
9. Open and review performancestatistics.csv.

a. The statistics show virtual disk information separated by the module it refers to. This is helpful because you may have more than just your Hyper-V cluster virtual disks running on your SAN, and you need the whole picture to make the best choices.
b. Note in particular the % read requests statistic and the cache read check hits counter.
10. Open statecapturedata.txt in Excel.
11. This is a large and detailed report of statistical information on the SAN. Although it does not have the best formatting, you can find most of the information you will need about cache hits, IOPS, reads, and writes within it. The Dell documentation helps decipher this information to a degree, and the forums help further.

iii. Use the CLI to gather specific statistical information over a set period of time.
1. From the command line in the MDSM client directory, run:

    smcli -n MD3000i -c "set session performancemonitorinterval=5 performancemonitoriterations=250; save storagearray performancestats file=\"c:\\md3000iperfstats.csv\";"

- The name after -n should equal your SAN's given name.
- The interval determines how often it polls the information, in seconds.
- Iterations sets the maximum number of polls it will perform before ending the process.
In this example, the SAN named MD3000i is polled every 5 seconds, 250 times, before completing (about 20 minutes).
2. Once the capture completes, open the generated file.
3. This simple command provides some excellent performance information in a clear and straightforward view. Sort and filter the capture iterations to get quick overall averages from each module, as well as from each virtual disk on your SAN. This is my preferred tool for monitoring over a workweek, as it can be scripted (see the sketch below) and provides good information for most of my assessment needs in an easily adjustable format.
4. The link HERE covers this command in greater detail, including how to create graphs and charts from the information.
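Because the capture is a single command, it is easy to wrap in a batch file and schedule. A minimal sketch, assuming SMcli is on the PATH, the array is registered as MD3000i as above, and the c:\perfstats and c:\scripts folders are placeholders:

    @echo off
    rem Capture ~20 minutes of MD3000i performance statistics to CSV.
    smcli -n MD3000i -c "set session performancemonitorinterval=5 performancemonitoriterations=250; save storagearray performancestats file=\"c:\\perfstats\\md3000i-perf.csv\";"

Saved as c:\scripts\sanperf.cmd, it can then be scheduled to run each workday morning:

    schtasks /create /tn "MD3000i perf capture" /tr c:\scripts\sanperf.cmd /sc weekly /d MON,TUE,WED,THU,FRI /st 09:00

One caveat: each run overwrites the same output file, so rename or move the CSV between runs if you want to keep a history.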

2) Network performance

Not all network equipment is created equal. Testing shows that different gigabit switches and different network cards provide different levels of performance, and this holds true regardless of what traffic your network is supporting. On the iSCSI side, we have already mitigated our limitations as much as possible by isolating all iSCSI traffic from LAN traffic on dedicated iSCSI network equipment. We implemented two switches and used two network cards on each host, creating a load-balanced configuration. We then ensured that jumbo frames were enabled and functioning across the iSCSI topology, which works well with the iSCSI protocol and improves performance (a quick verification sketch follows at the end of this section). Monitoring the iSCSI switches for load and for port traffic anomalies, such as high collision counts, is recommended, but beyond this there is not much that can be tuned per se in our configuration, other than possibly adjusting which switch ports are used or replacing underperforming equipment. If you are using QoS or VLANs for iSCSI traffic, you may find monitoring and adjusting more valuable, as there are more variables involved in either of those options. On the LAN side, things are a bit different, since hosts and VMs require varying levels of network performance, and if you put too many network-hungry VMs on one node, you will have a bottleneck.

a. To tune LAN performance, monitor network utilization on each of the VMs, and if a VM shares a NIC with the host, monitor the host's network utilization as well. There are numerous network utilization tools available; feel free to use whichever you like most. The key is to monitor all your VMs over a typical work week (and/or month), then use the gathered data to make informed adjustments to VM node placement and balance the network load. You may find that if you run a large number of VMs on all your hosts, it is prudent to add another NIC or two to each of your hosts and create additional virtual NICs for the cluster, sharing the network load that way.
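To confirm jumbo frames really are working end to end on the iSCSI network, a don't-fragment ping at a jumbo payload size from a host to a SAN port is a quick test. A minimal sketch; the target address is a placeholder for one of your SAN's iSCSI ports, and 8972 bytes is a 9000-byte frame minus 28 bytes of IP and ICMP headers:

    ping -f -l 8972 192.168.130.101

If the reply is "Packet needs to be fragmented but DF set", something in the path (NIC, switch port, or SAN port) is not passing jumbo frames.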

3) Processor performance

I rarely find in our environments that the processors on our Hyper-V hosts are the source of a bottleneck; generally one of the other resources constrains us first. However, every environment is different, and as with any key resource, processor use should be monitored regardless. Monitoring processor use for Hyper-V guests and hosts is not as clear-cut as it is on a standalone physical server. One issue is that processor utilization within a VM can be influenced greatly by the processor count you set for the VM: you can see low CPU usage within a VM guest while the host's CPUs are in fact being taxed heavily.

a. To accurately measure the overall processor utilization of the guest operating systems, use the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time performance monitor counter on the Hyper-V host operating system, via a remote PerfMon session and/or a user-defined Data Collector Set, and evaluate guest operating system processor utilization against the following thresholds:
i. Less than 60% consumed = Healthy
ii. 60% - 89% consumed = Monitor or Caution
iii. 90% - 100% consumed = Critical, performance will be adversely affected

(Screenshots: live PerfMon view; Data Collector log review.)
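For an ad hoc capture of that counter without opening PerfMon, typeperf works here too. A minimal sketch; the node name and output path are placeholders:

    typeperf "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -s HV-NODE1 -si 5 -sc 720 -f CSV -o c:\perflogs\hv-cpu.csv

That samples the host every 5 seconds for an hour; compare the results against the thresholds above. The same one-liner pattern also covers the host memory check in the next section, for example with the \Memory\Available MBytes counter.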

4) Memory allocation

RAM is king in the VM world: generally, the more RAM your host has, the more VMs it can handle.
a. Where you can run into performance degradation is when one of two situations occurs:
i. You have assigned insufficient RAM to a VM. This will cause varying issues and performance loss within the guest, through paging and the like.
ii. Your host no longer has sufficient RAM free for its own use. This too can cause performance issues for all the VMs it is hosting, as well as system stability issues.
b. Monitoring the hosts for free memory is easy enough, and it is even commonly reviewed in SCVMM. You will also want to verify that you have sufficient RAM assigned to your running VMs, which can be monitored just as easily. You may even find that some of your VMs have been over-assigned memory, and you can reclaim some of it for the host to reallocate.
c. Another important point to note, especially in a Hyper-V cluster, is overcommitting RAM. If you overcommit memory across all of the VMs residing on your cluster, you may find yourself with a problem should one of your nodes fail: you could end up with insufficient memory available to keep all of the failed node's VMs in service.

Summary

You now have a Hyper-V cluster configured and VMs online using SCVMM. From this point forward you can:
- Move running VMs between any nodes in your cluster with no real loss of service.
- Put nodes into maintenance mode to service a host without shutting down VMs.
- Continue to tune overall cluster performance by monitoring the nodes and SAN modules, making appropriate adjustments as the data dictates.
- Survive a node failure, possibly even a few failures, depending on how heavily committed your nodes are.