The DataCore Server
Hyper-converged and Virtual SAN Best Practices Guide
September 2016
The Data Infrastructure Software Company

Table of contents

- Overview
- DataCore Hyper-converged and Virtual SAN Deployment Options on Windows
- DataCore Hyper-converged and Virtual SAN Deployment Options on ESXi
- SANsymphony vs Virtual SAN Licensing
- Configuring Windows Server 2012 R2 for SANsymphony
- Example 1: Single Node with DataCore Loopback Ports
- Example 2: Single Node with two iSCSI loopbacks on two virtual NICs
- Example 3: DataCore Loopback Ports and FC HBAs with direct connect cables (Windows Server Failover Cluster)
- Example 4: DataCore Loopback Ports and FC HBAs with direct connect cables
- Example 5: iSCSI with a switch
- Example 6: iSCSI with direct connect
- SANsymphony in a guest VM on ESXi 5.5 and 6.0
- Example 7: Single Node Configuration
- Example 8: Two Node Configuration
- Known Issues
- Appendix A: Useful Resources
- Appendix B: Deployment Tools
- Previous Changes

Overview

The DataCore Hyper-converged solution uses either the DataCore SANsymphony or the DataCore Virtual SAN package to create a high-performing hyper-converged infrastructure using DAS or internal storage. For the purposes of this design document we will refer to both installations as the DataCore Server after initially explaining the difference. This document covers the best practice design guidelines for configuring a DataCore hyper-converged deployment.

DataCore Hyper-converged and Virtual SAN Deployment Options on Windows

Physical Windows Server (no server hypervisor installed)
The DataCore Server runs directly on top of the Windows Server operating system. All local block storage devices that are not initialized are automatically detected as suitable for the pool. All existing filesystems can be used as pass-through disks. An application such as Microsoft Exchange or SQL may be installed alongside the DataCore software. This is a typical Virtual SAN deployment that allows the running application to take full advantage of DataCore caching and storage capabilities. Microsoft Failover Cluster or another clustering technology can be used to provide application failover between servers.

Physical Windows Server with Hyper-V
The DataCore Server runs in the root partition (also referred to as the parent partition) on top of the Windows Server operating system. All local block storage devices that are not initialized are automatically detected as suitable for the pool. The Microsoft Hyper-V hypervisor role is installed alongside SANsymphony. DataCore does not recommend installing SANsymphony in a Hyper-V guest VM because it introduces virtualization-layer overhead and prevents the DataCore software from directly accessing CPU, RAM and storage.

DataCore Hyper-converged and Virtual SAN Deployment Options on ESXi

DataCore Server in a Windows Guest VM on ESXi

Backend disk storage configuration options:
- Assign uninitialized storage devices from the server hypervisor to the DataCore Server virtual machine as raw storage devices (physical RDMs in ESXi).
- Present VMDK virtual disks that reside on VMFS datastores to the DataCore Server virtual machine.
- Use DirectPath to map the storage controller (RAID or PCIe flash device) directly to the DataCore virtual machine.

Using Fibre Channel connections in a virtual machine (VMware ESXi only):
The DataCore Server must be running in a virtual machine with the HBAs set up in VMDirectPath I/O mode (see http://kb.vmware.com/kb/1010789). This assigns the physical HBA directly to the virtual machine and allows the DataCore Fibre Channel driver to bind to it. For Fibre Channel HBAs supported by VMware with VMDirectPath I/O, see http://www.vmware.com/resources/compatibility/search.php. The DataCore Fibre Channel driver cannot bind to VMware NPIV Fibre Channel adapters, so these cannot be used for any SANsymphony-V Front-End or Mirror ports.
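As a rough illustration of the first option, the following PowerCLI sketch attaches an uninitialized local device to the DataCore Server VM as a physical-mode RDM. This is only a sketch under assumptions: the host name, VM name and naa device identifier are placeholders, and a PowerCLI session is assumed to be connected already.

    # Assumption: PowerCLI session already opened with Connect-VIServer.
    $vmhost = Get-VMHost -Name "esxi-01.lab.local"      # placeholder ESXi host name
    $vm     = Get-VM     -Name "DataCore-SDS1"          # placeholder DataCore Server VM name

    # List candidate local devices and note the ConsoleDeviceName of the one to pass through
    Get-ScsiLun -VMHost $vmhost -LunType disk |
        Select-Object CanonicalName, CapacityGB, ConsoleDeviceName

    # Map the chosen (uninitialized, unused) device to the DataCore VM as a physical-mode RDM
    New-HardDisk -VM $vm -DiskType RawPhysical `
        -DeviceName "/vmfs/devices/disks/naa.600508b1001c00000000000000000001"   # placeholder device

Verify that the chosen device carries no VMFS datastore or other data before mapping it; the DataCore Server will treat it as raw pool storage.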

SANsymphony vs Virtual SAN Licensing

The main difference between the regular SANsymphony license and the Virtual SAN license is that the regular license allows you to serve virtual disks to any host. Installations with Virtual SAN licenses only allow you to serve virtual disks to hosts that have DataCore software installed; please see Exhibit 1. Another difference is capacity: with Virtual SAN, capacity is licensed per node, while with regular SANsymphony, capacity is licensed per node and per group.

Exhibit 1: Virtual SAN licenses

Configuring Windows Server 2012 R2 for SANsymphony

DataCore Server in a Root Partition of Windows 2012 R2
Installing the DataCore Server in a root partition of Windows 2012 R2 means that the software is installed directly on the system and not in a guest VM. In this type of deployment the DataCore Server runs alongside other Windows applications such as Hyper-V, MS SQL, MS Exchange, etc.

Single Node Configuration Examples

Configuration: DataCore Loopback Ports
  Performance: Highest performance.
  Cost: Free - no additional hardware required.
  Synchronous Mirroring: Requires an FC HBA for failover when setting up a mirrored pair.

Configuration: iSCSI loopback on a virtual NIC
  Performance: Throughput is currently limited by CPU and VM Queue to 350-400 MB/s per port; however, setting up two virtual NICs doubles the throughput. Adding more than two virtual NIC ports does not increase throughput.
  Cost: Free - no additional hardware required.
  Synchronous Mirroring: Requires a NIC for failover when setting up a mirrored pair.

Example 1: Single Node with DataCore Loopback Ports

In this configuration a virtual disk is served over the DataCore Loopback Port. The Loopback Port is installed with the DataCore software. The end result is a very reliable and extremely fast I/O path that takes advantage of RAM caching and the entire suite of DataCore Server features. The pool disk can be any raw enterprise disk available in the Windows OS. The application running alongside the DataCore Server can be anything, including SQL, Exchange or Hyper-V. Another DataCore node can be added to create synchronous mirrors, and FC HBAs can be installed to achieve multipath I/O; for more details please see Example 3.

Example 2: Single Node with two iSCSI loopbacks on two virtual NICs

Add the Hyper-V role in order to create the virtual NICs, even if there are no plans to use virtual machines. In this example the MS iSCSI initiator and the DataCore target share the same IP address. On the first path the initiator logs in to and from 10.0.0.1/24; on the second path the initiator logs in to and from 10.0.2.1/24, effectively creating two iSCSI loopback connections. MS MPIO is set up with the default path policy of Round Robin, which helps utilize both paths. This configuration can be scaled to provide multipath I/O by adding another DataCore node. To take advantage of synchronous mirrors, add physical NICs; for more details please see Example 6.
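A minimal PowerShell sketch of the Example 2 loopback setup is shown below. It assumes the Hyper-V and Multipath-IO features are installed, the DataCore FE iSCSI target is already listening on both addresses, and the virtual switch names and target IQN are placeholders for this example only.

    # Assumption: run elevated on the DataCore Server; features installed beforehand, e.g.:
    # Install-WindowsFeature -Name Hyper-V, Multipath-IO -IncludeManagementTools -Restart

    # Create two internal virtual switches; each adds a host virtual NIC
    New-VMSwitch -Name "iSCSI-Loop1" -SwitchType Internal
    New-VMSwitch -Name "iSCSI-Loop2" -SwitchType Internal
    New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-Loop1)" -IPAddress 10.0.0.1 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-Loop2)" -IPAddress 10.0.2.1 -PrefixLength 24

    # Register both portals and log in to the local DataCore target over each address (loopback)
    New-IscsiTargetPortal -TargetPortalAddress 10.0.0.1 -InitiatorPortalAddress 10.0.0.1
    New-IscsiTargetPortal -TargetPortalAddress 10.0.2.1 -InitiatorPortalAddress 10.0.2.1
    $iqn = "iqn.2016-09.example.lab:datacore-node1-fe"   # placeholder DataCore FE target IQN
    Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 10.0.0.1 `
        -InitiatorPortalAddress 10.0.0.1 -IsMultipathEnabled $true -IsPersistent $true
    Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 10.0.2.1 `
        -InitiatorPortalAddress 10.0.2.1 -IsMultipathEnabled $true -IsPersistent $true

    # Claim iSCSI devices with MPIO and keep the default Round Robin policy
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

The actual target IQN can be read from the DataCore Management Console or discovered with Get-IscsiTarget after the portals are registered.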

Multi Node Configuration Examples

Configuration: DataCore Loopback Ports and FC HBAs
  Performance: Highest performance.
  Cost: Requires FC HBAs for Front-End ports and Mirror ports.

Configuration: iSCSI with a switch
  Performance: Good performance.
  Cost: Requires a network switch.

Configuration: iSCSI with direct connect cable (must use a virtual NIC for at least one Front-End port)
  Performance: Good performance. Virtual NIC throughput is currently limited by CPU and VM Queue to 350-400 MB/s per port; however, setting up two virtual NICs doubles the throughput. Adding more than two virtual NIC ports does not increase throughput.
  Cost: A network switch is not required.

Example 3: DataCore Loopback Ports and FC HBAs with direct connect cables (Windows Server Failover Cluster)

Two physical servers are configured as a synchronous DataCore Server pair and a Windows Server Failover Cluster, with DataCore Loopback Ports and FC HBAs for multipath I/O. Mirrored vdisk1 is served to both Host1 and Host2, where the LUN is shared like a traditional SAN. The shared LUN is then used by the Windows Server Failover Cluster as a resource. DataCore mirrors the data at the virtual disk level between DataCore Server A and B. Mirroring is done over two redundant MR ports configured in initiator/target mode, which permits bidirectional I/O flow.

Another option is not to use Front-End FC HBA ports for multipath I/O: from Example 3 above, remove FC Port 3 and FC Port 4 from both nodes. In such a configuration each node only has access to the local storage and must rely on the Windows Server Failover Cluster to handle failover. The diagram shows only the networks required for DataCore software; additional network connections required for guest VMs and WSFC are not part of the diagram.
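Once the mirrored virtual disk is visible and online on both nodes, it can be brought into the failover cluster. A hedged PowerShell sketch (the cluster resource name is a placeholder, and the disk must already be initialized and formatted):

    # On either cluster node, after the shared DataCore vdisk has been brought online and formatted:
    Get-ClusterAvailableDisk | Add-ClusterDisk           # add the shared LUN as a cluster disk resource

    # Optionally convert it to a Cluster Shared Volume for Hyper-V workloads
    Add-ClusterSharedVolume -Name "Cluster Disk 1"       # placeholder cluster resource name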

Example 4: DataCore Loopback Ports and FC HBAs with direct connect cables

Two physical servers are configured as a synchronous DataCore Server pair with DataCore Loopback Ports and FC HBAs for multipath I/O. Mirrored vdisk1 is served to Host1 and mirrored vdisk2 is served to Host2. The LUNs are not shared but are accessed locally by their respective hosts. DataCore mirrors the data at the vdisk level between DataCore Server A and B. Mirroring is done over two redundant MR ports configured in initiator/target mode, which permits bidirectional I/O flow.

Example 5: iSCSI with a switch

In this configuration we have an iSCSI implementation with redundant switches to avoid a single point of failure. The MS iSCSI configuration is as follows:

Initiator Host   Initiator IP   Target IP   Port Role   Target Host
Host1            10.0.1.1       10.0.1.2    MR          Host2
Host1            10.0.2.1       10.0.2.2    MR          Host2
Host1            10.0.3.1       10.0.3.1    FE          Host1
Host1            10.0.4.1       10.0.4.1    FE          Host1
Host1            10.0.3.1       10.0.3.2    FE          Host2
Host1            10.0.4.1       10.0.4.2    FE          Host2
Host2            10.0.1.2       10.0.1.1    MR          Host1
Host2            10.0.2.2       10.0.2.1    MR          Host1
Host2            10.0.3.2       10.0.3.2    FE          Host2
Host2            10.0.4.2       10.0.4.2    FE          Host2
Host2            10.0.3.2       10.0.3.1    FE          Host1
Host2            10.0.4.2       10.0.4.1    FE          Host1
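To illustrate how the table maps to initiator commands, the sketch below logs in the four FE paths on Host1 (two local loopback paths and two remote paths). The target IQNs are placeholders for this example; the MR rows follow the same pattern with their own addresses.

    # Placeholder FE target IQNs of the two DataCore Servers
    $iqnHost1 = "iqn.2016-09.example.lab:datacore-host1-fe"
    $iqnHost2 = "iqn.2016-09.example.lab:datacore-host2-fe"

    # Host1 front-end paths from the table above
    $paths = @(
        @{Init = "10.0.3.1"; Tgt = "10.0.3.1"; Iqn = $iqnHost1},   # FE, local loopback
        @{Init = "10.0.4.1"; Tgt = "10.0.4.1"; Iqn = $iqnHost1},   # FE, local loopback
        @{Init = "10.0.3.1"; Tgt = "10.0.3.2"; Iqn = $iqnHost2},   # FE, remote
        @{Init = "10.0.4.1"; Tgt = "10.0.4.2"; Iqn = $iqnHost2}    # FE, remote
    )
    foreach ($p in $paths) {
        New-IscsiTargetPortal -TargetPortalAddress $p.Tgt -InitiatorPortalAddress $p.Init
        Connect-IscsiTarget -NodeAddress $p.Iqn -TargetPortalAddress $p.Tgt `
            -InitiatorPortalAddress $p.Init -IsMultipathEnabled $true -IsPersistent $true
    }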

Example 6: iSCSI with direct connect

With direct connect cables between servers, at least one virtual NIC per DataCore instance is required. This ensures that at least one local I/O path stays online when the partner server is shut down, rebooted or otherwise unavailable. If the virtual NIC for the local FE connection is not configured, the link-down condition on the directly connected FE physical NIC will take down the iSCSI network on the surviving node and I/O will stop. The MS iSCSI configuration is as follows:

Initiator Host   Initiator IP   Target IP   Port Role          Target Host
Host1            10.0.1.1       10.0.1.2    MR                 Host2
Host1            10.0.2.1       10.0.2.2    MR                 Host2
Host1            10.0.4.1       10.0.4.1    FE (Virtual NIC)   Host1
Host1            10.0.3.1       10.0.3.2    FE                 Host2
Host2            10.0.1.2       10.0.1.1    MR                 Host1
Host2            10.0.2.2       10.0.2.1    MR                 Host1
Host2            10.0.4.2       10.0.4.2    FE (Virtual NIC)   Host2
Host2            10.0.3.2       10.0.3.1    FE                 Host1

SANsymphony in a guest VM on ESXi 5.5 and 6.0

Configure the ESXi host and the DataCore Server guest VM for low latency:

PHYSICAL NIC
- Disable interrupt moderation: esxcli system module parameters set -m <driver module> -p "InterruptThrottleRate=0"

DATACORE GUEST VM
- CPU: Set to High Shares. Reserve at least 4x the core CPU frequency (example: with a physical CPU at 2 GHz, set the reservation to 8 GHz). NUMA / configure vCPU affinity: VM Settings > Options tab > Advanced > General > Configuration Parameters > numa.nodeaffinity = 0
- Memory: Set to High Shares. Reserve the entire amount of memory assigned to the guest VM.
- Disk: Set to High Shares for all disks attached to the DataCore guest VM.
- Virtual SCSI Controller: Change the type from LSI Logic Parallel to VMware Paravirtual SCSI (PVSCSI). https://kb.vmware.com/kb/1010398
- Virtual NIC: Use VMXNET3.
- Disable interrupt coalescing: VM Settings > Options tab > Advanced > General > Configuration Parameters > ethernetX.coalescingScheme = "disabled"
- Reduce idle-wakeup latencies: VM Settings > Options tab > Advanced > General > Configuration Parameters > monitor_control.halt_desched = "false"

DATACORE GUEST VM on ESXi 5.5 only (not needed for ESXi 6.0)
- Enable adaptive RX ring sizing: disabled
- VMXNET3 Adapter Settings: Small Rx Buffer Size: 1024
- For 1 Gb NICs, set the TCP stack buffer MaxRcvDataSegLen to 32K (0x8000). For details please see Answer 1626 - SANsymphony iSCSI Best Practices.
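The same tuning can be scripted. The PowerCLI sketch below applies the advanced parameters, shares and reservations from the list above; the host and VM names, the 8000 MHz reservation and the NIC driver module name ("ixgben") are assumptions to adapt to your environment, and each vNIC needs its own ethernetN.coalescingScheme entry.

    # Assumption: PowerCLI session connected with Connect-VIServer; names and sizes are examples only.
    $vmhost = Get-VMHost -Name "esxi-01.lab.local"       # placeholder ESXi host
    $vm     = Get-VM     -Name "DataCore-SDS1"           # placeholder DataCore guest VM

    # Physical NIC: disable interrupt moderation (module name depends on the NIC driver in use)
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.system.module.parameters.set.Invoke(@{module = "ixgben"; parameterstring = "InterruptThrottleRate=0"})

    # Guest VM advanced configuration parameters
    New-AdvancedSetting -Entity $vm -Name "numa.nodeaffinity"            -Value "0"        -Confirm:$false
    New-AdvancedSetting -Entity $vm -Name "ethernet0.coalescingScheme"   -Value "disabled" -Confirm:$false  # repeat per vNIC
    New-AdvancedSetting -Entity $vm -Name "monitor_control.halt_desched" -Value "false"    -Confirm:$false

    # High shares plus full CPU/memory reservations (8000 MHz = 4x a 2 GHz core, per the example above)
    Get-VMResourceConfiguration -VM $vm |
        Set-VMResourceConfiguration -CpuSharesLevel High -MemSharesLevel High `
            -CpuReservationMhz 8000 -MemReservationMB $vm.MemoryMB

    # High shares for every disk attached to the DataCore guest VM
    Get-VMResourceConfiguration -VM $vm |
        Set-VMResourceConfiguration -Disk (Get-HardDisk -VM $vm) -DiskSharesLevel High

Power the VM off before changing the SCSI controller type to PVSCSI and before applying the advanced parameters, then power it back on for the settings to take effect.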

Example 7: Single Node Configuration

Example 8: Two Node Configuration

The VMkernel port on host ESXi A (10.0.1.3) is configured to log in to two targets: the first path goes to the local DataCore Server A (10.0.1.1) and the second path to DataCore Server B (10.0.1.2). The VMkernel port on host ESXi B (10.0.1.4) is set up to log in to two targets as well: the first path connects to the local DataCore Server B (10.0.1.2) and the second path to DataCore Server A (10.0.1.1). In this example there is only a single FE port per DataCore Server; this was done for clarity, to avoid cluttering the diagram. Feel free to configure two or four FE ports per DataCore Server for increased throughput. Do not configure VMkernel iSCSI port binding, as the DataCore iSCSI target does not support multi-session iSCSI. For more information please see Answer 1556 - VMware ESXi Configuration Guide.
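For example, on host ESXi A both DataCore FE addresses can be added as dynamic discovery entries on the software iSCSI adapter, with no VMkernel port binding configured. A hedged PowerCLI sketch (host name and adapter selection are assumptions):

    # Add both DataCore FE addresses as Send Targets on the software iSCSI adapter of ESXi A
    $vmhost = Get-VMHost -Name "esxi-a.lab.local"                               # placeholder host name
    $hba    = Get-VMHostHba -VMHost $vmhost -Type IScsi |
              Where-Object { $_.Model -match "Software" }                       # software iSCSI adapter
    New-IScsiHbaTarget -IScsiHba $hba -Address "10.0.1.1" -Type Send            # local DataCore Server A
    New-IScsiHbaTarget -IScsiHba $hba -Address "10.0.1.2" -Type Send            # remote DataCore Server B
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null                  # rescan to discover the vdisks

Repeat on ESXi B with the same two addresses so that each host has a local and a remote path to the mirrored virtual disks.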

Pool Disk Configuration on ESXi

It is best to have dedicated physical RDMs designated as pool disks for the DataCore guest VM, but that is not always possible because of SCSI array controller limitations. In that case, set up a dedicated datastore and use a VMDK virtual disk as the pool disk. For best pool disk performance, set up one VMDK per datastore. The VMDK should be provisioned as Eager Zeroed Thick.
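A hedged PowerCLI sketch of this layout, with the VM name, datastore name and size as placeholders, placing the pool VMDK on a Paravirtual SCSI controller:

    # One Eager Zeroed Thick VMDK on its own datastore, attached to the DataCore guest VM
    $vm   = Get-VM        -Name "DataCore-SDS1"           # placeholder VM name
    $ds   = Get-Datastore -Name "PoolDS01"                # dedicated pool datastore (placeholder)
    $disk = New-HardDisk -VM $vm -CapacityGB 1024 -Datastore $ds -StorageFormat EagerZeroedThick
    New-ScsiController -HardDisk $disk -Type ParaVirtual  # place the pool disk on a PVSCSI controller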

Known Issues

Preferred server is not honored for virtual disks served via iSCSI to the DataCore Server itself. Currently the ALUA protocol is not supported when serving virtual disks to the DataCore Server via iSCSI in a loopback fashion. This applies only to serving virtual disks to the DataCore Server itself, typically when using a Virtual SAN license. Use the following workaround to control which paths are used as Active/Optimized.

Setting the MPIO path policy for iSCSI-served virtual disks: on each DataCore virtual disk served via iSCSI, set the remote server paths to Standby under Microsoft MPIO. To find the remote paths, use the MS iSCSI initiator to identify the disk target:

a) Open the MS iSCSI initiator and, under Targets, select the remote target IQN address.
b) Click Devices and identify the device target port with the disk index to update. Additional information: https://technet.microsoft.com/en-us/library/ee338480(v=ws.10).aspx

Then navigate to MS MPIO and view the DataCore virtual disk:

c) Select a path and click Edit.
d) Verify the correct target device.
e) Set the path state to Standby and click OK to save the change. Additional information: https://technet.microsoft.com/en-us/library/ee619752(v=ws.10).aspx (refer to the section "Configure the MPIO Failback policy setting").

Known Issues Continued

Citrix XenServer Considerations
We are aware of a limitation on XenServer regarding the automatic reattachment of Storage Repositories (SRs) after a reboot when the DataCore Server VM resides on the same XenServer to which the SRs are served. Therefore, only serve storage from the DataCore Server's virtual machine to other XenServers, and not to virtual machines that reside on the same XenServer on which the DataCore Server's virtual machine is running. Please consult Citrix for more information.

Appendix A

Useful Resources
- Answer 1626 - iSCSI Best Practices Guide: use it to optimize TCP/IP and NIC settings for low latency.
- Answer 1556 - VMware ESXi Configuration Guide.
- Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs. (Note: although VMware suggests enabling Turbo Boost when tuning latency-sensitive workloads, we recommend keeping it disabled to avoid any disruption to the CPU.)
- Answer 1348 - DataCore's Best Practice guidelines.

Appendix B

Deployment Tools
To ease the deployment of hyper-converged solutions, DataCore Software provides the following installation packages:
- DataCore Hyper-converged Virtual SAN for Windows
- DataCore Hyper-converged Virtual SAN for vSphere

Software download link: https://datacore.custhelp.com/app/downloads/downloads

Previous Changes
September 2016 - New document created.

COPYRIGHT

Copyright 2016 by DataCore Software Corporation. All rights reserved. DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software Corporation. Other DataCore product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other products, services and company names mentioned herein may be trademarks of their respective owners. ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED AS IS AND USERS MUST TAKE ALL RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND THE INFORMATION CONTAINED IN THIS DOCUMENT. NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND SHALL HAVE NO LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER INFORMATION REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED BY LAW. No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-readable form without the prior written consent of DataCore Software Corporation.