BEST PRACTICES GUIDE: Nimble Storage Best Practices for Networking


Contents

- Network Connectivity
- Management Network
- Data Network
- Choosing iSCSI Switches

Network Connectivity

This section will help you properly connect your Nimble arrays to a redundant Ethernet network to ensure optimal performance and availability. Nimble Storage arrays are designed with redundant controllers that provide highly available access to your storage in the event that the active controller fails.

In each of the associated diagrams you will see both solid lines and dashed lines. The solid lines represent active connections, while the dashed lines represent passive connections that will become active in the event of a Nimble controller failover. It is also important to wire each sibling interface on each controller to the same switch. For example, Controller A, Eth1 connects to Switch 1, and Controller B, Eth1 also connects to Switch 1.

Best Practice: To make wiring easier, match odd-numbered ports with Switch 1 and even-numbered ports with Switch 2 (odd-to-odd and even-to-even).

Nimble Storage arrays are typically configured to connect to a management network and to data networks.

Management Network

Management Network Diagram

Resiliency of the management network is important so that administrators retain access to the Nimble Storage arrays for management purposes. The management network is typically wired to the eth1 and eth2 ports, located as labeled in the Management Network Diagram. The Management IP address can float between any network ports that are designated as management ports.

Management IP Screen

Data Network

In general, you should configure the two stacked ports for management only, which leaves the remaining four 1 Gigabit ports or two 10 Gigabit ports available for the data network. While the Nimble Storage management features permit mixing management and data networks, this configuration is rarely needed and requires special care to configure properly. If you are unsure of your networking needs, contact Nimble Storage technical support for further assistance.

Best Practice: If the operating system attaching to Nimble Storage arrays permits choosing the load-balancing algorithm for multipath I/O, choose Least Queue Depth (LQD). The Least Queue Depth algorithm is superior to Round Robin because it takes pending I/O operations into consideration, avoiding overload of any particular connection.

1 Gigabit Network Wiring

Best Practice: Enable jumbo frames on data ports to maximize throughput. You must enable jumbo frames on each network connection, including the Nimble array, the switches, and the servers. If jumbo frames are not enabled on even one of these connections, you will not achieve the benefit of the larger Ethernet frame size.

10 Gigabit Network Wiring
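The end-to-end jumbo-frame requirement above can be checked from a Linux iSCSI initiator. This is a minimal sketch; the interface name `eth2` and the array data IP `192.168.20.50` are placeholders for your environment, and switch-side jumbo-frame configuration syntax varies by vendor.

```shell
# Set a 9000-byte MTU on the data-network interface (placeholder name).
ip link set dev eth2 mtu 9000

# Verify the path end to end: 8972 = 9000 bytes minus the 20-byte IP
# header and 8-byte ICMP header. "-M do" forbids fragmentation, so the
# ping succeeds only if every hop to the array accepts the full jumbo frame.
ping -M do -s 8972 -c 3 192.168.20.50
```

If the ping fails with a "message too long" error, at least one hop on the path is still configured for standard 1,500-byte frames.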

Choosing iSCSI Switches

Network switches are a critical part of an iSCSI storage area network. There are many different classes of switches, and it is important to understand the characteristics that make a switch suitable for iSCSI storage traffic. Use the following criteria when evaluating network switches:

Non-blocking Backplane: A switch used for iSCSI data communication should have a backplane that provides enough bandwidth to support full-duplex connectivity on all ports at the same time. For example, a 24-port Gigabit switch backplane should provide at least 48 Gigabits per second of bandwidth (1 Gbps x 2 for full duplex x 24 ports).

Flow Control (802.3x): Flow control provides a mechanism for temporarily pausing the transmission of data on Ethernet networks when a sending node transmits data faster than the receiving node can accept it. You should enable flow control on all host, switch, and array ports to ensure graceful communication between network nodes. Nimble Storage array NICs have flow control enabled by default.

Buffer Space per Switch Port: Switches carry communication between hosts and arrays, as well as Nimble Storage scaling communication between arrays. Each switch port should have at least 512 Kilobytes of buffer memory to ensure full performance between connected nodes.

Support for Jumbo Frames: Ethernet frames that transport data are typically 1,500 bytes in size. While that size balances application network traffic between clients and servers well, host-to-storage transfers tend to be measured in kilobytes. Jumbo frames, at 9,000 bytes, were created to better handle the flow of iSCSI SAN traffic. Enable jumbo frames to improve storage throughput and reduce latency.
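The backplane and buffer figures above follow from simple arithmetic, sketched here with the example values from the table (a 24-port Gigabit switch with 512 KB of buffer per port):

```shell
# Required backplane bandwidth for a non-blocking switch:
# ports * per-port speed * 2 (full duplex).
ports=24
gbps_per_port=1
backplane_gbps=$((ports * gbps_per_port * 2))
echo "$backplane_gbps"   # prints 48 (Gbps)

# Aggregate buffer memory implied by 512 KB per port, in MB.
buffer_mb=$((ports * 512 / 1024))
echo "$buffer_mb"        # prints 12 (MB)
```

The same arithmetic scales directly: a 48-port 10 Gigabit switch would need a 960 Gbps backplane to remain non-blocking.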

Can Disable Unicast Storm Control: Storage traffic can appear bursty, and some switches mistake it for a packet storm and block it. Disabling unicast storm control ensures that storage traffic is transmitted unimpeded.

Nimble Storage, Inc.
2740 Zanker Road, San Jose, CA 95134
Tel: 408-432-9600 (877-364-6253)
www.nimblestorage.com
info@nimblestorage.com

2012 Nimble Storage, Inc. All rights reserved. CASL is a trademark of Nimble Storage, Inc. BPG-NET-0313