N5 NETWORKING BEST PRACTICES




Table of Contents

NexGen N5 Networking
Overview of Storage Networking Best Practices
Recommended Switch Features for an iSCSI Network
Setting up the iSCSI Network for High Performance and High Availability
iSCSI SAN Topologies
Networking Best Practices Summary
iSCSI Network Best Practices Summary
General Application Setup for iSCSI Volume Access
Overview of NexGen N5 Networking Options
NexGen N5 Network Cabling Options
Network Cabling: 10GBase-T & 10GbE SFP+
Appendix A: NexGen N5 TCP/UDP Port Numbers

NEXGEN N5 NETWORKING

Overview of Storage Networking Best Practices

A high performance, high availability storage network can be built in many ways. For the NexGen N5 Hybrid Array, we recommend the following settings, which are explained in further detail throughout this guide:

- Implement a fault-tolerant switch environment with multiple redundant switches.
- Implement MPIO at the host for high availability and performance.
- Implement a high performing network for data (10GbE SFP+ or 10GBase-T RJ45).
- Implement separate data and management subnets.
- Implement separate subnets or VLANs for dedicated data bandwidth.
- Set/verify the individual ports on the switch, host, and storage to full-duplex mode.
- Enable Jumbo Frames on all ports to maximize throughput.
- Enable Flow Control on all ports.

Recommended Switch Features for an iSCSI Network

NexGen does not recommend a particular switch. However, the following is the minimum set of switch capabilities needed to optimize the operation and performance of the N5.

- 10Gb Ethernet Support: Full-duplex 10Gb Ethernet operation ensures the minimum network latency and highest throughput. To ensure the network is not the bottleneck for application server performance, implement end-to-end 10Gb from application server to storage.
- Non-blocking Backplane: Optimal iSCSI data communications require switches with a backplane that has enough bandwidth to support full-duplex connectivity at full utilization (line rate) for all ports at the same time.

- Buffer Cache: High performing iSCSI data communications require switches with at least 512 KB of buffer space for each port. Therefore, a 48-port switch needs at least 24 MB of buffer cache.
- Jumbo Frames: Support for Jumbo Frames ensures maximum performance for sequential read/write workloads.
- Flow Control: Flow Control ensures graceful communication between initiator and target.
- Switch Trunking: Use link aggregation of two or more 10Gb or 40Gb links to connect multiple switches together.
- Switch Features: Managed switch, Layer 2 switching, VLANs, and Spanning Tree Protocol.

Setting up the iSCSI Network for High Performance and High Availability

The NexGen N5 Hybrid Flash Array is equipped with four 1GbE management ports, two on each storage processor. By default, Management Port-1 is enabled for DHCP, while Management Port-2 is configured from the factory with a static IP address (details below). For data, each N5 is set up with four 10GbE data ports, two on each storage processor. The following is a summary of the out-of-factory network configuration:

- Mgmt. Port-2: Enabled with a static IP on both storage processors. Storage Processor A: 192.168.100.100; Storage Processor B: 192.168.100.200; Mask: 255.255.255.0; Gateway: none.
- Mgmt. Port-1: Enabled for DHCP on both storage processors.
- Data Ports 1-2: Disabled; no IP configuration.
- Lights Out Management Ports (accessed via Mgmt. Port-1): Disabled; no IP configuration.

Figure 1: Default Network Settings

Reference Appendix A, "TCP/UDP Inbound Ports Used for Normal SAN Operations," for additional information.
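As a quick sanity check before recabling, the factory-default management addressing above can be verified with Python's standard ipaddress module. This is a sketch; the addresses and mask are the Figure 1 defaults.

```python
import ipaddress

# Factory defaults from Figure 1: Mgmt. Port-2 on each storage processor.
SP_A_MGMT = ipaddress.ip_interface("192.168.100.100/24")
SP_B_MGMT = ipaddress.ip_interface("192.168.100.200/24")

# Both static management IPs land on the same subnet, so a laptop given
# any free address in 192.168.100.0/24 can reach either storage processor.
assert SP_A_MGMT.network == SP_B_MGMT.network
print(SP_A_MGMT.network)  # 192.168.100.0/24
```

The same module is handy for confirming that a temporarily assigned laptop address does not collide with either default before plugging in.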

To set up the network interfaces on the NexGen N5 Hybrid Flash Array, navigate to the Settings window and select the Network Addressing tab. Both the management and data ports on each storage processor in the system are listed in a dialog that allows you to select each port and configure it individually by clicking the Edit button.

Figure 2: Network Addressing screen

After clicking the Edit button, the Edit Network Interface Configuration window appears. Set the Mode (DHCP, Disabled, or Static), Address, Mask, Gateway, and Frame Size (1500 or 9000). Click Save Config Changes to save the information. The Validate Configuration region of the window can be used to test network connectivity.

Figure 3: Configuring a Management Port

Click the Edit button next to the specific data port that you want to configure. Set the Mode (DHCP, Disabled, or Static), Address, Mask, Gateway, and Frame Size (1500 or 9000). Click Save Config Changes to save the information.

Figure 4: Configuring a Data Port

The Validate Configuration region of the window can be used to test network connectivity by clicking the Ping Address option. If successful, the RTT (round-trip time) and Count (the number of successful round trips) are displayed.

Figure 5: Configuration Success screen
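Outside the array's built-in ping, similar reachability information can be gathered from an application server with a TCP connect timer. This is a sketch: the target address is whatever you assigned above, and using port 3260 assumes the iSCSI target port is listening.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 3260, timeout: float = 2.0) -> float:
    """Time a TCP three-way handshake to host:port and return it in
    milliseconds. This approximates one network round trip, similar to
    the RTT reported on the Validate Configuration screen."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

# Example against a configured data port (hypothetical address):
# print(f"{tcp_rtt_ms('10.10.1.10'):.2f} ms")
```

A raised exception (timeout or connection refused) is itself useful: it tells you the address is unreachable or the iSCSI service is not listening.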

iSCSI SAN Topologies

There are two recommended iSCSI SAN topologies for use with the NexGen N5 Storage System:

- Single IP SAN network (single subnet or single VLAN)
- Dual IP SAN networks (two subnets or two VLANs)

In the Single IP SAN network topology, all data ports on both storage processors are configured with IP addresses on the same IP SAN network. Application servers are also connected to the single IP SAN network, and volumes are connected via iSCSI with either a single session or multiple sessions (MPIO). The following logical network diagrams depict how to set up single and dual IP SAN network configurations with the NexGen N5 Storage System.

Figure 6: Single IP SAN Network Configuration
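One way to tell which of the two topologies a set of data-port addresses implies is to count the distinct subnets they occupy. A sketch using the standard ipaddress module; the example addresses are hypothetical.

```python
import ipaddress

def distinct_san_subnets(addresses, prefix=24):
    """Group data-port IPs into /prefix networks. One network implies a
    Single IP SAN topology; two implies a Dual IP SAN topology."""
    return {
        ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        for ip in addresses
    }

# Hypothetical Dual IP SAN layout: one data port per storage processor
# on each of two subnets.
ports = ["10.10.1.10", "10.10.2.10", "10.10.1.11", "10.10.2.11"]
print(len(distinct_san_subnets(ports)))  # 2
```

The same check run against a host's iSCSI portal list can catch a data port that was accidentally addressed outside the SAN subnets.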

Figure 7: Dual IP SAN Network Configuration

For optimal performance and availability, it is recommended that the application servers use MPIO. The best MPIO option is to configure two or more paths from the application server to the NexGen N5 Hybrid Flash Array in an Active-Active MPIO mode. At a minimum, there should be two paths across two NICs in the application server, connected to both storage processors.

The recommended physical network topology for the IP SAN network(s) provides redundant physical paths for the volume connections made from the application servers to the storage. This is easily done with multiple switches in the environment connected to multiple NICs in the application server and storage. Below is an example of a two-switch physical configuration. The switches are trunked together so that the single IP SAN network can span both switches. If a dual IP SAN network is implemented, the trunk links between the switches are not necessary unless other VLANs are being spanned across the switches.

Figure 8: Two-Switch Physical Topology in a Single IP SAN Network Configuration
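The path math in the MPIO recommendation is just the cross product of host NICs and storage processors; a minimal sketch (the NIC and SP names are hypothetical labels):

```python
from itertools import product

host_nics = ["nic0", "nic1"]            # two NICs in the application server
storage_processors = ["SP-A", "SP-B"]   # both N5 storage processors

# Active-Active MPIO: every host NIC gets a session to every storage
# processor, so losing any one NIC, cable, or SP leaves paths alive.
paths = list(product(host_nics, storage_processors))
print(len(paths))  # 4 paths; the stated minimum is 2
```

This is why two NICs connected to both storage processors already satisfy the "at least two paths" minimum while tolerating a single failure on either end.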

Networking Best Practices Summary

- Logical Network Topologies: Implement a Single IP SAN or Dual IP SAN network topology.
- Physical Network Topologies: Implement redundant switches. In a Single IP SAN implementation, the two switches must be trunked together. In a Dual IP SAN implementation, the logically separated networks should also be on physically separate switches for redundancy. Use core switch topologies with multiple high-bandwidth, low-latency trunk links that do not require the use of Spanning Tree.
- MPIO: Use host-based MPIO. Set up at least two paths to a volume. Use MPIO ALUA with a Round-Robin path selection policy.
- Jumbo Frames: Use caution when implementing Jumbo Frames. Configure Jumbo Frame support on all switches between the application servers and the NexGen N5, and enable Jumbo Frames on all application server and storage network interfaces connected to the IP SAN network(s). There is no need to configure Jumbo Frames on the NexGen N5 Management Ports unless you are using replication. Proper configuration of Jumbo Frames should yield anywhere from 0-20% performance benefit, depending on the workload; misconfiguration can result in a negative performance impact.
- Flow Control: Enable Flow Control on all switches and switch ports connected to the IP SAN network(s). Flow Control is good practice for optimal iSCSI performance on 10 Gigabit Ethernet networks. Enable tx/rx flow control in the application server and VM environment NIC configurations if not on by default. Flow Control is enabled by default on the NexGen N5 NICs and does not need to be configured there.
- Link Aggregation Technologies (LACP, MC-LAG, Virtual Port Channels, etc.): Link aggregation technologies cannot be used with the NexGen N5 network ports for management or data. Use MPIO at the host to provide path redundancy and improved performance.

Link aggregation technologies (LACP, MC-LAG, Virtual Port Channels) should, however, be used for trunking switches together in order to achieve connection reliability and higher performance (bandwidth). Consult the switch vendor documentation for proper setup.

iSCSI Network Best Practices Summary

- Data Ports and Management Ports: Management and data ports should be configured on separate networks. Data ports should use static IP addresses on a dedicated IP SAN network, ideally isolated from all other traffic. Network ports that are not being used should be set to Disabled.
- Flow Control: Flow Control should be enabled on all switches and ports that will carry iSCSI traffic when using 10 Gigabit Ethernet hosts and storage. Enable tx/rx flow control in the application server NIC configurations if not on by default. Flow Control is enabled by default on the NexGen N5 NICs and does not need to be configured there.
- Jumbo Frames: Data ports should be enabled for Jumbo Frames (9000 frame size) only if all switches, switch ports, and application servers connected to the IP SAN network are configured for Jumbo Frames. Misconfiguration of Jumbo Frames can have a negative performance impact. Configure Jumbo Frame support on the switches first, followed by the application server iSCSI network interfaces. There is no need to configure Jumbo Frames on the NexGen N5 Management Ports.
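The end-to-end requirement for Jumbo Frames can be expressed as a simple check over every device on the SAN path. A sketch; the device names and MTU values are hypothetical.

```python
def jumbo_frames_safe(mtus: dict) -> bool:
    """True only if every device on the iSCSI path supports a 9000-byte
    MTU. Enabling Jumbo Frames with even one 1500-byte hop in the path
    is exactly the misconfiguration the guidelines warn about."""
    return all(mtu >= 9000 for mtu in mtus.values())

# Hypothetical inventory: one switch was never reconfigured.
san_path = {
    "server-nic": 9000,
    "switch-1": 9000,
    "switch-2": 1500,
    "n5-data-port": 9000,
}
print(jumbo_frames_safe(san_path))  # False: switch-2 was missed
```

Running a check like this against an inventory export before flipping Frame Size to 9000 on the data ports follows the recommended order: switches first, then host interfaces, then storage.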

General Application Setup for iSCSI Volume Access

The NexGen N5 Hybrid Flash Array presents data volumes for access by application servers via the iSCSI protocol with Asymmetric Logical Unit Access (ALUA) enabled. The following general setup guidelines apply to the iSCSI initiators available for the most common operating systems.

- Use multiple iSCSI discovery addresses (portals) for high availability. Configure at least two discovery addresses in the application server's iSCSI initiator for accessing volumes on the N5 Storage System. For high availability of iSCSI discovery, specify at least two (preferably four) discovery IP addresses that correspond to data port IP addresses on each storage processor in the NexGen N5. All volumes on the NexGen N5 are advertised for discovery on all data ports; volumes are NOT advertised for discovery on the management ports.
- Use MPIO for volume connectivity high availability. If the host operating system and iSCSI initiator support MPIO, configure at least two iSCSI sessions per volume: one session connected to Storage Processor-A and the other connected to Storage Processor-B.
- Use a Round-Robin MPIO policy for optimal host connectivity performance. If the host operating system and iSCSI initiator support MPIO, use MPIO ALUA with a Round-Robin path selection policy.
- Use IQN (iSCSI Qualified Name) based iSCSI security for volume access. Each iSCSI initiator has a unique IQN on the iSCSI network. Specify the IQN(s) of the application servers in the Host Access Group and assign volumes to that group. The Allow-all Access Host Access Group is not recommended for use on production server volumes.
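Because Host Access Groups key on initiator IQNs, it can be useful to sanity-check an IQN string before adding it to a group. This sketch implements only a simplified form of the iqn. naming shape, not a full RFC 3720 validator, and the example IQN is hypothetical.

```python
import re

# Simplified IQN shape: "iqn." + year-month + "." + reversed domain name,
# optionally followed by ":" and a unique initiator/target string.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9]([a-z0-9.-]*[a-z0-9])?(:.+)?$")

def looks_like_iqn(name: str) -> bool:
    """Loose structural check for an iSCSI Qualified Name."""
    return IQN_RE.fullmatch(name) is not None

print(looks_like_iqn("iqn.2010-04.com.example:app-server-01"))  # True
print(looks_like_iqn("eui.02004567A425678D"))                   # False (EUI format, not IQN)
```

A check like this catches copy-paste truncation before a mistyped IQN silently denies a host access to its volumes.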

Overview of NexGen N5 Networking Options

Every NexGen N5 model ships with the same network complement: four 1Gb RJ45 management ports, plus redundant data port NICs ordered as either 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45.

- N5-200: 4x 1Gb RJ45 management; 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45 data
- N5-300: 4x 1Gb RJ45 management; 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45 data
- N5-500: 4x 1Gb RJ45 management; 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45 data
- N5-1000: 4x 1Gb RJ45 management; 4x 1/10GbE SFP+ or 4x 1/10GBase-T RJ45 data

NexGen N5 Network Cabling Options

- 1GbE Management (4 ports): Cat6 or better (RJ45); buy cables only.
- 1GbE Data (4 ports): Cat6 or better (RJ45) into SFP+ NICs; buy the GBIC SKU plus cables.
- 1GBase-T Data (4 ports): Cat6 or better (RJ45) into 10GBase-T NICs; buy cables only.
- 10GbE Data (4 ports): SFP+ Twinax into SFP+ NICs; buy cables only.
- 10GbE Data (4 ports): SFP+ Optical (OM3 or better) into SFP+ NICs; buy cables with optic adapters/modules.
- 10GBase-T Data (4 ports): Cat6a or better (RJ45) into 10GBase-T NICs; buy cables only.

Network Cabling: 10GBase-T & 10GbE SFP+

- 10GBase-T: 10 Gb/s; max distance 100 m; latency per link 2.6 µs; power 2.7 W
- 10Gb SFP+ SR Optical: 10 Gb/s; max distance 300 m; latency per link 0.3 µs; power 0.7 W
- 10Gb SFP+ Twinax Passive: 10 Gb/s; max distance 10 m; latency per link 0.3 µs; power 0.7 W
- 10Gb SFP+ Twinax Active: 10 Gb/s; max distance 25 m; latency per link 0.3 µs; power 0.7 W

Figure 9: 10GBase-T Cat7 Cable
Figure 10: 10GbE SFP+ Twinax Cable
Figure 11: 10GbE SFP+ Optical Cable
Figure 12: 10GbE SR Optical GBIC
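The cabling table above can drive a simple media-selection check: given a cable run length, which options reach? A sketch with the table's distance limits hard-coded:

```python
# Maximum link distances in meters, from the cabling table above.
MAX_DISTANCE_M = {
    "10GBase-T": 100,
    "10Gb SFP+ Optical": 300,
    "10Gb SFP+ Twinax Passive": 10,
    "10Gb SFP+ Twinax Active": 25,
}

def viable_media(run_length_m: float) -> list:
    """Return the cable types whose rated max distance covers the run."""
    return sorted(t for t, d in MAX_DISTANCE_M.items() if run_length_m <= d)

# An 80 m run rules out both Twinax options.
print(viable_media(80))  # ['10GBase-T', '10Gb SFP+ Optical']
```

For in-rack runs under 10 m, any option works and the choice usually comes down to the latency and power figures in the table.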

Appendix A: NexGen N5 TCP/UDP Port Numbers

- TCP 22 (SSH): Secure Shell access, for SAN support only; not meant for normal day-to-day operations.
- TCP 80 (HTTP): Permits intermediate network elements to improve or enable communications between clients and servers.
- UDP 123 (NTP): Network Time Protocol, used for time synchronization.
- UDP 161 (SNMP): Simple Network Management Protocol.
- TCP/UDP 162 (SNMPTRAP): Simple Network Management Protocol trap.
- TCP 443 (HTTPS): Hypertext Transfer Protocol over SSL/TLS.
- TCP 860 (iSCSI): iSCSI system port.
- TCP 3260 (iSCSI target): iSCSI target port.
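When building a firewall rule set from Appendix A, the allow-list can be encoded directly; a sketch whose (protocol, port) pairs are exactly the table above:

```python
# (protocol, port) pairs used by the N5 for normal SAN operation,
# taken from Appendix A.
SAN_PORTS = {
    ("tcp", 22), ("tcp", 80), ("udp", 123), ("udp", 161),
    ("tcp", 162), ("udp", 162), ("tcp", 443), ("tcp", 860),
    ("tcp", 3260),
}

def allowed(protocol: str, port: int) -> bool:
    """True if the flow matches a port the N5 uses for SAN operations."""
    return (protocol.lower(), port) in SAN_PORTS

print(allowed("tcp", 3260))  # True: the iSCSI target port
print(allowed("tcp", 23))    # False: telnet is not used by the array
```

Checking proposed ACLs against this set before deployment helps avoid blocking NTP or SNMP traps while locking down the management network.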