Networking Best Practices for Dell DX Object Storage
A Dell Technical White Paper
Dell Storage Engineering

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell. Dell, the DELL logo, the DELL badge, and PowerConnect are trademarks of Dell Inc. Microsoft and Windows are registered trademarks of Microsoft Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own. July 2011

Contents

- Purpose of This Document
- Key Focus Areas
- Prerequisites
- DX Object Storage Cluster
- Dataflow to the Cluster
- Application Server Access to the Cluster
- Remote Replication
- Network Topology Requirements
- Basic Cluster Configuration
- Required Information to Set Up the Cluster
- Class C and Class B Networks
- Reserved IP Ranges
- Multiserver and IP Addresses
- Secondary CSN
- NTP and DNS Considerations
- Questions to Ask Before Configuring the Network
- Switches Best Practices
- Whether to Use a Dedicated Switch or VLAN
- Switch Configuration

Figures

- Figure 1. Standard Cluster Setup with Remote Replication
- Figure 2. IP Addressing for Standalone Cluster


Purpose of This Document

This document describes how to configure a network for optimal performance of the Dell DX Object Storage system without negatively impacting the existing network. The recommendations in this document apply to networking only as it relates to Dell DX Object Storage in standard configurations.

Key Focus Areas

There are three key areas of which you must be cognizant when deploying a DX Object Storage cluster:

- the cluster
- the network architecture
- the Layer 3 switch(es) to which the cluster connects

NOTE: DX Object Storage requires connection to a Layer 3 switch. Workgroup (Layer 2) switches provide only switching and do not offer the necessary routing functionality.

Each of these areas contains variables that can affect the performance of the cluster, the overall network, or both.

Prerequisites

This document assumes at least an intermediate understanding of networking, including the following:

- common IP addressing schemes
- subnetworks and VLANs
- DHCP and other network services

DX Object Storage Cluster

Dataflow to the Cluster

A Dell DX Object Storage cluster consists of at least one Cluster Services Node (CSN) and two or more Storage Nodes. Reads, writes, and info requests are handled through a bidding process that takes place over a multicast protocol, as shown in the following example of a write to the cluster:

1. An application sends a write request directly to the cluster -- more specifically, to a single Storage Node. This node, which can be designated or randomly assigned, is referred to as the Primary Access Node (PAN).
2. The PAN forwards the request via multicast to all other Storage Nodes in the cluster.
3. All Storage Nodes in the cluster bid to serve the request. The Storage Node with the most capacity and the least activity wins the bid.
4. The PAN notifies the application of the IP address of the Storage Node that won the bid.
5. The application writes the data object directly to the Storage Node that won the bid.
6. Once the write is complete, the Storage Node sends out a multicast request to write at least one replica of the object. The bidding process repeats for each replica that needs to be created, and the Storage Node that received the original copy sends a replica to each node that won a replica bid.
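The bid exchange itself is internal to the DX Storage software (SCSP over UDP multicast), so the following Python sketch is only a conceptual model of the selection rule described above -- the node with the most free capacity and the least activity wins. The node addresses and figures are invented for illustration.

    # Conceptual illustration only: the real bidding runs inside the DX Storage
    # software over UDP multicast; the names and numbers here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Bid:
        node_ip: str
        free_capacity_gb: float   # more free capacity is better
        activity_load: float      # less in-flight activity is better

    def pick_winner(bids):
        # Favor the node with the most free capacity and the least activity.
        return max(bids, key=lambda b: (b.free_capacity_gb, -b.activity_load))

    bids = [
        Bid("172.16.1.10", free_capacity_gb=820.0, activity_load=0.40),
        Bid("172.16.1.11", free_capacity_gb=950.0, activity_load=0.15),
        Bid("172.16.1.12", free_capacity_gb=950.0, activity_load=0.35),
    ]

    winner = pick_winner(bids)
    print(f"PAN would refer the client to {winner.node_ip}")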

These multicast communications take place entirely among the Storage Nodes in the cluster and do not involve any interaction with other network services. Because of the multicast traffic, Storage Nodes should be placed on a separate, private network. Configurations with remote clusters require a public network connection between the CSNs, which use Content Router to manage asynchronous replication, or SCSP Proxy to manage synchronous replication, between the clusters.

NOTE: See the DX6000 manuals page at support.dell.com for whitepapers on configuring synchronous and asynchronous replication.

Figure 1 shows a standard primary cluster on a private network that replicates synchronously to a remote cluster via SCSP Proxy over the public network.

Figure 1. Standard Cluster Setup with Remote Replication

Application Server Access to the Cluster

Dell-approved applications connect to the cluster using any of several methods, including SCSP Proxy, mDNS, DNS round robin, a pool of static IP addresses, or a single static IP address. Dell recommends setting up the client to use mDNS or SCSP Proxy to determine the list of Storage Nodes and their IP addresses. Other methods require maintaining a pool of addresses, or risk a single point of failure if a single designated Storage Node address is used.

NOTE: To enable faster communication with the cluster and reduce DHCP traffic on the network, Dell recommends setting static IP addresses for the application server's public and private NICs.
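For clients that use DNS round robin or a static address pool rather than mDNS or SCSP Proxy, the application typically resolves one name to several Storage Node addresses and retries against another address if the first is unreachable. The sketch below shows one way a client might gather that address pool with the Python standard library; the host name dx-cluster.example.com is a placeholder, not a value from this document.

    # Hypothetical example: resolve a round-robin DNS name to the pool of
    # Storage Node addresses an application could try in turn.
    import socket

    def resolve_pool(name, port=80):
        """Return all IPv4 addresses published for a round-robin DNS name."""
        infos = socket.getaddrinfo(name, port, family=socket.AF_INET,
                                   type=socket.SOCK_STREAM)
        # Deduplicate while preserving the order the resolver returned.
        seen, pool = set(), []
        for _, _, _, _, (addr, _) in infos:
            if addr not in seen:
                seen.add(addr)
                pool.append(addr)
        return pool

    if __name__ == "__main__":
        print(resolve_pool("dx-cluster.example.com"))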

WARNING: An application server that connects to the cluster should not be set to PXE boot. Verify in the BIOS settings that PXE boot is NOT enabled on the application server.

Remote Replication

Dell supports remote replication through the Content Router on the CSN. See the DX6000 manuals page at support.dell.com for whitepapers on configuring synchronous and asynchronous replication.

Network Topology Requirements

Dell DX Object Storage resides in the back end of the IT environment, and all Storage Nodes in the cluster should be separated from the rest of the network. The Cluster Services Node, however, requires access to both the public and private networks. The Storage Nodes must reside solely on a private network. Placing Storage Nodes on the public network is an unsupported configuration, for the following reasons:

- DX Storage Nodes communicate with each other through User Datagram Protocol (UDP) and IP multicast -- not only when bidding during a write request, but also continuously as part of the health check processes. These communications can significantly impact performance for your entire network; likewise, network communications not related to the cluster can impact the performance of the cluster.
- DX Storage Nodes boot from a PXE image that is available through the CSN on the private network. If clients are on the same network as the Storage Nodes, they may try to PXE boot off the CSN as well. This can make the client system unbootable unless it is shut down before the boot process completes.
- Data on a public network is more vulnerable to the outside world. It should be secured within a private network.

All Storage Nodes in the cluster must reside on the same IP subnet, and because the CSN also participates in UDP and multicast communication, it must be able to access the same subnet. The standard CSN configuration has four ports: two bonded and connected to the private network, and two bonded and connected to the public network. A Storage Node typically contains two ports, both of them bonded and connected to the private network. If expansion cards are used, the CSN automatically divides and bonds ports for the public and private networks; a Storage Node automatically bonds all of its ports into one connection to the private network.

IMPORTANT: Ensure that all Storage Nodes are cabled exclusively to the private network, and that two ports from the CSN are also connected to the private network.

NOTE: The bonds on the CSN and Storage Nodes are configured automatically and should not be altered.
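Because all Storage Nodes and the CSN's private bond must share one IP subnet, it can be worth validating a planned address list before cabling. The following sketch checks that every planned node address falls inside the same private subnet; the subnet and addresses are illustrative examples consistent with the 172.16.0.0 private network used later in this document, not required values.

    # Minimal sketch: confirm planned node addresses all sit in one private subnet.
    import ipaddress

    private_subnet = ipaddress.ip_network("172.16.0.0/16")

    planned_nodes = {
        "csn-private-bond": "172.16.0.2",
        "storage-node-1":   "172.16.3.10",
        "storage-node-2":   "172.16.3.11",
    }

    for name, addr in planned_nodes.items():
        ip = ipaddress.ip_address(addr)
        if ip not in private_subnet:
            raise SystemExit(f"{name} ({addr}) is outside {private_subnet}")
        print(f"{name}: {addr} is inside {private_subnet}")

    # ipaddress also confirms the range is reserved (RFC 1918) space.
    print("RFC 1918 private space:", private_subnet.is_private)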

Basic Cluster Configuration

The following illustration shows a DX Object Storage configuration in relation to IP addressing. The following sections assume:

- 192.168.0.0 is the public network
- 172.16.0.0 is the private storage network

Figure 2. IP Addressing for Standalone Cluster (a Layer 3 switch connects the application server on the public network with the Cluster Services Node and Storage Nodes on the private network)

Required Information to Set Up the Cluster

Configuring a DX Object Storage cluster requires the following information, which should be gathered before beginning the installation, as part of the DX design and the DX pre-site survey:

- Public IP address for the CSN. Half of the ports on the CSN are automatically bonded to this address.
- Public IP address of the cluster. In the event of a failover, this address resides on the secondary CSN so the cluster can still be accessed.
- A cluster name that is a fully qualified domain name resolvable by the corporate Domain Name Server (DNS). This name is used for automatic lookups on Microsoft DNS. All domain names must be resolvable, either by the corporate DNS or by an external DNS.
  NOTE: If the cluster cannot access a DNS server, the cluster cannot integrate with Microsoft Active Directory Services. The DNS server must support both forward and reverse lookups.
- Subnet mask of the public network associated with the cluster.
- IP address of the gateway associated with the public IP address.

- IP addresses of DNS servers -- one or more DNS servers for the public interface.
- IP address or name of NTP server(s). Use a minimum of two NTP servers, in case one clock source is unreachable or less reliable. Ideally, use an NTP server on the public network to which the cluster is connected. Otherwise, use the internal network address of the CSN itself as a time server by setting the value in the cluster configuration file (/var/opt/caringo/netboot/content/cluster.cfg).
  NOTE: Do not use a Microsoft domain controller as a time source; the domain controller may not be properly configured to locate alternate time sources if its primary time source becomes unavailable. Instead, use a publicly available source such as pool.ntp.org.

The Internet Assigned Numbers Authority (IANA) has reserved the following address ranges for non-Internet (private) networks:

- 192.168.0.0/16 (192.168.0.0 through 192.168.255.255). This range is typically used with a 24-bit (Class C) subnet mask. A classless inter-domain routing (CIDR) allocation with a smaller subnet mask may not work reliably in some environments.
- 172.16.0.0/12 (172.16.0.0 through 172.31.255.255). This range is typically used with a 16-bit (Class B) subnet mask. A CIDR allocation with a smaller subnet mask may not work reliably in some environments.
- 10.0.0.0/8 (10.0.0.0 through 10.255.255.255).

WARNING: Storage Nodes must be on a network separate from the public network. If clients are on the same network, they may try to PXE boot, which can render a client unbootable unless it is shut down before the boot process completes. For example, if your corporate network uses 192.168.0.0 addresses, the Storage Nodes cannot be on the 192.168 network.

Class C and Class B Networks

The DX Object Storage system supports Class C and Class B networks. Class C networks allow 254 host addresses per network; the first octet range is 192-223. Class B networks allow 65,534 host addresses; the first octet range is 128-191.

NOTE: When configuring the CSN, you will be asked during the automatic NIC bonding to designate a network address for a large or small network. Dell recommends using a large (Class B) network address, as your cluster may grow beyond Class C capacity.

WARNING: When providing the network address for the NIC bonding, it is critical to provide a network that is separate from the public network the CSN is using. Otherwise, the bonding process may try to use IP addresses in the same range that DHCP is already using to distribute temporary IP addresses.
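Before installation, the planned public and private networks can be sanity-checked against the guidance above. The sketch below assumes the example networks used in this document -- a /24 public network and a /16 private storage network -- and the exact prefix lengths are planning assumptions, not requirements.

    # Sketch of a pre-installation addressing check.
    import ipaddress

    public_net  = ipaddress.ip_network("192.168.0.0/24")
    private_net = ipaddress.ip_network("172.16.0.0/16")

    # The private storage network must be reserved (RFC 1918) space and must
    # not overlap the public network the CSN uses.
    assert private_net.is_private, "storage network should use reserved space"
    assert not public_net.overlaps(private_net), "public/private ranges overlap"

    # Usable host counts: 254 for a Class C (/24), 65,534 for a Class B (/16).
    print("public usable hosts: ", public_net.num_addresses - 2)
    print("private usable hosts:", private_net.num_addresses - 2)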

Reserved IP Ranges

By default, the DX Object Storage software reserves half of the available IP addresses for static addresses. The remaining addresses are assigned through DHCP as the Storage Nodes boot and join the cluster. The following table shows the reserved IP address ranges:

    Network Size    CSN           3rd Party     DHCP          Netboot
    Class C (/24)   x.y.z.0-16    x.y.z.17-32   x.y.z.33-48   x.y.z.49-254
    Class B (/16)   x.y.0.0-254   x.y.1.0-254   x.y.2.0-254   x.y.3.0-x.y.255.254

The CSN range is for the various services on the primary and secondary CSNs. The 3rd Party range is for assigning static IP addresses to applications that must run on the private network to interface with the cluster. The DHCP range provides an initial IP address to each Storage Node during its first boot, until the CSN can assign it a permanent address. Other applications using the CSN's DHCP server on the private network reduce the number of available IP addresses, and potentially the number of Storage Nodes that can be booted at the same time. The Netboot range provides permanent IP addresses for all Storage Nodes; once a Storage Node receives its Netboot address, that address stays with the node on subsequent reboots.

Multiserver and IP Addresses

Because Storage Nodes are automatically divided into multiserver nodes, each multiserver requires its own IP address. DX6012S Storage Nodes require three IP addresses, and DX6004S Storage Nodes use two IP addresses. When setting up the CSN, consider the total number of multiserver IP addresses when choosing to deploy the private network as Class B (65,534 usable addresses) or Class C (254 usable addresses).

Secondary CSN

A secondary CSN provides redundancy for all services on the primary CSN in the event of a disaster, and also provides scalability for incoming SCSP requests to the SCSP Proxy. During configuration of the secondary CSN, do not designate it as a primary CSN. Having two CSNs designated as primary can cause network conflicts with the DHCP and Netboot services. (DHCP is not started on a secondary CSN.)

NTP and DNS Considerations

Storage Nodes receive Network Time Protocol (NTP) and Domain Name Server (DNS) services through the CSN, which inherits these services from other sources on the public network. During configuration, you will be asked to provide addresses for DNS and NTP servers. If you do not enter the name of a DNS or NTP server on your network, the CSN can use externally available sources, such as pool.ntp.org for NTP or 8.8.8.8 for Google Public DNS.

WARNING: It is imperative that a valid NTP time source be designated and available at all times. The DX Storage Nodes will not boot if an invalid time source is set. For legal (data compliance) reasons, it is very important to ensure that all servers use a valid, synchronized time source.
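Because an unreachable or invalid time source prevents Storage Nodes from booting, it can be useful to confirm from a host on the public network that the chosen NTP name resolves and answers before installation. The sketch below sends a standard SNTP query; pool.ntp.org is the public source suggested above, and the check itself is a suggested pre-validation step, not a documented DX procedure.

    # Minimal sketch: confirm a named NTP source resolves in DNS and answers
    # an SNTP query. Run from a host that has the same DNS/NTP reachability
    # the CSN will have.
    import socket
    import struct

    NTP_SERVER = "pool.ntp.org"
    NTP_EPOCH_OFFSET = 2208988800     # seconds between the 1900 and 1970 epochs

    # Forward DNS lookup (the CSN needs working DNS to reach named time sources).
    addr = socket.gethostbyname(NTP_SERVER)
    print(f"{NTP_SERVER} resolves to {addr}")

    # Build a 48-byte SNTP request: LI=0, version 3, mode 3 (client).
    request = b"\x1b" + 47 * b"\x00"

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(request, (addr, 123))
        reply, _ = sock.recvfrom(48)

    # Transmit timestamp: the seconds field sits at bytes 40-43 of the reply.
    ntp_seconds = struct.unpack("!I", reply[40:44])[0]
    print("Server time (Unix seconds):", ntp_seconds - NTP_EPOCH_OFFSET)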

Questions to Ask Before Configuring the Network

When deploying DX Object Storage, there are numerous decision points about the network and DX Object Storage. These should be addressed as part of a thorough design developed during the sales and consulting process.

How many Storage Nodes are planned for the cluster?
Each Storage Node uses two bonded NICs. The number of Storage Nodes deployed or planned for growth can determine whether you can leverage an existing switch at the site or must install a separate switch. If installing a new switch, the number of ports required for Storage Nodes can determine whether to dedicate that switch to the private storage network or segment it into public and private VLANs. (A sizing sketch follows this section.)

If using existing switches, how are they currently configured?
A switch that is connected to a DX Object Storage cluster has several configuration requirements (see Switch Configuration in this document). You need to understand whether a currently used switch can be configured for DX Object Storage without affecting the rest of the infrastructure. If the existing switch cannot be configured as needed, you will need to install a separate switch for the cluster, or understand the necessary tradeoffs if you choose to use the non-optimal existing switch settings. For example, if Flow Control is not enabled on the currently installed switch, there may be dropped packets and resulting latency; however, the buffer on the existing switch may be large enough that there is no overflow.

Where are the application servers on the network?
Dell recommends that application servers reside on the private network. Any application server on the private network will use static IP addresses (recommended) or the DHCP functionality of the CSN, further reducing the number of available IP addresses. This is unlikely to be an issue when deploying a Class B network, but within the confines of a Class C network (254 usable addresses) you could run out of addresses if deploying a large number of Storage Nodes.

How should the network be configured for remote replication?
Remote replication can run through the Content Router on the CSN, or directly to the Storage Nodes on the remote cluster. If the data replicates through the CSN, you need to understand how Content Router communicates with the nodes on the local cluster, and the address it uses to communicate with the Content Router on the remote CSN. If using remote peer-to-peer replication (Storage Node to Storage Node), how will the connections be routed? This can be done either by changing static routes on the CSN at each site, or by setting static routes at the switch level.
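As a planning aid for the first question above, the following sketch estimates whether a planned node count fits the Netboot range of a Class C private network. The per-node IP counts follow the Multiserver and IP Addresses section (DX6012S uses three addresses, DX6004S uses two); the planned quantities are invented examples.

    # Sizing sketch: do the planned nodes fit a Class C private network?
    NETBOOT_ADDRESSES_CLASS_C = 254 - 49 + 1   # x.y.z.49 through x.y.z.254

    planned = {
        "DX6012S": {"count": 40, "ips_per_node": 3},   # example quantities
        "DX6004S": {"count": 20, "ips_per_node": 2},
    }

    needed = sum(p["count"] * p["ips_per_node"] for p in planned.values())
    print(f"Netboot addresses needed: {needed}")
    print(f"Netboot addresses available (Class C): {NETBOOT_ADDRESSES_CLASS_C}")

    if needed > NETBOOT_ADDRESSES_CLASS_C:
        print("Plan a Class B private network.")
    else:
        print("A Class C private network can accommodate this plan.")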

Switches Best Practices

Network switch configuration depends on both the network topology and the customer environment. To ensure proper functioning with default network parameters and to maximize cluster performance, follow the best practices listed below when configuring both the private and public network switches.

Whether to Use a Dedicated Switch or VLAN

You may be able to add DX Object Storage to an existing switch. However, you need to consider the following:

- DX Object Storage requires specific settings on the switch. If these settings are compatible with the existing infrastructure, you can use the existing switch. If there are differences that must be enabled across the entire switch (and not just in a VLAN), then you need a dedicated switch for the cluster.
- Is the switch segmented -- or can it be segmented with little or no disruption -- to support public and private networks (VLANs)? If not, you need another switch, either to provide the required extra VLAN or to be segmented and dedicated to the cluster.
- Will there be enough ports for the Storage Nodes and the planned growth of the cluster? Assuming the existing switch meets all other requirements but only three ports are left on the segment that will support the Storage Node subnet, will that be enough to accommodate the estimated number of Storage Nodes that will be added in the foreseeable future? If the answer is no, you need a dedicated switch. Furthermore, when segmenting the switch, make sure that the vast majority of ports are allocated to the private network for the Storage Nodes.

Switch Configuration

Use the following guidelines for any switch to which the CSN or Storage Nodes connect. Depending on the number of Storage Nodes, object size, and number of replicas, some of these settings can dramatically affect network performance:

- Disable link aggregation configuration. DX Object Storage Nodes bond the system NIC ports in balance-alb mode.
  NOTE: If deploying a DX Object Storage Gateway solution and you choose to use Link Aggregation Control Protocol (LACP) on the 6000G Cluster File Server instead of adaptive load balancing (balance-alb, bonding mode 6), you will need to configure link aggregation on the switch.
- If using jumbo frames (more than a 1500-byte payload), increase the networkmtu (maximum transmission unit) parameter in the cluster.cfg file to match the jumbo frame default payload (9000 bytes). The cluster.cfg file is located at /var/opt/caringo/netboot/content/cluster.cfg. Do not use super jumbo frames. If networkmtu exceeds the default jumbo frame payload (typically 9,000 bytes), the networkmtu value is the one that will be used; in general, networkmtu should not exceed the jumbo frame default, as doing so typically degrades performance.
  NOTE: Before you change the default MTU value, make sure the node's network interfaces and all other network hardware support the new MTU value. If the hardware does not support the new MTU value, Storage Nodes may not be able to replicate objects and might not be able to communicate with each other. (A sketch for verifying the path MTU follows this list.)

- Disable storm control. Storm control monitors incoming broadcast traffic, unknown unicast traffic, or both, and compares it with a level that you specify; if the traffic exceeds the specified level, packets for the controlled traffic types are dropped. This affects transmissions from the application server to the primary access Storage Node.
- Consider trunking ports to aggregate throughput and increase performance.
- If the cluster uses multiple switches, disable Spanning Tree Protocol if the switches are not trunked. If the switches are trunked, enable Spanning Tree Protocol and PortFast on the data-intensive ports.
- Disable Flow Control and similar quality-of-service or traffic-shaping controls. In general, switches with larger buffers (such as the Dell PowerConnect 7xxx series and above) are better utilized with Flow Control disabled. On switches with smaller buffers, enabling Flow Control helps prevent packets from being dropped and the resulting latency. However, using jumbo frames and increasing networkmtu (see above) should compensate for the smaller buffer when Flow Control is not used.
- At this time, IGMP snooping is not supported on DX Object Storage Nodes. Dell recommends disabling IGMP snooping only on the VLAN for the private network that carries the cluster's multicast traffic. If IGMP snooping cannot be disabled on that VLAN alone, it must be disabled for the entire switch. If IGMP snooping is disabled for the entire switch, Dell recommends dedicating that switch to the DX Object Storage private network; this prevents multicast packet floods from affecting other resources on the network.
- When stacking switches, ensure that ports are mirrored for redundancy.
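Before relying on a raised networkmtu, it can help to confirm that the path between two hosts on the private network actually carries 9000-byte frames end to end. The sketch below is Linux-specific (it calls the iputils ping with the do-not-fragment option); the target address is an example, and an 8972-byte ICMP payload plus the 20-byte IP header and 8-byte ICMP header fills a 9000-byte MTU exactly.

    # Sketch: verify a 9000-byte MTU path on Linux with a do-not-fragment ping.
    import subprocess

    TARGET = "172.16.3.10"          # example Storage Node address
    PAYLOAD = 9000 - 20 - 8         # ICMP payload that fills a 9000-byte MTU

    result = subprocess.run(
        ["ping", "-M", "do", "-c", "1", "-s", str(PAYLOAD), TARGET],
        capture_output=True, text=True,
    )

    if result.returncode == 0:
        print(f"Jumbo-sized packet reached {TARGET} without fragmentation.")
    else:
        print("Jumbo-sized packet did not get through; check switch and NIC MTU.")
        print(result.stdout or result.stderr)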