VMware Virtual SAN 6.2 Network Design Guide
TECHNICAL WHITE PAPER / APRIL 2016
Contents
Intended Audience
Overview
Virtual SAN Network
Physical network infrastructure
    Data center network
    Oversubscription considerations
    Host network adapter
Virtual network infrastructure
    VMkernel network
    Virtual Switch
    NIC teaming
    Multicast
    Network I/O Control
    Jumbo Frames
    Switch Discovery Protocol
    Network availability
Conclusion
About the Author
Appendix: Multicast configuration examples
References
Intended Audience
This document is targeted toward virtualization, network, and storage architects interested in deploying VMware Virtual SAN solutions.

Overview
Virtual SAN is a hypervisor-converged, software-defined storage solution for the software-defined data center. It is the first policy-driven storage product designed for VMware vSphere environments that simplifies and streamlines storage provisioning and management. Virtual SAN is a distributed, shared storage solution that enables the rapid provisioning of storage within VMware vCenter Server as part of virtual machine creation and deployment operations.

Virtual SAN uses the concept of disk groups to pool together locally attached flash devices and magnetic disks as management constructs. Disk groups are composed of at least one cache device and one or more magnetic or flash capacity devices. In hybrid architectures, flash devices serve as a read cache and write buffer in front of the magnetic disks to optimize virtual machine and application performance. In all-flash architectures, the endurance of the cache device is leveraged to allow lower-cost capacity devices. The Virtual SAN datastore aggregates the disk groups across all hosts in the Virtual SAN cluster to form a single shared datastore for all hosts in the cluster.

Virtual SAN requires a correctly configured network for virtual machine I/O as well as for communication among cluster nodes. Because the majority of virtual machine I/O travels the network due to the distributed storage architecture, a high-performing, highly available network configuration is critical to a successful Virtual SAN deployment.

This paper gives a technology overview of Virtual SAN network requirements and provides Virtual SAN network design and configuration best practices for deploying a highly available and scalable Virtual SAN solution.

Virtual SAN Network
The hosts in a Virtual SAN cluster must be part of a Virtual SAN network and must be on the same subnet, regardless of whether the hosts contribute storage
or not. Virtual SAN requires a dedicated VMkernel port type and uses a proprietary transport protocol for Virtual SAN traffic between the hosts. The Virtual SAN network is an integral part of the overall vSphere network configuration and therefore cannot work in isolation from other vSphere network services.

Virtual SAN utilizes either the VMware vSphere Standard Switch (VSS) or the VMware vSphere Distributed Switch (VDS) to construct a dedicated storage network. However, Virtual SAN and other vSphere workloads commonly share the underlying virtual and physical network infrastructure. Therefore, the Virtual SAN network must be carefully designed following general vSphere networking best practices in addition to its own. The following sections review general guidelines that should be followed when designing the Virtual SAN network. These recommendations do not conflict with general vSphere network design best practices.

Physical network infrastructure

Data center network
The traditional access-aggregation-core, three-tier network model was built to serve north-south traffic in and out of a data center. While the model offers great redundancy and resiliency, it limits overall bandwidth by as much as 50% because critical network links are blocked by the Spanning Tree Protocol (STP) to prevent network loops. As virtualization and cloud computing evolve, more data centers have adopted the leaf-spine topology for data center fabric simplicity, scalability, bandwidth, fault tolerance, and quality of service (QoS). Virtual SAN is compatible with both topologies, regardless of how the core switch layer is constructed.

Oversubscription considerations

East-west and throughput concerns
VMware Virtual SAN requires low latency and ample throughput between the hosts, as reads may come from any host in the cluster, and writes must be acknowledged by two hosts.
For simple configurations utilizing modern, wire-speed, top-of-rack switches, this is a relatively simple consideration, as all ports can communicate with all other ports at wire speed. As clusters are stretched across
data centers (perhaps using the Virtual SAN fault domains feature), the potential for oversubscription becomes a concern. Typically, the largest demand for throughput occurs during a host rebuild or host evacuation, as potentially all hosts may be sending and receiving traffic at wire speed to reduce the duration of the operation. The larger the capacity consumed on each host, the more important the oversubscription ratio becomes. A host with only 1Gbps of bandwidth and 12TB of capacity would take over 24 hours to refill with data.

Leaf-spine
In a traditional leaf-spine architecture, due to the full-mesh topology and port density constraints, leaf switches are normally oversubscribed for bandwidth. For example, a fully utilized 10GbE uplink used by the Virtual SAN network may in reality achieve only 2.5Gbps of throughput on each node when the leaf switches are oversubscribed at a 4:1 ratio and Virtual SAN traffic needs to cross the spine, as illustrated in Figure 1. The impact of network topology on available bandwidth should be considered when designing your Virtual SAN cluster. The leaf switches are fully meshed to the spine switches with links that can be either switched or routed; these are referred to as Layer 2 and Layer 3 leaf-spine architectures, respectively. Virtual SAN over Layer 3 networks is currently supported.

VMware recommends: Consider using Layer 2 multicast for simplicity of configuration and operations.
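The effect of oversubscription on per-host bandwidth can be estimated with simple division. The sketch below is illustrative only (real fabrics rarely degrade this uniformly), reproducing the 4:1 leaf-spine example above:

```python
def effective_bandwidth_gbps(nic_gbps, oversubscription_ratio):
    """Worst-case per-host throughput when every host pushes traffic
    across oversubscribed leaf-to-spine uplinks at the same time."""
    return nic_gbps / oversubscription_ratio

# 10GbE host NICs behind leaf switches oversubscribed 4:1 toward the spine
print(effective_bandwidth_gbps(10, 4))  # -> 2.5 (Gbps per host)
```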
Figure 1. Bandwidth oversubscription for Virtual SAN network in leaf-spine architecture

Here is an example of how overcommitment can impact rebuild times. Let us assume the above design is used with 3 fault domains, and data is being mirrored between cabinets. In this example, each host has 10TB of raw capacity, with 6TB of it used for virtual machines protected by FTT=1. We will also assume that three quarters (or 30Gbps) of the available bandwidth is available for the rebuild. Assuming no disk contention bottlenecks, it would take approximately 26 minutes to rebuild over the oversubscribed link. If the capacity needing to be rebuilt increased to 12TB of data, and the bandwidth was reduced to only 10Gbps, then the rebuild would take at a minimum 156 minutes. Any time capacity increases or bandwidth between hosts decreases, rebuild times become longer.

VMware recommends: Minimize oversubscription to reduce opportunities for congestion during host rebuilds or other high-throughput operations.
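The rebuild arithmetic above can be sketched as a lower-bound estimate: data to re-mirror divided by available inter-cabinet bandwidth. The results line up with the paper's figures of roughly 26 and 156 minutes (the small difference comes from rounding):

```python
def rebuild_minutes(capacity_tb, bandwidth_gbps):
    """Lower bound on rebuild time: data to re-mirror divided by the
    bandwidth available between cabinets (decimal units; assumes the
    network, not the disks, is the bottleneck)."""
    gigabits = capacity_tb * 8 * 1000
    return gigabits / bandwidth_gbps / 60

print(round(rebuild_minutes(6, 30)))   # ~27 minutes with 30Gbps available
print(round(rebuild_minutes(12, 10)))  # ~160 minutes with only 10Gbps
```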
ECMP
A number of vendors have implemented Ethernet fabrics that eliminate the need for Spanning Tree to prevent loops and instead employ Layer 2 routing mechanisms to use the shortest paths, as well as supplemental paths for added throughput. SPB (Shortest Path Bridging) and TRILL (Transparent Interconnection of Lots of Links) are commonly used, though often with proprietary extensions. Virtual SAN is compatible with these topologies, but be sure to design for adequate east-west bandwidth within each Virtual SAN cluster.

Cisco FEX/Nexus 2000
It should be noted that fabric-extending devices such as the Cisco Nexus 2000 product line have unique considerations. These devices lack the ability to switch traffic directly from port to port on the same device; all traffic must travel through the uplink to the Nexus 5000 or 7000 series device and back down. While this increases port-to-port latency, the larger concern is that high-throughput operations (such as a host rebuild) can put pressure on the oversubscribed uplinks back to the parent switch.

Non-stacked top-of-rack switches and Cisco Fabric Interconnects
VMware recommends: Deploy all hosts within a fault domain to a low-latency, wire-speed switch or switch stack. When multiple switches are used, pay attention to the throughput of the links between switches. Deployments with limited or heavily oversubscribed inter-switch throughput should be carefully considered.

Flow Control
Pause frames are related to Ethernet flow control and are used to manage the pacing of data transmission on a network segment. Sometimes a sending node (ESXi/ESX host, switch, etc.) may transmit data faster than another node can accept it. In this case, the overwhelmed network node can send
pause frames back to the sender, pausing the transmission of traffic for a brief period of time. Virtual SAN manages congestion by introducing artificial latency to prevent cache/buffer exhaustion. Because Virtual SAN has built-in congestion management, disabling flow control on VMkernel interfaces tagged for Virtual SAN traffic is recommended. Note that flow control is enabled by default on all physical uplinks. For further information on flow control, see the related VMware Knowledge Base article.

VMware recommends: Disable flow control for Virtual SAN traffic.

Security considerations
VMware Virtual SAN traffic, like other IP storage traffic, is not encrypted and should be deployed on isolated networks. VLANs can be leveraged to securely separate Virtual SAN traffic from virtual machine and other networks. Security can also be added at a higher layer by encrypting data in-guest in order to meet security and compliance requirements.

Host network adapter
On each Virtual SAN cluster node, the following practices should be applied:
- At least one physical NIC must be used for the Virtual SAN network. One or more additional physical NICs are recommended to provide failover capability.
- The physical NIC(s) can be shared with other vSphere networks such as the virtual machine network and vMotion network. Logical Layer 2 separation of Virtual SAN VMkernel traffic (VLANs) is recommended when physical NIC(s) carry multiple traffic types. QoS can be provided for traffic types via Network I/O Control (NIOC).
- A 10GbE or faster NIC is strongly recommended for Virtual SAN and is a requirement for all-flash Virtual SAN. If a 1GbE NIC is used for hybrid configurations, VMware recommends dedicating it to Virtual SAN. NICs faster than 10Gbps, such as 25/40/100Gbps, are supported as long as your edition of vSphere supports them.
Virtual network infrastructure

VMkernel network
A new VMkernel traffic type called Virtual SAN traffic is introduced in vSphere for Virtual SAN. Each cluster node must have a VMkernel port configured for this traffic type in order to participate in a Virtual SAN cluster. This is true even for nodes that do not contribute storage to Virtual SAN. For each cluster, a VMkernel port group for Virtual SAN should be created on the VSS or VDS, and the same port group network label should be used to ensure labels are consistent across all hosts. Unlike multi-NIC vMotion, Virtual SAN does not support multiple VMkernel adapters on the same subnet.

Virtual Switch
VMware Virtual SAN supports both VSS and VDS virtual switches. It should be noted that VDS licensing is included with VMware Virtual SAN, so licensing should not be a consideration when choosing a virtual switch type. Because VDS is required for dynamic LACP (Link Aggregation Control Protocol), LBT (Load Based Teaming), LLDP (Link Layer Discovery Protocol), bidirectional CDP (Cisco Discovery Protocol), and Network I/O Control (NIOC), VDS is preferred for its superior performance, operational visibility, and management capabilities.

VMware recommends: Deploy VDS for use with VMware Virtual SAN.

vCenter and VDS considerations
VMware fully supports deploying a vCenter Server that manages a cluster on top of that storage cluster. Starting with vSphere 5.x, static binding became the default port binding type for VDS port groups, and port assignments persist for a virtual machine through a reboot. In the event vCenter is unable to bind to the VDS, a pre-created ephemeral port group or a VSS can be leveraged to restore access to the vCenter Server.
NIC teaming
The Virtual SAN network can use teaming and failover policies to determine how traffic is distributed between physical adapters and how to reroute traffic in the event of adapter failure. NIC teaming is used mainly for high availability, not load balancing, when the team is dedicated to Virtual SAN. However, additional vSphere traffic types sharing the same team can still leverage the aggregated bandwidth by distributing different types of traffic across different adapters within the team. Virtual SAN supports all NIC teaming options supported by VSS and VDS.

Load Based Teaming
Route based on physical NIC load, also known as Load Based Teaming (LBT), allows vSphere to balance load across multiple NICs without custom switch configuration. It begins balancing similarly to Route based on originating virtual port ID, but dynamically reassesses physical-to-virtual NIC bindings every 30 seconds based on congestion thresholds. To avoid impact when bindings change, settings such as Cisco's PortFast or HP's admin-edge-port should be configured on the physical switch ports facing the ESXi hosts. With this setting, network convergence on these switch ports happens quickly after a failure because the port enters the Spanning Tree forwarding state immediately, bypassing the listening and learning states. Additional information on the different teaming policies can be found in the vSphere networking documentation.

IP Hash Policy
One failover path option is the IP hash based policy. Under this policy, Virtual SAN, either alone or together with other vSphere workloads, is capable of balancing load between adapters within a team, although there is no guarantee of performance improvement for all configurations. While Virtual SAN does initiate multiple connections, there is no deterministic balancing of traffic. This policy requires the physical switch ports to be configured for a port link aggregation technology or port-channel architecture such as Link Aggregation Control Protocol (LACP) or EtherChannel.
Only static mode EtherChannel is supported with the vSphere Standard Switch. LACP is supported only with the vSphere Distributed Switch.

VMware recommends: Use Load Based Teaming for load balancing, and ensure appropriate spanning tree port configurations are taken into account.
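The non-deterministic balancing under the IP hash policy can be seen in a simplified model of the commonly documented selection logic: XOR the source and destination addresses and take the modulo of the uplink count. ESXi's exact hash is an implementation detail, so treat this as illustrative only:

```python
import ipaddress

def ip_hash_uplink(src_ip, dst_ip, num_uplinks):
    """Simplified IP-hash uplink selection: XOR the last octets of the
    source and destination IPs, then take modulo of the uplink count.
    (Illustrative; ESXi's actual hash is an implementation detail.)"""
    src = int(ipaddress.ip_address(src_ip)) & 0xFF
    dst = int(ipaddress.ip_address(dst_ip)) & 0xFF
    return (src ^ dst) % num_uplinks

# Different peers can still hash to the same uplink in a 2-NIC team:
print(ip_hash_uplink("192.168.1.10", "192.168.1.20", 2))  # -> 0
print(ip_hash_uplink("192.168.1.10", "192.168.1.22", 2))  # -> 0 again
```

Because the selection depends only on the address pair, a handful of busy peers can pile onto one uplink while others sit idle, which is why the policy offers no performance guarantee.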
Multicast
IP multicast sends source packets to multiple receivers as a group transmission. Packets are replicated in the network only at points of path divergence, normally switches or routers, resulting in the most efficient delivery of data to a number of destinations with minimum network bandwidth consumption. For examples of multicast configuration, please see the Layer 2/Layer 3 network topologies white paper.

Virtual SAN uses multicast to deliver metadata traffic among cluster nodes for efficiency and bandwidth conservation. Multicast is required for the VMkernel ports utilized by Virtual SAN. While Layer 3 is supported, Layer 2 is recommended to reduce complexity. All VMkernel ports on the Virtual SAN network subscribe to a multicast group using the Internet Group Management Protocol (IGMP). IGMP snooping, configured with an IGMP snooping querier, can be used to limit the physical switch ports participating in the multicast group to only the Virtual SAN VMkernel port uplinks. The need to configure an IGMP snooping querier to support IGMP snooping varies by switch vendor; consult your specific switch vendor/model best practices for IGMP snooping configuration. If deploying a Virtual SAN cluster across multiple subnets, be sure to review best practices and limitations in scaling Protocol Independent Multicast (PIM) in dense or sparse mode.

A default multicast address is assigned to each Virtual SAN cluster at the time of creation. When multiple Virtual SAN clusters reside on the same Layer 2 network, the default multicast address should be changed within the additional Virtual SAN clusters to prevent multiple clusters from receiving all multicast streams. Similarly, multicast address ranges must be carefully planned in environments where other network services, such as VXLAN, also utilize multicast. A VMware Knowledge Base article can be consulted for the detailed procedure for changing the default Virtual SAN multicast address.
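Each Virtual SAN VMkernel port joins the cluster's multicast group via an IGMP membership report, just like any other multicast consumer. The Python sketch below shows the generic socket-level equivalent of such a join; the group address and port are the commonly cited Virtual SAN agent-group defaults and should be treated as assumptions to verify against your own cluster:

```python
import socket
import struct

# Commonly cited Virtual SAN agent group defaults (assumption -- verify
# against your own cluster before relying on these values):
MCAST_GROUP = "224.2.3.4"
MCAST_PORT = 23451

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# ip_mreq: 4-byte multicast group + 4-byte local interface (0.0.0.0 = any).
# Setting IP_ADD_MEMBERSHIP triggers the IGMP join that snooping switches
# observe to decide which ports receive the group's traffic.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP),
                   socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print("joined", MCAST_GROUP, "on UDP port", MCAST_PORT)
except OSError:
    # Hosts with no multicast-capable route will refuse the join.
    print("no multicast route available")
finally:
    sock.close()
```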
More simply, isolating each cluster's traffic on its own VLAN removes the possibility of conflict.

VMware recommends: Isolate each Virtual SAN cluster's traffic on its own VLAN when using multiple clusters.

Network I/O Control
vSphere Network I/O Control (NIOC) can be used to set quality of service (QoS) for Virtual SAN traffic over the same NIC uplink in a VDS shared by
other vSphere traffic types, including iSCSI traffic, vMotion traffic, management traffic, vSphere Replication (VR) traffic, NFS traffic, Fault Tolerance (FT) traffic, and virtual machine traffic. General NIOC best practices apply with Virtual SAN traffic in the mix:
- For bandwidth allocation, use shares instead of limits, as shares offer greater flexibility for redistributing unused capacity.
- Always assign a reasonably high relative share to the Fault Tolerance resource pool, because FT is a very latency-sensitive traffic type.
- Use NIOC together with NIC teaming to maximize network capacity utilization.
- Leverage the VDS Port Group and Traffic Shaping Policy features for additional bandwidth control over different resource pools.

Specifically for Virtual SAN, we make the following recommendations:
- Do not set a limit on Virtual SAN traffic; by default, it is unlimited.
- Set a relative share for the Virtual SAN resource pool based on application storage performance requirements, while also holistically taking into account other workloads, such as the bursty vMotion traffic required for business mobility and availability.
- Avoid reservations, as reserved bandwidth left unused is shared only with other system traffic types (vMotion, storage, etc.) and not with virtual machine networking.

Jumbo Frames
Virtual SAN supports jumbo frames but does not require them. VMware testing finds that using jumbo frames can reduce CPU utilization and improve throughput; however, both gains are minimal because vSphere already uses TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) to deliver similar benefits. In data centers where jumbo frames are already enabled in the network infrastructure, jumbo frames are recommended for Virtual SAN deployments. If jumbo frames are not currently in use, Virtual SAN alone should not be the justification for deploying them.
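The marginal nature of the jumbo-frame gain follows from simple frame-overhead arithmetic. A rough sketch, assuming 40 bytes of IP+TCP headers inside each frame and 38 bytes of Ethernet framing, preamble, and inter-frame gap around it:

```python
def wire_efficiency(mtu):
    """Fraction of on-wire bytes carrying TCP payload, assuming 40 bytes
    of IP+TCP headers per frame and 38 bytes of Ethernet header, FCS,
    preamble, and inter-frame gap on the wire."""
    payload = mtu - 40
    on_wire = mtu + 38
    return payload / on_wire

print(f"{wire_efficiency(1500):.3f}")  # 0.949 with standard frames
print(f"{wire_efficiency(9000):.3f}")  # 0.991 with jumbo frames
```

A best-case gain of roughly 4% in wire efficiency, before TSO/LRO are even considered, is why Virtual SAN alone rarely justifies an MTU change.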
VMware recommends: Use the existing MTU/frame size you would otherwise be using in your environment.

Switch Discovery Protocol
Switch discovery protocols allow vSphere administrators to determine which switch port is connected to a given VSS or VDS. vSphere supports the Cisco Discovery Protocol (CDP) and the Link Layer Discovery Protocol (LLDP). CDP is available for vSphere Standard Switches and vSphere Distributed Switches connected to Cisco physical switches. When CDP or LLDP is enabled for a particular vSphere Distributed Switch or vSphere Standard Switch, you can view properties of the peer physical switch, such as device ID, software version, and timeout, from the vSphere Client.

VMware recommends: Enable LLDP or CDP in both send and receive mode.

Network availability
For high availability, the Virtual SAN network should have redundancy in both physical and virtual network paths and components to avoid single points of failure. The architecture should configure all port groups or distributed virtual port groups with at least two uplink paths using different NICs configured with NIC teaming, set a failover policy specifying the appropriate active-active or active-standby mode, and connect each NIC to a different physical switch for an additional level of redundancy.

VMware recommends: Use redundant uplinks for Virtual SAN and all other traffic.

Conclusion
Virtual SAN network design should be approached in a holistic fashion, taking into account the other traffic types utilized in the vSphere cluster in addition to the Virtual SAN network. Other factors to consider are the physical network topology and the oversubscription posture of your physical switch infrastructure.
Virtual SAN requires a 1GbE network at minimum for hybrid clusters and 10GbE for all-flash clusters. As a best practice, VMware strongly recommends a 10GbE network for Virtual SAN to avoid the possibility of network congestion leading to degraded performance. A 1GbE network can easily be saturated by Virtual SAN traffic, and teaming multiple NICs provides availability benefits in only limited cases. If a 1GbE network is used, VMware recommends it be used for smaller clusters and be dedicated to Virtual SAN traffic.

To implement a highly available network infrastructure for Virtual SAN, redundant hardware components and network paths are recommended. Switches can be configured either in uplink or stack mode, depending on switch capability and your physical switch configuration.

Virtual SAN supports both vSphere Standard Switches and vSphere Distributed Switches. However, VMware recommends the use of vSphere Distributed Switches in order to realize the network QoS benefits offered by vSphere NIOC. When various vSphere network traffic types must share the same NICs as Virtual SAN, separate them onto different VLANs and use shares as a quality of service mechanism to guarantee the level of performance expected for Virtual SAN in possible contention scenarios.

About the Author
John Nicholson is a Senior Technical Marketing Manager in the Storage and Availability Business Unit. He focuses on delivering technical guidance around VMware Virtual SAN solutions. John previously worked in architecting and implementing enterprise storage and VMware solutions.
Appendix

Multicast configuration examples
The following multicast configuration examples should be used only as a reference. Consult with your switch vendor, as configuration commands may change between platforms and versions.

Cisco IOS (default is IGMP snooping on):
    switch# configure terminal
    switch(config)# vlan 500
    switch(config-vlan)# no ip igmp snooping
    switch(config-vlan)# do write memory

Brocade ICX (default is IGMP snooping off):
    Switch# configure
    Switch(config)# vlan 500
    Switch(config-vlan-500)# multicast disable igmp snoop
    Switch(config-vlan-500)# do write memory

Brocade VDX: see the Brocade VDX guide for Virtual SAN configuration.

HP ProCurve (default is IGMP snooping on):
    switch# configure terminal
    switch(config)# vlan 500 ip igmp
    switch(config)# no vlan 500 ip igmp querier
    switch(config)# write memory
References
1. Virtual SAN Product Page
2. VMware Virtual SAN Hardware Guidance
3. VMware NSX Network Virtualization Design Guide
4. VMware Network Virtualization Design Guide
5. Understanding IP Hash Load Balancing, VMware Knowledge Base article
6. Sample configuration of EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/HP switches, VMware Knowledge Base article
7. Changing the multicast address used for a VMware Virtual SAN Cluster, VMware Knowledge Base article
8. Understanding TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) in a VMware environment, VMware Knowledge Base article
9. IP Multicast Technology Overview
10. Essential Virtual SAN: Administrator's Guide to VMware Virtual SAN, by Cormac Hogan and Duncan Epping
11. VMware Network I/O Control: Architecture, Performance and Best Practices
BEST PRACTICES GUIDE: Nimble Storage Best Practices for Scale-Out Contents Introduction... 3 Terminology... 3 Planning Scale-Out Clusters and Pools... 3 Cluster Arrays Based on Management Boundaries...
DEDICATED NETWORKS FOR IP STORAGE ABSTRACT This white paper examines EMC and VMware best practices for deploying dedicated IP storage networks in medium to large-scale data centers. In addition, it explores
Simplify Your Data Center Network to Improve Performance and Decrease Costs Summary Traditional data center networks are struggling to keep up with new computing requirements. Network architects should
ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
VMware vsphere 5.0 Evaluation Guide Advanced Networking Features TECHNICAL WHITE PAPER Table of Contents About This Guide.... 4 System Requirements... 4 Hardware Requirements.... 4 Servers.... 4 Storage....
Windows Server 2012 R2 Hyper-V: Designing for the Real World Steve Evans @scevans www.loudsteve.com Nick Hawkins @nhawkins www.nickahawkins.com Is Hyper-V for real? Microsoft Fan Boys Reality VMware Hyper-V
Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest
Layer 3 Network + Dedicated Internet Connectivity Client: One of the IT Departments in a Northern State Customer's requirement: The customer wanted to establish CAN connectivity (Campus Area Network) for
DATA CENTER Best Practices for High Availability Deployment for the Brocade ADX Switch CONTENTS Contents... 2 Executive Summary... 3 Introduction... 3 Brocade ADX HA Overview... 3 Hot-Standby HA... 4 Active-Standby
Ahmad Zamer, Brocade SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations
PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,
Cloud Optimize Your IT Windows Server 2012 The information contained in this presentation relates to a pre-release product which may be substantially modified before it is commercially released. This pre-release
The Road to Cloud Computing How to Evolve Your Data Center LAN to Support Virtualization and Cloud Introduction Cloud computing is one of the most important topics in IT. The reason for that importance
OPTIMIZING SERVER VIRTUALIZATION HP MULTI-PORT SERVER ADAPTERS BASED ON INTEL ETHERNET TECHNOLOGY As enterprise-class server infrastructures adopt virtualization to improve total cost of ownership (TCO)
How to Create a Virtual Switch in VMware ESXi I am not responsible for your actions or their outcomes, in any way, while reading and/or implementing this tutorial. I will not provide support for the information
Switching Fabric Designs for Data Centers David Klebanov Technical Solutions Architect, Cisco Systems firstname.lastname@example.org @DavidKlebanov 1 Agenda Data Center Fabric Design Principles and Industry Trends
Brocade VCS Fabric Technology and NAS with NFS Validation Test NetApp/VMware vsphere 5.0 Red Hat Enterprise Linux This material outlines sample configurations and associated test results of Brocade VCS
ESX 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
To register or for more information call our office (208) 898-9036 or email email@example.com Vmware VSphere 6.0 Private Cloud Administration Class Duration 5 Days Introduction This fast paced,
Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 Network Virtualization Overview... 1 Network Virtualization Key Requirements to be validated...
1 Extending Virtualized Oracle RAC Across Data Centers: True Active-Active Availability Over Distance Sam Lucido, EMC Kannan Mani, VMware Inc., 2 Agenda Customer Challenges What is Oracle Real Application