VMware and Brocade Network Virtualization Reference Whitepaper


Table of Contents

EXECUTIVE SUMMARY
VMWARE NSX WITH BROCADE VCS: SEAMLESS TRANSITION TO SDDC
VMWARE'S NSX NETWORK VIRTUALIZATION PLATFORM OVERVIEW
COMPONENTS OF THE VMWARE NSX
DATA PLANE
CONTROL PLANE
MANAGEMENT PLANE
CONSUMPTION PLATFORM
FUNCTIONAL SERVICES OF NSX FOR VSPHERE
WHY DEPLOY BROCADE NETWORK FABRIC WITH VMWARE NSX
DESIGN CONSIDERATIONS FOR VMWARE NSX AND BROCADE NETWORK FABRIC
DESIGN CONSIDERATIONS FOR BROCADE NETWORK FABRIC
BROCADE VCS FABRIC AND VDX SWITCHES
SCALABLE BROCADE VCS FABRICS
FLEXIBLE BROCADE VCS FABRIC BUILDING BLOCKS FOR EASY MIGRATION
BROCADE VDX SWITCHES DISCUSSED IN THIS GUIDE
MIXED SWITCH FABRIC DESIGN
MULTI-FABRIC DESIGNS
DEPLOYING THE BROCADE VDX 8770 AND VCS FABRICS AT THE CLASSIC AGGREGATION LAYER
VCS FABRIC BUILDING BLOCKS
VMWARE NSX NETWORK DESIGN CONSIDERATIONS
DESIGNING FOR SCALE AND FUTURE GROWTH
COMPUTE RACKS
EDGE RACKS
INFRASTRUCTURE RACKS
LOGICAL SWITCHING
TRANSPORT ZONE
LOGICAL SWITCH REPLICATION MODES
LOGICAL SWITCH ADDRESSING WITH NETWORK ADDRESS TRANSLATION
LOGICAL ROUTING
CENTRALIZED ROUTING
LOGICAL SWITCHING AND ROUTING DEPLOYMENTS

LOGICAL FIREWALLING, ISOLATION AND MICRO-SEGMENTATION
ADVANCED SECURITY SERVICE INSERTION, CHAINING AND STEERING
LOGICAL LOAD BALANCING
CONCLUSION

Executive Summary

This document is targeted at networking and virtualization architects interested in deploying VMware network virtualization in a vSphere hypervisor environment, based on the joint solution of VMware NSX and Brocade Virtual Cluster Switching (VCS) technology.

VMware's Software Defined Data Center (SDDC) vision leverages core data center virtualization technologies to transform data center economics and business agility through automation and non-disruptive deployment that embraces and extends existing compute, network, and storage infrastructure investments. VMware NSX is the component providing the network virtualization pillar of this vision. With NSX, customers can build an agile overlay infrastructure for public and private cloud environments, leveraging Brocade's robust and resilient Virtual Cluster Switching (VCS) for the physical underlay network. Together, Brocade and VMware help customers realize the promise of the SDDC vision, combining the power, intelligence, and analytics of their networks in a flexible, end-to-end solution.

VMware NSX with Brocade VCS: Seamless Transition to SDDC

New technologies and applications are driving constant change in organizations both large and small, and nowhere are the effects felt more keenly than in the network. Large-scale server virtualization is generating unpredictable bandwidth requirements driven by virtual machine (VM) mobility. The move toward cloud computing demands a high-performance network interconnect that can be driven by servers and VMs numbering in the tens of thousands. Modern virtualized, multi-tiered applications are generating massive levels of east-west inter-server traffic.
Unfortunately, traditional network topologies and solutions were not designed to support these highly virtualized environments with mobile VMs and demanding modern workloads. VMware NSX has emerged as an attractive answer to these challenges, bringing dramatic improvements over the inefficiencies, rigidity, fragility, and management challenges of classic hierarchical Ethernet networks. For optimal performance, NSX should run on a resilient physical network or fabric underlay that provides robust connectivity. Brocade's VCS Fabric technology is ideal for this scenario, enabling organizations to migrate to a highly available and automated fabric at their own pace, without disrupting their existing data center network architecture.

Here are some typical instances when customers may choose to transition to a Brocade network fabric as part of their evolution to an NSX SDDC architecture:

Transitioning from Gigabit Ethernet (GbE) to 10 GbE - Many organizations are consolidating multiple workloads onto fewer, more powerful servers, creating demand for greater network bandwidth.

Scaling the network - The elasticity, manageability, flexibility, and scalability of Ethernet fabrics make them ideal for new virtualization and cloud computing environments.

Adding storage - Storage virtualization, and organizations deploying Ethernet Storage Area Networks (SANs), require a true lossless fabric.

Adopting network virtualization - Network virtualization introduces additional parameters to set up and manage, and typically requires new skill sets as well. Ethernet fabrics provide a simpler, highly resilient, low-latency foundation on which to virtualize the network and reach the SDDC.

This combined solution from Brocade and VMware NSX delivers the IT agility demanded by today's constantly evolving workloads through automated, zero-touch VM discovery, configuration, and mobility.

VMware's NSX Network Virtualization Platform Overview

IT organizations have gained significant benefits as a direct result of server virtualization. Server consolidation, reduced physical complexity, increased operational efficiency, and the ability to dynamically re-purpose underlying resources to quickly and optimally meet the needs of increasingly dynamic business applications are just a handful of the gains that have already been realized.

Now, VMware's Software Defined Data Center (SDDC) architecture is extending virtualization technologies across the entire physical data center infrastructure. VMware NSX, the network virtualization platform, is a key product in the SDDC architecture. With NSX, virtualization now delivers for networking the same value and advantages it has provided for compute and storage. In much the same way that server virtualization programmatically creates, snapshots, deletes, and restores software-based virtual machines (VMs), NSX network virtualization programmatically creates, snapshots, deletes, and restores software-based virtual networks. The result is a completely transformative approach to networking that not only enables data center managers to achieve orders-of-magnitude better agility and economics, but also allows for a vastly simplified operational model for the underlying physical network. With the ability to be deployed on any IP network, including both existing traditional networking models and next-generation fabric architectures from any vendor, NSX is a completely non-disruptive solution.

Figure 1 Server and Network Virtualization Analogy

Figure 1 draws an analogy between compute and network virtualization.
With server virtualization, a software abstraction layer (the server hypervisor) reproduces the familiar attributes of an x86 physical server (e.g., CPU, RAM, disk, NIC) in software, allowing them to be programmatically assembled in any arbitrary combination to produce a unique virtual machine (VM) in a matter of seconds. With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2 to Layer 7 networking services (e.g., switching, routing, access control, firewalling, QoS, and load balancing) in software. As a result, these services can be programmatically assembled in any arbitrary combination to produce unique, isolated virtual networks in a matter of seconds.

Not surprisingly, similar benefits are derived. For example, just as VMs are independent of the underlying x86 platform and allow IT to treat physical hosts as a pool of compute capacity, virtual networks are independent of the underlying IP network hardware and allow IT to treat the physical network as a pool of transport capacity that can be consumed and repurposed on demand. Unlike legacy architectures, virtual networks can be provisioned, changed, stored, deleted, and restored programmatically, without reconfiguring the underlying physical hardware or topology. By matching the capabilities and benefits of familiar server and storage virtualization solutions, this transformative approach to networking unleashes the full potential of the software defined data center.

With VMware NSX, you already have the network you need to deploy a next-generation software defined data center. This paper highlights the design factors you should consider to fully leverage your existing network investment and optimize it with VMware NSX.

Components of the VMware NSX

VMware NSX is a distributed system. It consists of the components shown in Figure 2 below:

Figure 2 NSX Components

Data Plane

The NSX data plane consists of the NSX vSwitch. The vSwitch in NSX for vSphere is based on the vSphere Distributed Switch (VDS), with additional components that enable rich services. The add-on NSX components include kernel modules (VIBs) that run within the hypervisor kernel, providing services such as distributed routing and the distributed firewall, and enabling VXLAN bridging capabilities.

The NSX vSwitch abstracts the physical network and provides access-level switching in the hypervisor. It is central to network virtualization because it enables logical networks that are independent of physical constructs such as VLANs. Some of the benefits of the vSwitch are:

Support for overlay networking with protocols such as VXLAN, along with centralized network configuration. Overlay networking enables the following capabilities:
o Creation of a flexible logical Layer 2 (L2) overlay over existing IP networks, on existing physical infrastructure, without the need to re-architect any of the data center networks
o Provision of communication (east-west and north-south) while maintaining isolation between tenants
o Application workloads and virtual machines that are agnostic of the overlay network and operate as if they were connected to a physical L2 network

The NSX vSwitch facilitates massive scalability of hypervisors and their attached workloads. Multiple features, such as Port Mirroring, NetFlow/IPFIX, Configuration Backup and Restore, Network Health Check, QoS, and LACP, provide a comprehensive toolkit for traffic management, monitoring, and troubleshooting within a virtual network.

Additionally, the data plane also includes gateway devices that provide L2 bridging from the logical networking space (VXLAN) to the physical network (VLAN). The gateway device is typically an NSX Edge virtual appliance.
NSX Edge offers L2, L3, perimeter firewall, load balancing, and other services such as SSL VPN and DHCP.
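The VXLAN encapsulation used by the NSX data plane is a standard MAC-in-UDP format (RFC 7348): an 8-byte VXLAN header carrying a 24-bit segment ID, the VXLAN Network Identifier (VNI), is prepended to the original Ethernet frame. A minimal sketch of building and parsing that header (field layout per the RFC; function names are illustrative):

```python
import struct

VXLAN_FLAG_VALID_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Layout: flags (1 byte) + reserved (3 bytes) + VNI (3 bytes) + reserved (1 byte).
    # Packing the VNI shifted left by 8 places it in the upper 3 bytes of the last word.
    return struct.pack("!B3xI", VXLAN_FLAG_VALID_VNI, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the VNI from an 8-byte VXLAN header."""
    flags, vni_field = struct.unpack("!B3xI", header[:8])
    if not flags & VXLAN_FLAG_VALID_VNI:
        raise ValueError("VNI-valid flag not set")
    return vni_field >> 8
```

The 24-bit VNI is what lets a single IP underlay carry roughly 16 million isolated logical segments, compared with the 4,094 segments available with VLAN IDs.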

Control Plane

The NSX control plane runs in the NSX Controller and does not have any data plane traffic passing through it. The NSX Controller nodes are deployed in a cluster with an odd number of members in order to enable high availability and scale. A failure of a controller node does not impact any data plane traffic.

Management Plane

The NSX management plane is built upon the NSX Manager. The NSX Manager provides the single point of configuration and is the target for REST API entry points in a vSphere NSX environment.

Consumption Platform

The consumption of NSX can be driven directly via the NSX Manager UI, which is available through the vSphere Web UI itself. Typically, end users tie network virtualization into their cloud management platform (CMP) for deploying applications. NSX provides rich integration into virtually any CMP via the REST API. Out-of-the-box integration is also available through VMware vRealize Automation (vRA, previously known as vCloud Automation Center).

Functional Services of NSX for vSphere

In this design guide we discuss how the components described above provide the following functional services:

Logical Layer 2 - Enables extension of an L2 segment / IP subnet anywhere in the fabric, irrespective of the physical network design.

Distributed L3 Routing - Routing between IP subnets can be done in the logical space without traffic going out to a physical router. This routing is performed in the hypervisor kernel with minimal CPU and memory overhead, providing an optimal data path for routing traffic within the virtual infrastructure. Similarly, the NSX Edge provides full dynamic route peering, using OSPF and BGP, with the physical network to enable seamless integration.

Distributed Firewall - Security enforcement is done at the kernel and vNIC level itself. This enables firewall rule enforcement in a highly scalable manner without creating bottlenecks on physical appliances.
The firewall is distributed in the kernel and hence has minimal CPU overhead and can perform at line rate.

Logical Load Balancing - Support for L4-L7 load balancing with the ability to do SSL termination.

SSL VPN - Services to enable L2 VPN.

Why Deploy Brocade Network Fabric with VMware NSX

Open and reliable infrastructure: Brocade, as a leader in the networking and data center space, has over a decade of experience building high-performance, reliable networks for the most demanding workloads and some of the world's largest data centers. Brocade VDX switches support both open standards and more capable options for customers deploying cloud-based architectures and the SDDC. For example, Brocade supports standard link aggregation for connection with legacy networking equipment, but also offers Brocade Trunking for more efficient utilization of links and higher performance. Equal Cost Multipath (ECMP) is also supported, providing predictable performance and resiliency across the network as a whole. By supporting industry standards, Brocade provides interoperability and consistency for customers, while still offering higher-level functionality for particularly intensive SDDC environments that other network vendors do not.

Agile: Brocade networks are highly agile and can start as simply as one switch, which provides the foundation for a running image of the network. As additional network elements are added, they inherit the running configuration. This provides a level of automation that allows users to scale their SDDC without having to configure each element. By leveraging ECMP, trunking, and fabric elasticity, it eliminates architectural complexity, from small enterprise deployments to large multi-tenant cloud provider environments.
With the ability to support up to 8,000 physical ports in a single domain and up to 384,000 MAC addresses in a single chassis you can build massively scalable Virtual environments that provide zero touch VM discovery, network configuration and VM mobility. VCS Fabric automation provides self- healing and self- provisioning capability that allows for customers to reduce up to 50% of the operational cost associated with traditional networks. This allows VMware customers to focus on the managing virtualized applications and infrastructure instead of the physical underlay. 5
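The ECMP and trunking behavior described above comes down to per-flow hashing: fields from each packet's headers are hashed, and the hash selects one of the equal-cost links, so a single flow stays on one path (preserving packet order) while many flows spread across all paths. A simplified sketch of the idea (real switch ASICs use different hash functions; this is illustrative only):

```python
import zlib

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: int, num_links: int) -> int:
    """Hash a flow's 5-tuple onto one of num_links equal-cost links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % num_links

# A given flow always maps to the same link (no reordering within a flow),
# while many distinct flows spread across all eight links.
flows = [("10.0.0.1", "10.0.1.1", 40000 + i, 80, 6) for i in range(1000)]
usage = [0] * 8
for f in flows:
    usage[pick_link(*f, num_links=8)] += 1
```

With a reasonable hash and many flows, each of the eight links carries a share of the traffic, which is how the fabric uses all links instead of blocking redundant ones as STP would.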

Efficient: Brocade VCS fabrics support Equal Cost Multipath (ECMP) and make use of all links in the network, with multipathing and traffic load balancing at all layers. Brocade VDX switches provide the industry's deepest buffers, allowing customers to be confident that even when bursts of traffic occur at peak times, the network can minimize latency and packet loss. By supporting 10/40/100 GbE ports and efficient Layer 1-3 load balancing, Brocade networks ensure proper performance for even the largest, most demanding environments.

Highly manageable: Proactive network monitoring helps minimize business disruption by focusing on early indicators. With Brocade's support for sFlow monitoring and integration with VMware vRealize Operations and vRealize Operations Insight, users can understand where traffic is traversing the fabric, where bandwidth is most heavily consumed, and, most importantly, where potential hot spots are forming.

Design Considerations for VMware NSX and Brocade Network Fabric

VMware NSX network virtualization can be deployed over existing data center networks. In this section, we discuss how logical overlay networks using VXLAN encapsulation can be deployed over common data center network topologies. We first address requirements for the physical network and then look at the network designs that are optimal for network virtualization. Finally, the logical networks, related services, and scale considerations are explained.

Design Considerations for Brocade Network Fabric

Brocade VCS Fabric and VDX Switches

Brocade VCS fabrics provide advanced Ethernet fabric technology, eliminating many of the drawbacks of classic Ethernet networks in the data center. In addition to standard Ethernet fabric benefits, such as logically flat networks without the need for Spanning Tree Protocol (STP), Brocade VCS Fabric technology also brings advanced automation with logically centralized management.
Brocade VCS Fabric technology includes unique services that are ideal for simplifying traffic in a cloud data center, such as scalable network multi-tenancy capabilities, automated VM connectivity, and highly efficient multipathing at Layers 1, 2, and 3 with multiple Layer 3 gateways. The VCS architecture conforms to the Brocade strategy of revolution through evolution: Brocade VDX switches with Brocade VCS Fabric technology connect seamlessly with existing data center Ethernet products, whether offered by Brocade or other vendors. At the same time, the VCS architecture allows newer data center solutions to be integrated quickly. For example, Brocade VDX switches are hardware-enabled to support emerging SDN protocols, such as Virtual Extensible LAN (VXLAN). Logical Chassis technology and northbound Application Programming Interfaces (APIs) provide operationally scalable management and access to emerging management frameworks such as VMware vRealize Automation (vRA, previously known as vCloud Automation Center, vCAC).

Scalable Brocade VCS Fabrics

Brocade VCS fabrics offer dramatic improvements over the inefficiencies, inherent limitations, and management challenges of classic hierarchical Ethernet networks. Implemented on Brocade VDX switches, Brocade VCS fabrics drastically simplify the deployment and management of scale-out architectures. Brocade VCS fabrics are elastic, self-forming, and self-healing, allowing administrators to focus on service delivery instead of basic network operations and administration. All-active connections and load balancing throughout Layers 1-3 provide resilience that is not artificially hampered by arbitrary limitations at any network layer. The distributed control plane ensures that all nodes are aware of the health and state of their peers and forward traffic accordingly across the shortest path in the topology.
Nodes can be added and removed non-disruptively, automatically inheriting predefined configurations and forming new links upon entry or removal of a node. Brocade VCS fabrics offer uniform, multidimensional scalability that enables the broadest diversity of deployment scenarios and operational flexibility. Large or small, Brocade VCS fabrics work and act the same, offering operational efficiencies that span a very wide range of deployed configurations and requirements.

Brocade VCS fabrics are easy to manage, with a shared control plane and unified management plane that allow the fabric nodes to function, and to be managed, as a single entity, regardless of fabric size. Open APIs and OpenStack support facilitate orchestration of VCS fabrics within the on-demand data center. Brocade VCS fabrics offer considerable scale and capacity, as shown in Table 1.

Table 1 Brocade VCS Fabric Scalability
  Number of switches in a cluster: up to 32
  Number of ports in a cluster: 8,000
  Switching fabric capacity: Tbps
  Data forwarding capacity: 7.7 Tbps
  MAC addresses: 384,000
  Maximum ports per switch: 384 x 10 GbE, or 216 x 40 GbE, or 48 x 100 GbE

Flexible Brocade VCS Fabric Building Blocks for Easy Migration

Brocade VCS fabrics can be deployed as one large single domain, or multiple smaller fabric domains can be configured to suit either application needs or administrative boundaries (see Figure 3). A single larger domain affords a simple, highly efficient configuration that avoids STP while smoothly supporting the significant east-west traffic common to modern applications. Data Center Bridging (DCB) is supported on all nodes, allowing for unified storage access over Ethernet. Multiple Brocade VCS domains can be configured to easily scale out the data center, while offering multiple active Layer 3 gateways, contained failure domains, and MAC address scalability, all while avoiding STP.

Figure 3 Brocade VCS fabrics easily accommodate a wide range of configurations, from a single large VCS domain to multiple smaller domains.

Brocade VDX Switches Discussed in This Guide

Brocade VDX 6720 Switch - Available in both 1U and 2U versions, the Brocade VDX 6720 provides either 24 (1U) or 60 (2U) 1/10 GbE SFP+ ports, which can be acquired with the flexible and innovative Brocade Ports on Demand (PoD) licensing.

Brocade VDX 6730 Switch - The Brocade VDX 6730 adds Fibre Channel (FC) support, with the 1U version offering 24 1/10 GbE SFP+

ports and eight 8 gigabit-per-second (Gbps) FC ports, and the 2U version offering 60 1/10 GbE SFP+ ports and sixteen 8 Gbps FC ports. The Brocade VDX 6730 also supports PoD licensing.

Brocade VDX 6740 Switch - The Brocade VDX 6740 offers 48 10 GbE SFP+ ports and four 40 GbE quad SFP+ (QSFP+) ports in a 1U form factor. Each 40 GbE QSFP+ port can be broken out into four independent 10 GbE SFP+ ports, providing an additional sixteen 10 GbE SFP+ ports, which can be licensed with Ports on Demand.

Brocade VDX 8770 Switch - Available in 4-slot and 8-slot versions, the 100 GbE-ready Brocade VDX 8770 dramatically increases the scale that can be achieved in Brocade VCS fabrics, with 10 and 40 Gigabit Ethernet wire-speed switching, numerous line card options, and the ability to connect over 8,000 server ports in a single switching domain.

As shown in Figure 4, organizations can easily deploy Brocade VCS fabrics at the access layer, incrementally expanding the fabric over time. As the Brocade VCS fabric expands, existing network infrastructure can remain in place, if desired. Eventually, the advantages of VCS fabrics can extend to the aggregation layer, delivering the benefits of Brocade VCS Fabric technology to the entire enterprise while allowing legacy aggregation switches to be redeployed elsewhere. Alternatively, a VCS fabric can be implemented initially in the aggregation tier, leaving existing access-tier switches in place.

Figure 4 Incremental deployment of Brocade VCS Fabric in a brownfield environment

Mixed Switch Fabric Design

For dense server deployments and highly virtualized environments, multiple Brocade VDX switch types can be combined to form one single VCS fabric, leveraging the administrative simplicity of a single logical chassis.
For instance, a small and cost-effective Brocade VCS fabric can be piloted using the family of Brocade VDX 6700 products alone and eventually scaled out using the Brocade VDX 8770 as the fabric grows and the organization moves toward deploying larger virtualized environments and cloud services. The configuration shown in Figure 5 is a typical leaf-spine fabric. Here the leaf (access) layer uses Brocade VDX 6740 switches to provide redundant Gigabit Ethernet access to servers, along with redundant 10 Gigabit Ethernet links to the spine layer, which uses Brocade VDX 8770s; the core layer uses Brocade MLX Series switches with MCT technology (for more information on Brocade Multi-Chassis Trunking, please see Chassis_Trunking_WP.pdf). Note that Brocade VDX 8770 switches can be deployed at both the spine and leaf layers. When used as a leaf switch, the Brocade

VDX 8770 switch greatly expands the VCS fabric, providing high-volume connectivity for large numbers of servers. When used at the spine, the Brocade VDX 8770 can provide Layer 3 routing capabilities. Deploying Layer 3 routing at the spine layer shields the core switches from unnecessary routing traffic, enabling additional network scale and enhancing application performance. Multiple active Layer 3 gateways at the spine layer provide high availability through an architecturally hierarchical, but logically flat, network.

Figure 5 The Brocade VDX 8770 switch can be added to existing small- to medium-scale VCS fabrics at both the leaf and spine for additional scale while containing Layer 3 traffic.

For data-intensive applications with very low latency requirements, the Brocade VDX 8770 switch can be paired with Brocade VDX 6730 switches for connecting to FC arrays, as shown in Figure 6. This highly redundant, dual-fabric configuration offers the benefits of both Brocade VCS fabrics and FC fabrics.

Figure 6 Highly redundant, dual-fabric design for an HPC environment can consolidate multiple data stores into a single managed service.

Multi-fabric Designs

The Brocade VDX 8770 can be used to accomplish phased data center deployments of VCS fabrics, or to achieve truly massive scalability through multi-fabric Brocade VCS deployments. By deploying the Brocade VDX 8770 switch as a spine switch, multiple fabrics can be interconnected to provide additional scale and Layer 3 flexibility. Figure 7 illustrates separate fabrics built from Brocade VDX 6740 and 8770 switches. As shown, Virtual LAGs (vLAGs) connect the separate fabric domains, using both 40 GbE connections and 10 GbE DCB connections for storage access.

Note: Link aggregation allows you to bundle multiple physical Ethernet links to form a single logical trunk, providing enhanced performance and redundancy. The aggregated trunk is referred to as a Link Aggregation Group (LAG). Virtual LAG (vLAG) is a feature included in Brocade VCS Fabric technology that extends the concept of a LAG to include edge ports on multiple VCS switches.
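The redundancy a LAG or vLAG provides can be seen in the same per-flow hashing terms: traffic is spread across the member links, and when a member fails, the surviving links absorb its flows. A toy illustration of that redistribution (this models the concept, not actual switch behavior; link names and flow IDs are made up):

```python
import zlib

def hash_flow(flow: str, links: list) -> str:
    """Map a flow to one active member link of the LAG."""
    return links[zlib.crc32(flow.encode()) % len(links)]

members = ["eth1", "eth2", "eth3", "eth4"]
flows = [f"flow-{i}" for i in range(100)]
before = {f: hash_flow(f, members) for f in flows}

# Member link eth3 fails: every flow is re-hashed onto the survivors,
# so all traffic keeps flowing over the remaining three links.
survivors = [m for m in members if m != "eth3"]
after = {f: hash_flow(f, survivors) for f in flows}
```

No flow is left stranded on the failed link; the trade-off is reduced aggregate bandwidth until the member is restored.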

Figure 7 Separate fabrics built from Brocade VDX 6700 and 8770 switches, with a Brocade MLX Series core providing 1 Gbps, 10 Gbps, and DCB connectivity, and 10 Gbps DCB links to FCoE/iSCSI storage.

Deploying the Brocade VDX 8770 and VCS Fabrics at the Classic Aggregation Layer

Many medium-to-large data centers are looking for opportunities to move toward cloud computing solutions while realizing the benefits of Ethernet fabrics. Often these organizations need to improve the performance of their existing networks, but they also want to protect investments in existing networking technology. Even in traditional hierarchical deployment scenarios, the combination of the Brocade VDX 8770 switch and Brocade VCS Fabric technology can offer significant benefits in terms of future-proofing the network, advancing network convergence, and offering a migration path to 40 GbE and, eventually, 100 GbE technologies. The Brocade VDX 8770 switch can provide many advantages, especially for those organizations that are tied to a tiered network architecture for now but want to deploy a hybrid architecture for investment protection. Deploying a Brocade VCS fabric at the traditional aggregation layer can dramatically improve the performance of the existing network, while protecting both investments in existing infrastructure and new investments in Brocade VCS technology.

Advantages of deploying Brocade VCS Fabric technology at the traditional aggregation layer include:

Multiple Layer 3 gateways for redundancy and optimal load balancing
Standard Layer 2 and Layer 3 functionality
Wire-speed performance
High-density 10 GbE, 40 GbE, and 100 GbE
~4 µsec latency within the VCS fabric
Resiliency through high availability
Reduced demand on core switches for east-west traffic

Figure 8 Dual Brocade VDX 8770 switches configured as a VCS fabric at the aggregation/distribution layer convey many benefits to traditional tiered networks.

VCS Fabric Building Blocks

Multiple data center templates can be defined, tested, and deployed from a common set of building blocks. This promotes reusability of building blocks (and technologies), reduces testing, and simplifies support. A VCS Fabric flattens the network: within a single fabric, both Layer 2 and Layer 3 switching are available on any or all switches. A VCS Fabric of ToR switches can be configured to create a Layer 2 fabric with Layer 2 links to an aggregation block. In this set of building blocks, the aggregation and access switching are combined into a single VCS Fabric of VDX switches. A single fabric is a single logical management domain, simplifying configuration of the network.

VCS Fabric Topologies

Fabric topology is also flexible. For example, a leaf-spine topology is a good design choice for virtualized data centers where consistent low latency and consistent bandwidth are required between end devices. Fabric resiliency is automatic, so link or port failures on inter-switch links or Brocade ISL Trunks are detected and traffic is automatically rerouted onto the remaining least-cost paths. Below is an example of a leaf-spine topology for a VCS Fabric.

Figure 9 Leaf-Spine VCS Fabric Topology with L3 at Spine

Each leaf switch at the bottom is connected to all spine switches at the top. The connections are Brocade ISL Trunks, which can contain up to 16 links per trunk for resiliency. All servers can reach each other with two switch hops in between. As shown, all leaf switches operate at Layer 2, and the spine switches create the L2/L3 boundary. However, the L2/L3 boundary can be at the leaf switch as well, as shown below.

Figure 10 Leaf-Spine VCS Fabric Topology with L3 at Leaf

In this option, VLAN traffic is routed across the spine, and each leaf switch includes Layer 3 routing services. Brocade ISL Trunks continue to provide consistent latency and large cross-sectional bandwidth with link resiliency. However, ECMP at Layer 3 provides multipath forwarding, rather than ECMP at Layer 2. An alternative is a collapsed spine, typically using VDX 8770 switches, as shown below.
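Sizing a leaf-spine fabric like the ones above follows from simple arithmetic: each leaf reserves uplink ports for its trunks to every spine, the remaining ports face servers, and the ratio of server-facing to uplink bandwidth is the oversubscription ratio. A sketch with illustrative port counts (not specific to any VDX model; assumes all ports run at the same speed):

```python
def leaf_spine_sizing(leaf_ports: int, num_spines: int,
                      uplinks_per_spine: int, leaves: int):
    """Compute server capacity and oversubscription for a leaf-spine fabric."""
    uplinks = num_spines * uplinks_per_spine   # uplink ports used on each leaf
    server_ports = leaf_ports - uplinks        # ports left for servers per leaf
    total_servers = server_ports * leaves
    oversub = server_ports / uplinks           # same-speed ports assumed
    return total_servers, oversub

# e.g. 48-port leaves, 4 spines, a 2-link trunk to each spine, 16 leaves:
servers, ratio = leaf_spine_sizing(48, 4, 2, 16)
# -> 640 server ports at 5:1 oversubscription, and any server reaches
#    any other in two switch hops (leaf -> spine -> leaf).
```

Adding spines or widening the trunks lowers the oversubscription ratio at the cost of server-facing ports, which is the basic dial this topology gives the designer.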

Figure 11 Collapsed Spine VCS Fabric Topology

The VDX 8770 is a modular chassis switch with a high density of 10 GbE and/or 40 GbE ports. A collapsed spine topology can be an efficient building block for server virtualization with NAS storage pools. Multiple racks of virtualized servers and NAS servers are connected to a middle-of-row (MoR) or end-of-row (EoR) cluster of VDX 8770 switches. The collapsed spine topology lends itself to data center scale-out that relies on pods of compute, storage, and networking connected to a common data center routing core. For cloud computing environments, pod-based scale-out architectures are attractive.

The following describes several VCS Fabric building blocks.

VCS Fabric Leaf-Spine Topology

A VCS Fabric leaf-spine topology can be used to create a scalable fabric with consistent latency, high-bandwidth multipath switch links, and automatic link resiliency. This block forms the spine, with each spine switch connecting to all leaf switches. Fabric connections in red are Brocade ISL Trunks with up to 16 links per auto-forming trunk. Layer 2 traffic moves across the fabric, while Layer 3 traffic exits the fabric on ports configured for a routing protocol. As shown by the black arrows, uplinks to the core router would be routed, for example using OSPF. A connection to an IP Services block would also use Layer 3 ports on the spine switches. The blue links show Layer 2 ports that can be used to attach NAS storage to the spine switches. This option creates a topology for NAS storage that is similar to best practices for SAN storage fabrics based on a core/edge topology. For most applications, storage IOPS and bandwidth per server are less than a NAS port can service. An economical use of NAS ports, particularly when using 10 GbE ports, is to fan out multiple servers to each NAS port; attaching NAS storage nodes to the spine switches facilitates this architecture.

Figure 12 VCS Fabric, Spine Block, Leaf-Spine Topology
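The NAS fan-out argument above is also simple arithmetic: if each server's average storage traffic is far below what one NAS port can deliver, several servers can share that port. A hedged sketch (the traffic figures below are made-up inputs, not measurements):

```python
def nas_fan_out(nas_port_gbps: float, per_server_avg_gbps: float,
                headroom: float = 0.8) -> int:
    """Servers that can share one NAS port while staying under a headroom cap."""
    usable = nas_port_gbps * headroom   # reserve capacity for bursts
    return int(usable // per_server_avg_gbps)

# e.g. a 10 GbE NAS port, servers averaging 0.5 Gbps of storage traffic,
# keeping 20% headroom for bursts:
servers_per_port = nas_fan_out(10.0, 0.5)  # -> 16 servers per NAS port
```

In practice the fan-out ratio should be validated against measured IOPS and peak (not just average) bandwidth, but the calculation shows why attaching NAS to the spine, where many leaves converge, is economical.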

Collapsed Spine

This is a collapsed spine built from a two-switch VCS Fabric. Typically, high-port-count modular switches such as the VDX 8770 series would be used. This block works efficiently for data centers that scale out by replicating a pod of compute, storage, and networking. Each pod is connected via layer 3 routing to the data center core routers; local traffic within a pod does not transit the core routers, but inter-pod traffic does. The collapsed spine uses VRRP/VRRP-E for IP gateway resiliency, with the VCS Fabric providing layer 2 resiliency. As shown, the collapsed spine can be used effectively to connect a large number of compute nodes to NAS storage, as is common in cloud computing environments and data analytics configurations such as a Hadoop cluster. The blue arrows represent 10 GbE links that use vLAG for link resiliency within the VCS Fabric and NIC teaming for NAS server and compute server resiliency. As shown, IP Services blocks can be attached to the spine switches, providing good scalability for load balancing and IDS/IPS services.

Figure 13 VCS Fabric Spine: Collapsed Spine Topology

VDX switches deployed as leaf nodes can be combined with the VCS Fabric Spine block to form a leaf-spine topology; they can also be used to convert the collapsed spine block into a leaf-spine topology.
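The VRRP gateway resiliency mentioned above follows a simple election rule: the router with the highest priority becomes master, with ties broken by the highest interface IP address. The sketch below illustrates that rule; the switch names, priorities, and addresses are hypothetical, and real VRRP adds preemption and owner (priority 255) semantics not modeled here.

```python
import ipaddress

def elect_vrrp_master(routers):
    """Pick the VRRP master from (name, priority, interface_ip) tuples.

    Highest priority wins; equal priorities fall back to highest IP.
    """
    return max(routers,
               key=lambda r: (r[1], ipaddress.ip_address(r[2])))[0]

# A hypothetical two-switch collapsed spine:
spine = [("vdx8770-1", 110, "10.0.0.2"),   # preferred default gateway
         ("vdx8770-2", 100, "10.0.0.3")]   # backup
print(elect_vrrp_master(spine))            # vdx8770-1
# If the master fails, election over the survivors promotes the backup:
print(elect_vrrp_master(spine[1:]))        # vdx8770-2
```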

VMware NSX Network Design Considerations

Network virtualization consists of three major aspects: decoupling, reproduction, and automation. All three functions are vital in achieving the desired efficiencies. This section focuses on decoupling, which is key to simplifying and scaling the physical infrastructure. While the NSX network virtualization solution can be successfully deployed on top of different network topologies, the focus of this document is a routed access design in which the leaf/access nodes provide full L3 functionality. In this model, the network virtualization solution does not require VLANs to span beyond a single rack inside the switching infrastructure, and VM mobility is provided by the overlay network topology.

Designing for Scale and Future Growth

When designing a new environment, it is essential to choose an architecture that allows for future growth. The approach presented is intended for deployments that begin small, with the expectation of growth to a larger scale while retaining the same overall architecture. This network virtualization solution does not require spanning of VLANs beyond a single rack. Although this appears to be a simple requirement, eliminating VLAN spanning has widespread impact on how a physical switching infrastructure can be built and on how it scales. Note the following three types of racks within the infrastructure:
- Compute
- Edge
- Infrastructure

Figure 14 Data Center Design - Layer 3 in Access Layer

In Figure 14, to increase the resiliency of the architecture, Brocade recommends deploying a pair of ToR switches in each rack and leveraging technologies such as Brocade vLAG to dual-connect them to all the servers in the same rack.
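Because no VLAN spans beyond a rack, the fabric can grow by simply adding leaf racks until the spine ports are exhausted. The back-of-the-envelope sketch below shows that scaling bound; the port counts are assumed example values, and it ignores ISL trunk bundling (a trunk of several links per spine would divide the leaf count accordingly).

```python
def fabric_scale(spine_count: int, ports_per_spine: int, leaf_ports: int) -> int:
    """Total server-facing ports in a leaf-spine fabric.

    Each leaf uses one uplink per spine switch, so the number of leaves
    is bounded by the port count of a single spine.
    """
    max_leaves = ports_per_spine                      # one leaf port per spine
    server_ports_per_leaf = leaf_ports - spine_count  # remaining ports face servers
    return max_leaves * server_ports_per_leaf

# e.g. 4 spines with 32 fabric ports each, and 48-port leaf switches:
print(fabric_scale(4, 32, 48))  # 32 leaves x 44 server ports = 1408
```

Starting small (say, two spines and a handful of leaves) and growing toward this bound preserves the same overall architecture, which is the design goal stated above.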

Compute Racks

Compute racks are the section of the infrastructure where tenant virtual machines are hosted. Central design characteristics include:
- Interoperability with an existing network
- Repeatable rack design
- Connectivity for virtual machines without use of VLANs
- No requirement for VLANs to extend beyond a compute rack

A hypervisor typically sources three or more types of traffic. This example consists of VXLAN, management, vSphere vMotion, and storage traffic. VXLAN traffic is a new traffic type that carries all virtual machine communication, encapsulating it in a UDP frame. The following sections discuss how the hypervisors connect to the external network and how these different traffic types are commonly configured.

Connecting Hypervisors

The servers in the rack are connected to the access layer switch via a number of Gigabit Ethernet (1 GbE) or 10 GbE interfaces. The physical server NICs connect to the virtual switch on the other end. For best practices on how to connect the NICs to the virtual and physical switches, refer to the VMware vSphere Distributed Switch Best Practices technical white paper (distributed-switch-best-practices.pdf). The connections between each server in the rack and the leaf switch are usually configured as 802.1Q trunks. A significant benefit of deploying VMware NSX network virtualization is the drastic reduction in the number of VLANs carried on those trunk connections.

Figure 15. Example - Host and Leaf Switch Configuration in a Rack

In Figure 15, 802.1Q trunks carry only a few VLANs, each dedicated to a specific type of traffic (e.g., VXLAN tunnel, management, storage, VMware vSphere vMotion). The leaf switch terminates and provides default gateway functionality for each VLAN; it has a switch virtual interface (SVI or RVI) for each VLAN. This enables logical isolation and clear separation from an IP addressing standpoint. The hypervisor leverages multiple routed interfaces (VMkernel NICs) to source the different types of traffic. Please refer to the VLAN Provisioning section for additional configuration and deployment considerations for VMkernel interfaces.

VXLAN Traffic

After the vSphere hosts have been prepared for network virtualization using VXLAN, a new traffic type is enabled on the hosts. Virtual machines connected to one of the VXLAN-based logical layer 2 networks use this traffic type to communicate. The traffic from the virtual machine is encapsulated and sent out as VXLAN traffic; the external physical fabric never detects the virtual machine IP or MAC address. The virtual tunnel endpoint (VTEP) IP address is used to transport the frame across the fabric, and in the case of VXLAN the tunnels are initiated and terminated by a VTEP. Traffic that flows between virtual machines in the same data center is typically referred to as east-west traffic; for this type of traffic, both the source and destination VTEPs are situated in hypervisors located in compute racks. Traffic leaving the data center flows between a tenant virtual machine and an NSX Edge and is referred to as north-south traffic. VXLAN configuration requires an NSX-prepared VDS. One requirement of a single-VDS design is that the same VLAN ID is defined for each hypervisor to source VXLAN-encapsulated traffic (VLAN ID 88 in the example in Figure 15). Because a VDS can span hundreds of hypervisors, it can reach beyond a single leaf switch.
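The encapsulation described above can be made concrete with a minimal sketch of the VXLAN header per RFC 7348: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), prepended to the original L2 frame inside a UDP datagram (destination port 4789). The VNI value 5001 is an arbitrary example.

```python
import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN
VXLAN_FLAGS = 0x08      # "I" bit set: the VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    assert 0 <= vni < 2**24
    # word 1: flags byte + 24 reserved bits; word 2: VNI + 8 reserved bits
    return struct.pack("!II", VXLAN_FLAGS << 24, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Recover the VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = vxlan_header(5001)
print(len(hdr), vxlan_vni(hdr))  # 8 5001
```

Only the outer IP header (VTEP to VTEP) and this UDP/VXLAN wrapper are visible to the physical fabric, which is why the fabric never learns the virtual machine's IP or MAC address.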
Note that the use of the same VLAN ID does not mean that the different VTEPs across hypervisors are necessarily in the same broadcast domain (i.e., VLAN); it simply means they encapsulate their traffic using the same VLAN ID. The host VTEPs, even if they are on the same VDS, can use IP addresses in different subnets, thus offering the capability to leverage an end-to-end L3 fabric.

Management Traffic

Management traffic can be categorized into two types: traffic sourced and terminated by the management VMkernel interface on the host, and traffic involved in the communication between the various NSX components. The traffic carried over the management VMkernel interface of a host includes the communication between vCenter Server and hosts, as well as communication with other management tools such as NSX Manager. The communication between the NSX components includes the heartbeat between active and standby Edge appliances. Management traffic stays inside the data center. A single VDS can span multiple hypervisors deployed beyond a single leaf switch; the management interfaces of hypervisors participating in a common VDS and connected to separate leaf switches can reside in the same or in separate subnets.

vSphere vMotion Traffic

During the vSphere vMotion migration process, the running state of a virtual machine is transferred over the network to another host. The vSphere vMotion VMkernel interface on each host is used to move this virtual machine state, and each such interface is assigned an IP address. The number of simultaneous vMotion migrations that can be performed is limited by the speed of the physical NIC; on a 10 GbE NIC, eight simultaneous vSphere vMotion migrations are allowed. Note: VMware has previously recommended deploying all the VMkernel interfaces used for vMotion as part of a common IP subnet.
This is not possible when designing a network for network virtualization using layer 3 at the access layer, where it is mandatory to select different subnets in different racks for those VMkernel interfaces. Until VMware officially relaxes this restriction, it is recommended that customers requiring vMotion in such a design go through VMware's Request for Product Qualification (RPQ) process so that the customer's design can be validated on a case-by-case basis.

Storage Traffic

A VMkernel interface is used to provide access to shared or non-directly attached storage. Typically this is storage that can be attached via an IP connection (e.g., NAS, iSCSI) rather than FC or FCoE. The same rules that apply to management traffic apply to storage VMkernel interfaces for IP address assignment. The storage VMkernel interface of each server inside a rack (i.e., connected to a

leaf switch) is part of the same subnet. This subnet cannot span beyond the leaf switch; therefore, the storage VMkernel interface IP of a host in a different rack is in a different subnet. For an example of the IP addresses for these VMkernel interfaces, refer to the VLAN Provisioning section.

Edge Racks

Tighter interaction with the physical infrastructure occurs while bridging between the overlay world and the physical infrastructure. The main functions provided by an edge rack include:
- Providing on-ramp and off-ramp connectivity to physical networks
- Connecting with VLANs in the physical world
- Hosting centralized physical services

Tenant-specific addressing is exposed to the physical infrastructure wherever traffic is not encapsulated in VXLAN (e.g., when NAT is not used at the edge). In the case of a layer 3 edge, the IP addresses within the overlay are exposed to the physical fabric. The guiding principle in these cases is to separate VXLAN (overlay) traffic from un-encapsulated (native) traffic. As shown in Figure 16, VXLAN traffic hits the data center internal Ethernet switching infrastructure, while native traffic traverses a dedicated switching and routing infrastructure facing the WAN or Internet and is completely decoupled from the data center internal network.

Figure 16. VXLAN Traffic and the Data Center Internal Ethernet Switching Infrastructure

To maintain this separation, NSX Edge virtual machines can be placed in NSX Edge racks, assuming the NSX Edge has at least one native interface. For routing and high availability, the two interface types (overlay and native) must be examined individually. The failover mechanism is based on the active-standby model, where the standby Edge takes over after detecting the failure of the active Edge.

Layer 3 NSX Edge Deployment Considerations

When deployed to provide layer 3 routing services, the NSX Edge terminates all logical networks and presents a layer 3 hop between the physical and the logical world.
Depending on the use case, either NAT or static/dynamic routing may be used to provide connectivity to the external network. In order to provide redundancy, each tenant should deploy an HA-redundant pair of NSX Edge devices. There are three models of HA redundancy supported by NSX: Stateful Active/Standby HA, Standalone, and ECMP, with the last one

representing newer functionality introduced from NSX software release 6.1 onward. The figure below shows the reference topology that will be used to describe the various HA models for NSX Edge router deployment between the Distributed Logical Router (DLR) and the physical network.

Figure 17 : Reference Topology for NSX Edge HA models

The next three sections briefly illustrate the HA models mentioned above.

Stateful Active/Standby HA Model

In this redundancy model, a pair of NSX Edge Services Gateways is deployed for each tenant: one Edge functions in Active mode (i.e., it actively forwards traffic and provides the other logical network services), whereas the second unit is in Standby state, waiting to take over should the active Edge fail. Health and state information for the various logical network services is exchanged between the active and standby NSX Edges via an internal communication protocol. By default, the first vNIC interface of type Internal deployed on the Edge is used to establish this communication, but the user can also explicitly specify which Edge internal interface to use. Note: it is mandatory to have at least one Internal interface configured on the NSX Edge to be able to exchange keepalives between the Active and Standby units; deleting the last Internal interface would break this HA model. Figure 18 below highlights how the Active NSX Edge is active from both a control and a data plane perspective. If the Active NSX Edge fails (for example, because of an ESXi host failure), both control and data planes must be activated on the Standby unit, which takes over the active duties.

Figure 18 : NSX Edge Active Standby HA Model (left) and Traffic Recovery (right)

Standalone HA Model (NSX 6.0.x Releases)

The standalone HA model inserts two independent NSX Edge appliances between the DLR and the physical network and is supported when running NSX 6.0.x software releases.

Figure 19 : NSX Edge Standalone HA Model

In this case, both NSX Edge devices are active from both a data and a control plane point of view and can establish routing adjacencies with the physical router and the DLR Control VM. However, in all 6.0.x NSX software releases the DLR does not support Equal-Cost Multi-Pathing. As a consequence, even when receiving routing information from both

NSX Edges for IP prefixes existing in the physical network, the DLR installs only one possible next-hop (active path) in its forwarding table. This implies that all traffic in the south-to-north direction flows through a single NSX Edge and cannot leverage both appliances. Traffic load balancing may instead happen in the north-to-south direction, since most physical routers and switches are ECMP-capable by default.

Figure 20 : Traffic Flows with Standalone HA model

ECMP HA Model (NSX 6.1 Release Onward)

NSX software release 6.1 introduces support for a new Active/Active ECMP HA model, which can be considered the improved and evolved version of the previously described Standalone model.

Figure 21 : NSX Edge ECMP HA Model (Left) and Traffic Recovery after Edge Failure (right)

In the ECMP model, the DLR and NSX Edge functionality has been improved to support up to 8 equal-cost paths in the forwarding table. Focusing on the ECMP capabilities of the DLR, this means that up to 8 active NSX Edges can be deployed at the same time, and all the available control and data planes are fully utilized, as shown in Figure 21. This HA model provides two main advantages:
1. Increased available bandwidth for north-south communication (up to 80 Gbps per tenant).
2. A reduced traffic outage (in terms of percentage of affected flows) in NSX Edge failure scenarios.

Notice from the diagram in Figure 21 that traffic flows are very likely to follow an asymmetric path, where the north-to-south and south-to-north legs of the same communication are handled by different NSX Edge Gateways. The DLR distributes south-to-north traffic flows across the various equal-cost paths based on a hash of the source and destination IP addresses of the original packet sourced by the workload in logical space. How the physical router distributes north-to-south flows depends instead on the specific hardware capabilities of that device. Traffic recovery after an NSX Edge failure happens in a fashion similar to the standalone HA model: the DLR and the physical routers must quickly time out the adjacency to the failed unit and re-hash the traffic flows via the remaining active NSX Edge Gateways.
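The hashing behavior described above can be sketched as follows. This is an illustration of the concept only: the CRC32 hash and the edge names are stand-ins, not NSX's actual algorithm, and the IP addresses are arbitrary examples.

```python
import zlib

def pick_edge(src_ip: str, dst_ip: str, edges: list) -> str:
    """Deterministically map a flow (src, dst) to one of the active edges."""
    key = f"{src_ip}->{dst_ip}".encode()
    return edges[zlib.crc32(key) % len(edges)]

edges = [f"edge-{i}" for i in range(1, 9)]   # up to 8 equal-cost next-hops
flow = ("172.16.10.5", "198.51.100.7")

first = pick_edge(*flow, edges)
assert first == pick_edge(*flow, edges)      # same flow always hits the same edge

# After an edge failure, the flow is re-hashed over the survivors:
survivors = [e for e in edges if e != first]
print(pick_edge(*flow, survivors))
```

Because each flow hashes to one path, only the fraction of flows pinned to the failed edge is disrupted, which is the reduced-outage advantage listed above.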

Infrastructure Racks

Infrastructure racks host the management components, including vCenter Server, NSX Manager, NSX Controller, CMP, and other shared IP storage-related components. It is key that this portion of the infrastructure does not have any tenant-specific addressing. If bandwidth-intensive infrastructure services (IP-based storage, for example) are placed in these racks, their bandwidth can be dynamically scaled, as discussed in the High Bandwidth subsection of the Data Center Fabric Attributes section.

VLAN Provisioning

Every compute rack has four different subnets, each supporting a different traffic type: tenant (VXLAN), management, vSphere vMotion, and storage traffic. Provisioning of IP addresses to the VMkernel NICs for each traffic type is automated using vSphere host profiles. The host profile feature enables creation of a reference host with properties that are shared across the deployment. After this host has been identified and the required sample configuration performed, a host profile can be created and applied across the deployment, allowing quick configuration of a large number of hosts. As shown in Figure 22, the same set of four VLANs (storage, vSphere vMotion, VXLAN, management) is provided in each rack.

Figure 22 : Host Infrastructure Traffic Types and IP address Assignment

Table 2 IP Address Management and VLANs
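The per-rack addressing scheme can be sketched as a small plan generator: four traffic types, each with its own VLAN and a distinct /24 per rack. The VLAN ID 88 for VXLAN matches the earlier Figure 15 example; the other VLAN IDs and the 10.66.0.0/16 supernet are hypothetical values chosen for illustration.

```python
import ipaddress

TRAFFIC = ["management", "vmotion", "vxlan", "storage"]
VLANS = {"management": 66, "vmotion": 77, "vxlan": 88, "storage": 99}

def rack_plan(rack: int, supernet: str = "10.66.0.0/16") -> dict:
    """Return {traffic_type: (vlan_id, subnet)} for one compute rack.

    Rack r consumes the r-th group of four /24s out of the supernet,
    one per traffic type, so no subnet ever spans two racks.
    """
    subnets = list(ipaddress.ip_network(supernet).subnets(new_prefix=24))
    return {t: (VLANS[t], subnets[rack * len(TRAFFIC) + i])
            for i, t in enumerate(TRAFFIC)}

plan = rack_plan(0)
print(plan["vxlan"])  # (88, IPv4Network('10.66.2.0/24'))
```

A host profile applied to each rack would then assign VMkernel interface addresses out of that rack's four subnets, keeping the same VLAN IDs everywhere while the subnets stay rack-local.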


More information

On-Demand Infrastructure with Secure Networks REFERENCE ARCHITECTURE

On-Demand Infrastructure with Secure Networks REFERENCE ARCHITECTURE REFERENCE ARCHITECTURE Table of Contents Executive Summary.... 3 Audience.... 3 Overview.... 3 What Is an On-Demand Infrastructure?.... 4 Architecture Overview.... 5 Cluster Overview.... 8 Management Cluster...

More information

SOFTWARE DEFINED NETWORKING

SOFTWARE DEFINED NETWORKING SOFTWARE DEFINED NETWORKING Bringing Networks to the Cloud Brendan Hayes DIRECTOR, SDN MARKETING AGENDA Market trends and Juniper s SDN strategy Network virtualization evolution Juniper s SDN technology

More information

Enterasys Data Center Fabric

Enterasys Data Center Fabric TECHNOLOGY STRATEGY BRIEF Enterasys Data Center Fabric There is nothing more important than our customers. Enterasys Data Center Fabric Executive Summary Demand for application availability has changed

More information

Juniper Networks QFabric: Scaling for the Modern Data Center

Juniper Networks QFabric: Scaling for the Modern Data Center Juniper Networks QFabric: Scaling for the Modern Data Center Executive Summary The modern data center has undergone a series of changes that have significantly impacted business operations. Applications

More information

SINGLE-TOUCH ORCHESTRATION FOR PROVISIONING, END-TO-END VISIBILITY AND MORE CONTROL IN THE DATA CENTER

SINGLE-TOUCH ORCHESTRATION FOR PROVISIONING, END-TO-END VISIBILITY AND MORE CONTROL IN THE DATA CENTER SINGLE-TOUCH ORCHESTRATION FOR PROVISIONING, END-TO-END VISIBILITY AND MORE CONTROL IN THE DATA CENTER JOINT SDN SOLUTION BY ALCATEL-LUCENT ENTERPRISE AND NEC APPLICATION NOTE EXECUTIVE SUMMARY Server

More information

Implementing and Troubleshooting the Cisco Cloud Infrastructure **Part of CCNP Cloud Certification Track**

Implementing and Troubleshooting the Cisco Cloud Infrastructure **Part of CCNP Cloud Certification Track** Course: Duration: Price: $ 4,295.00 Learning Credits: 43 Certification: Implementing and Troubleshooting the Cisco Cloud Infrastructure Implementing and Troubleshooting the Cisco Cloud Infrastructure**Part

More information

HAWAII TECH TALK SDN. Paul Deakin Field Systems Engineer

HAWAII TECH TALK SDN. Paul Deakin Field Systems Engineer HAWAII TECH TALK SDN Paul Deakin Field Systems Engineer SDN What Is It? SDN stand for Software Defined Networking SDN is a fancy term for: Using a controller to tell switches where to send packets SDN

More information

Core and Pod Data Center Design

Core and Pod Data Center Design Overview The Core and Pod data center design used by most hyperscale data centers is a dramatically more modern approach than traditional data center network design, and is starting to be understood by

More information

REMOVING THE BARRIERS FOR DATA CENTRE AUTOMATION

REMOVING THE BARRIERS FOR DATA CENTRE AUTOMATION REMOVING THE BARRIERS FOR DATA CENTRE AUTOMATION The modern data centre has ever-increasing demands for throughput and performance, and the security infrastructure required to protect and segment the network

More information

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center Expert Reference Series of White Papers Planning for the Redeployment of Technical Personnel in the Modern Data Center info@globalknowledge.net www.globalknowledge.net Planning for the Redeployment of

More information

VXLAN Overlay Networks: Enabling Network Scalability for a Cloud Infrastructure

VXLAN Overlay Networks: Enabling Network Scalability for a Cloud Infrastructure W h i t e p a p e r VXLAN Overlay Networks: Enabling Network Scalability for a Cloud Infrastructure Table of Contents Executive Summary.... 3 Cloud Computing Growth.... 3 Cloud Computing Infrastructure

More information

Simplify VMware vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters

Simplify VMware vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters WHITE PAPER Intel Ethernet 10 Gigabit Server Adapters vsphere* 4 Simplify vsphere* 4 Networking with Intel Ethernet 10 Gigabit Server Adapters Today s Intel Ethernet 10 Gigabit Server Adapters can greatly

More information

Software Defined Network (SDN)

Software Defined Network (SDN) Georg Ochs, Smart Cloud Orchestrator (gochs@de.ibm.com) Software Defined Network (SDN) University of Stuttgart Cloud Course Fall 2013 Agenda Introduction SDN Components Openstack and SDN Example Scenario

More information

Expert Reference Series of White Papers. VMware vsphere Distributed Switches

Expert Reference Series of White Papers. VMware vsphere Distributed Switches Expert Reference Series of White Papers VMware vsphere Distributed Switches info@globalknowledge.net www.globalknowledge.net VMware vsphere Distributed Switches Rebecca Fitzhugh, VCAP-DCA, VCAP-DCD, VCAP-CIA,

More information

Virtual Fibre Channel for Hyper-V

Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest

More information

Networking in the Era of Virtualization

Networking in the Era of Virtualization SOLUTIONS WHITEPAPER Networking in the Era of Virtualization Compute virtualization has changed IT s expectations regarding the efficiency, cost, and provisioning speeds of new applications and services.

More information

VMware vcloud Networking and Security

VMware vcloud Networking and Security VMware vcloud Networking and Security Efficient, Agile and Extensible Software-Defined Networks and Security BROCHURE Overview Organizations worldwide have gained significant efficiency and flexibility

More information

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 Network Virtualization Overview... 1 Network Virtualization Key Requirements to be validated...

More information

VMDC 3.0 Design Overview

VMDC 3.0 Design Overview CHAPTER 2 The Virtual Multiservice Data Center architecture is based on foundation principles of design in modularity, high availability, differentiated service support, secure multi-tenancy, and automated

More information

Switching Fabric Designs for Data Centers David Klebanov

Switching Fabric Designs for Data Centers David Klebanov Switching Fabric Designs for Data Centers David Klebanov Technical Solutions Architect, Cisco Systems klebanov@cisco.com @DavidKlebanov 1 Agenda Data Center Fabric Design Principles and Industry Trends

More information

Pluribus Netvisor Solution Brief

Pluribus Netvisor Solution Brief Pluribus Netvisor Solution Brief Freedom Architecture Overview The Pluribus Freedom architecture presents a unique combination of switch, compute, storage and bare- metal hypervisor OS technologies, and

More information

Fibre Channel over Ethernet in the Data Center: An Introduction

Fibre Channel over Ethernet in the Data Center: An Introduction Fibre Channel over Ethernet in the Data Center: An Introduction Introduction Fibre Channel over Ethernet (FCoE) is a newly proposed standard that is being developed by INCITS T11. The FCoE protocol specification

More information

Building the Virtual Information Infrastructure

Building the Virtual Information Infrastructure Technology Concepts and Business Considerations Abstract A virtual information infrastructure allows organizations to make the most of their data center environment by sharing computing, network, and storage

More information

Creating a VMware Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5

Creating a VMware Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5 Software-Defined Data Center REFERENCE ARCHITECTURE VERSION 1.5 Table of Contents Executive Summary....4 Audience....4 Overview....4 VMware Software Components....6 Architectural Overview... 7 Cluster...

More information

Fabrics that Fit Matching the Network to Today s Data Center Traffic Conditions

Fabrics that Fit Matching the Network to Today s Data Center Traffic Conditions Sponsored by Fabrics that Fit Matching the Network to Today s Data Center Traffic Conditions In This Paper Traditional network infrastructures are often costly and hard to administer Today s workloads

More information

SummitStack in the Data Center

SummitStack in the Data Center SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution that Extreme Networks offers a highly virtualized, centrally manageable

More information

Nutanix Tech Note. VMware vsphere Networking on Nutanix

Nutanix Tech Note. VMware vsphere Networking on Nutanix Nutanix Tech Note VMware vsphere Networking on Nutanix Nutanix Virtual Computing Platform is engineered from the ground up for virtualization and cloud environments. This Tech Note describes vsphere networking

More information

Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013

Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013 Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center

More information

Networking Topology For Your System

Networking Topology For Your System This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.

More information

Transform Your Business and Protect Your Cisco Nexus Investment While Adopting Cisco Application Centric Infrastructure

Transform Your Business and Protect Your Cisco Nexus Investment While Adopting Cisco Application Centric Infrastructure White Paper Transform Your Business and Protect Your Cisco Nexus Investment While Adopting Cisco Application Centric Infrastructure What You Will Learn The new Cisco Application Centric Infrastructure

More information

Next Steps Toward 10 Gigabit Ethernet Top-of-Rack Networking

Next Steps Toward 10 Gigabit Ethernet Top-of-Rack Networking Next Steps Toward 10 Gigabit Ethernet Top-of-Rack Networking Important Considerations When Selecting Top-of-Rack Switches table of contents + Advantages of Top-of-Rack Switching.... 2 + How to Get from

More information

STATE OF THE ART OF DATA CENTRE NETWORK TECHNOLOGIES CASE: COMPARISON BETWEEN ETHERNET FABRIC SOLUTIONS

STATE OF THE ART OF DATA CENTRE NETWORK TECHNOLOGIES CASE: COMPARISON BETWEEN ETHERNET FABRIC SOLUTIONS STATE OF THE ART OF DATA CENTRE NETWORK TECHNOLOGIES CASE: COMPARISON BETWEEN ETHERNET FABRIC SOLUTIONS Supervisor: Prof. Jukka Manner Instructor: Lic.Sc. (Tech) Markus Peuhkuri Francesco Maestrelli 17

More information

VMware NSX for vsphere (NSX-V) Network Virtualization Design Guide

VMware NSX for vsphere (NSX-V) Network Virtualization Design Guide VMware NSX for vsphere (NSX-V) Network Virtualization Design Guide DESIGN GUIDE / 1 Intended Audience... 4 Overview... 4 Introduction to Network Virtualization... 5 Overview of NSX-v Network Virtualization

More information

VMware Virtual SAN Network Design Guide TECHNICAL WHITE PAPER

VMware Virtual SAN Network Design Guide TECHNICAL WHITE PAPER TECHNICAL WHITE PAPER Table of Contents Intended Audience.... 3 Overview.... 3 Virtual SAN Network... 3 Physical Network Infrastructure... 4 Data Center Network... 4 Host Network Adapter.... 5 Virtual

More information

Simplifying the Data Center Network to Reduce Complexity and Improve Performance

Simplifying the Data Center Network to Reduce Complexity and Improve Performance SOLUTION BRIEF Juniper Networks 3-2-1 Data Center Network Simplifying the Data Center Network to Reduce Complexity and Improve Performance Challenge Escalating traffic levels, increasing numbers of applications,

More information

RIDE THE SDN AND CLOUD WAVE WITH CONTRAIL

RIDE THE SDN AND CLOUD WAVE WITH CONTRAIL RIDE THE SDN AND CLOUD WAVE WITH CONTRAIL Pascal Geenens CONSULTING ENGINEER, JUNIPER NETWORKS pgeenens@juniper.net BUSINESS AGILITY Need to create and deliver new revenue opportunities faster Services

More information

Optimizing Data Center Networks for Cloud Computing

Optimizing Data Center Networks for Cloud Computing PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,

More information

Multi-Chassis Trunking for Resilient and High-Performance Network Architectures

Multi-Chassis Trunking for Resilient and High-Performance Network Architectures WHITE PAPER www.brocade.com IP Network Multi-Chassis Trunking for Resilient and High-Performance Network Architectures Multi-Chassis Trunking is a key Brocade technology in the Brocade One architecture

More information

Delivering the Software Defined Data Center

Delivering the Software Defined Data Center Delivering the Software Defined Data Center Georgina Schäfer Sr. Product Marketing Manager VMware Calvin Rowland, VP, Business Development F5 Networks 2014 VMware Inc. All rights reserved. F5 & Vmware

More information

DATA CENTER. Best Practices for High Availability Deployment for the Brocade ADX Switch

DATA CENTER. Best Practices for High Availability Deployment for the Brocade ADX Switch DATA CENTER Best Practices for High Availability Deployment for the Brocade ADX Switch CONTENTS Contents... 2 Executive Summary... 3 Introduction... 3 Brocade ADX HA Overview... 3 Hot-Standby HA... 4 Active-Standby

More information

Testing Software Defined Network (SDN) For Data Center and Cloud VERYX TECHNOLOGIES

Testing Software Defined Network (SDN) For Data Center and Cloud VERYX TECHNOLOGIES Testing Software Defined Network (SDN) For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 SDN - An Overview... 2 SDN: Solution Layers and its Key Requirements to be validated...

More information

Building Tomorrow s Data Center Network Today

Building Tomorrow s Data Center Network Today WHITE PAPER www.brocade.com IP Network Building Tomorrow s Data Center Network Today offers data center network solutions that provide open choice and high efficiency at a low total cost of ownership,

More information

Federated Application Centric Infrastructure (ACI) Fabrics for Dual Data Center Deployments

Federated Application Centric Infrastructure (ACI) Fabrics for Dual Data Center Deployments Federated Application Centric Infrastructure (ACI) Fabrics for Dual Data Center Deployments March 13, 2015 Abstract To provide redundancy and disaster recovery, most organizations deploy multiple data

More information

Software-Defined Networks Powered by VellOS

Software-Defined Networks Powered by VellOS WHITE PAPER Software-Defined Networks Powered by VellOS Agile, Flexible Networking for Distributed Applications Vello s SDN enables a low-latency, programmable solution resulting in a faster and more flexible

More information

Cisco Prime Network Services Controller. Sonali Kalje Sr. Product Manager Cloud and Virtualization, Cisco Systems

Cisco Prime Network Services Controller. Sonali Kalje Sr. Product Manager Cloud and Virtualization, Cisco Systems Cisco Prime Network Services Controller Sonali Kalje Sr. Product Manager Cloud and Virtualization, Cisco Systems Agenda Cloud Networking Challenges Prime Network Services Controller L4-7 Services Solutions

More information

Installation Guide Avi Networks Cloud Application Delivery Platform Integration with Cisco Application Policy Infrastructure

Installation Guide Avi Networks Cloud Application Delivery Platform Integration with Cisco Application Policy Infrastructure Installation Guide Avi Networks Cloud Application Delivery Platform Integration with Cisco Application Policy Infrastructure August 2015 Table of Contents 1 Introduction... 3 Purpose... 3 Products... 3

More information

Brocade Solution for EMC VSPEX Server Virtualization

Brocade Solution for EMC VSPEX Server Virtualization Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization VMware vsphere 5 for 50 & 100 Virtual Machines Enabled by VMware vsphere 5, Brocade ICX series switch,

More information

WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter

WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter Sponsored by: Brocade Communications Systems Inc. Lucinda Borovick March 2011 Global Headquarters: 5 Speen

More information

Virtual PortChannels: Building Networks without Spanning Tree Protocol

Virtual PortChannels: Building Networks without Spanning Tree Protocol . White Paper Virtual PortChannels: Building Networks without Spanning Tree Protocol What You Will Learn This document provides an in-depth look at Cisco's virtual PortChannel (vpc) technology, as developed

More information

FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology. August 2011

FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology. August 2011 FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology August 2011 Page2 Executive Summary HP commissioned Network Test to assess the performance of Intelligent Resilient

More information

Brocade VCS Fabrics: The Foundation for Software-Defined Networks

Brocade VCS Fabrics: The Foundation for Software-Defined Networks WHITE PAPER DATA CENTER Brocade VCS Fabrics: The Foundation for Software-Defined Networks Software-Defined Networking (SDN) offers significant new opportunities to centralize management and implement network

More information

The Road to SDN: Software-Based Networking and Security from Brocade

The Road to SDN: Software-Based Networking and Security from Brocade WHITE PAPER www.brocade.com SOFTWARE NETWORKING The Road to SDN: Software-Based Networking and Security from Brocade Software-Defined Networking (SDN) presents a new approach to rapidly introducing network

More information