Netvisor Software Defined Fabric Architecture




Netvisor Overview

The Pluribus Networks network operating system, Netvisor, is designed to power a variety of network devices, ranging from merchant silicon ODM switches and server-switches, to microservers with integrated switching such as the Supermicro Microblade, up to high-performance network computing appliances such as the Pluribus Freedom F64, a device based on commercial off-the-shelf server and network components. Netvisor (the name is derived from Network Hypervisor) was designed and built with the goal of bringing both the virtualization revolution and the disruptive economics of the server industry to the networking world via an open, distributed, programmable network hypervisor. Netvisor is the first data center operating system that solves the complex problem of virtualizing infrastructure at the physical network fabric layer, the centerpiece of the data center infrastructure. By treating merchant silicon switching as a true extension of the server, Netvisor is the foundation of a highly converged and virtualized architecture fusing compute, storage and network under one operating system for the data center (Figure 1).

Figure 1: Netvisor Evolution of Hypervisor Technology

Software Defined Fabric (SDF) Overview and Benefits

One of the key features of Netvisor is the ability of Netvisor-powered nodes to join what we call a Software Defined Fabric. With a distributed architecture, the Pluribus SDF uses a variety of well-understood and proven server-type clustering techniques to allow the Pluribus standards-based Ethernet fabric to function and be managed as a single logical switch. This approach dramatically simplifies the management, monitoring, virtualization and programming of the network fabric.

Figure 2: Software Defined Fabric: making networks simple

Pluribus SDF is not a replacement for the open, standards-based network protocols used to build a network fabric; rather, it is a complementary approach that augments and extends network capabilities well beyond simple L2/L3/VXLAN connectivity. The table below summarizes the advantages of the Pluribus Software Defined Fabric over traditional switch fabric approaches.

Table 1: SDF above and beyond traditional switch fabrics

Software Defined Fabric Architecture Overview

Netvisor SDF is built around a set of peer-to-peer, distributed clustering algorithms and techniques with roots in 15+ years of database clustering technology. Each switch locally computes its view of the network with traditional L2/L3 network protocols; each switch then uses clustering algorithms to replicate its view of the network fabric to all the other peer switches in the cluster. Pluribus clustering is analogous to a MapReduce operation: first each switch produces its own map of the network, then the cluster algorithms perform a reduce operation to produce a summary map of the network that is common to every node. As a result, every switch in the cluster has the same view of the network: MAC addresses, IP addresses, ports, connection flows, network resources and so on. The net effect of the cluster is the creation of a multi-box virtual switch, which greatly simplifies network management, monitoring, virtualization and programming. Any node in the network can act as the central point of management and control for the entire cluster, and the peer-to-peer, distributed architecture removes any practical scalability limitations by eliminating centralized controller bottlenecks. Fabric configurations are synchronized across the cluster with a classic three-phase commit algorithm: either a particular configuration change is agreed upon by all nodes in the cluster, or the change cannot be committed and the entire cluster rolls back to the previous state. Figure 3 shows how the cluster management software of the fabric is built on top of a standard Ethernet fabric; the cluster operates much like a server cluster, independent of the network fabric.

Figure 3: Software Defined Fabric built on classic Ethernet fabric protocols and frame formats
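To make the map-and-reduce analogy concrete, the sketch below (hypothetical data structures and switch names, not actual Netvisor code) shows the two steps: each switch "maps" its locally learned state, and a "reduce" merges the per-switch views into the single fabric-wide view that every node ends up holding.

```python
# Hypothetical sketch of the SDF "map then reduce" view replication.
# Each switch maps its local state; the reduce step merges all local
# views into one fabric-wide view shared identically by every node.

def local_view(switch_name, macs, ips, ports):
    """The 'map' step: each switch builds its own view of the network."""
    return {"switch": switch_name, "macs": set(macs),
            "ips": set(ips), "ports": set(ports)}

def reduce_views(views):
    """The 'reduce' step: merge per-switch views into one fabric-wide view."""
    fabric = {"macs": set(), "ips": set(), "ports": set()}
    for v in views:
        fabric["macs"] |= v["macs"]
        fabric["ips"] |= v["ips"]
        # Qualify ports with the owning switch so they stay unique fabric-wide.
        fabric["ports"] |= {(v["switch"], p) for p in v["ports"]}
    return fabric

# Two switches each see only a partial picture...
v1 = local_view("F64-L", macs={"aa:bb:cc:00:00:01"}, ips={"10.0.0.1"}, ports={1, 2})
v2 = local_view("F64-R", macs={"aa:bb:cc:00:00:02"}, ips={"10.0.0.2"}, ports={1, 3})

# ...but after the reduce, both hold the identical fabric-wide view.
fabric_view = reduce_views([v1, v2])
```

In the real fabric the reduce output is continuously replicated to every peer, which is what lets any node answer fabric-wide queries.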

Figure 4 shows the Pluribus fabric cluster in action: the port-stats-show command (which displays port statistics) is issued on switch F64-L, but the cluster returns a fabric-wide view of the statistics for all the ports in the fabric. Note also how certain ports are repeated multiple times (yellow circles), indicating that multiple virtual machines sit behind a particular physical port.

Figure 4: The Pluribus Netvisor Cluster Fabric in Action

Because every MAC, IP, port state and connection flow is visible to any node in the cluster, customers also gain fabric-wide visibility and monitoring for all network events. Examples of the capabilities enabled by Netvisor SDF include:

VM/Host Visibility: Locate both physical and virtual hosts fabric-wide from any node. Detect VM density per host and individual VM profiles (via the OpenStack plugin). Trace VM migrations fabric-wide from any node.

Application Flow Analytics: Track application flow congestion statistics, port statistics, timestamps, flow latency and even flow paths across the fabric from any node in the cluster.

Fabric Sniffer: Fabric-wide full packet capture in PCAP format. Inspect packets with the on-board sniffer software or export PCAP files via NFS to external appliances.

Time Machine - Network Flow Recorder: Combined with onboard storage options including SSD or FusionIO flash storage, SDF can store snapshots of network flows and state (flow analytics, port congestion, VM state, VM location and so on), capturing a comprehensive record of events and traffic on the network for later analytics or forensics.

With Netvisor's API, customers can program application flows and control network resources fabric-wide: the network can be treated as a single programmable virtual switch. This approach removes the scalability limitations of centralized SDN architectures. Furthermore, Netvisor allows developers to control the fabric cluster through standard OpenFlow APIs. Switch nodes are hot-pluggable and can be added to or removed from the cluster without impacting the rest of the cluster. Additionally, the cluster has no impact on network traffic and does not require data encapsulation or tunneling protocols; the cluster algorithms are effectively a management overlay on top of a traditional L2/L3 network fabric.

Figure 5: Netvisor SDF Clusters are interoperable with standard, open Ethernet/IP Fabrics
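A fabric-wide command such as the port-stats-show example above can be thought of as a scatter-gather over the cluster: whichever node receives the command collects results from every peer and presents them as one table. A minimal sketch, with a hypothetical stats format (not the actual Netvisor implementation):

```python
# Hypothetical scatter-gather sketch of a fabric-wide "port-stats-show":
# the node that receives the command gathers stats from every peer and
# returns one fabric-wide table, regardless of which switch was queried.

FABRIC = {
    "F64-L": {1: {"ibytes": 1200, "obytes": 800}, 2: {"ibytes": 50, "obytes": 60}},
    "F64-R": {1: {"ibytes": 300, "obytes": 400}},
}

def port_stats_show(issued_on, fabric=FABRIC):
    """Return fabric-wide port stats no matter which node the command hits."""
    rows = []
    for switch in sorted(fabric):            # every peer, not just 'issued_on'
        for port, stats in sorted(fabric[switch].items()):
            rows.append((switch, port, stats["ibytes"], stats["obytes"]))
    return rows

# The same fabric-wide answer comes back from any node.
assert port_stats_show("F64-L") == port_stats_show("F64-R")
```

Because every node already holds the replicated fabric view, the "gather" is a local lookup rather than a live query storm, which is what keeps the single-logical-switch model responsive.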

Software Defined Fabric and Virtualization

With SDF, network resources are effectively pooled and presented as part of a single logical device. Another powerful feature of Netvisor SDF is the ability to create fabric containers, isolated physical slices of the network fabric, leveraging the bare metal hypervisor capabilities of the operating system. This approach shifts the burden of virtualizing the fabric to the network switches (where it naturally belongs), freeing the compute layer from having to handle a separate virtual overlay network. Let's see how this works.

Figure 6: Netvisor SDF Fabric Virtualization

In Figure 6, three switches are clustered together to form a single logical switch. Netvisor fabric virtualization builds on top of the cluster by allowing the logical switch to be carved into multiple fabric containers. The figure shows three fabric containers: the red, blue and green networks. The red network, for example, is the result of virtualizing the physical switch fabric by carving out ports 1-16 of each switch and assigning the VLAN range 100-200 and a 16K slice of the L2 table. The blue and green networks receive similar allotments of ports, VLANs and table space. From this point on, every server or storage node connected to a red network port is isolated from the rest of the infrastructure from a management and visibility standpoint, with its own set of dedicated services, and can support any server virtualization stack independently of the other containers.
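The carving described above amounts to an allocation problem: each fabric container owns a disjoint slice of ports, VLAN IDs and L2 table space, and any overlapping request must be rejected. The sketch below illustrates this with a hypothetical data model (not the actual Netvisor configuration interface):

```python
# Hypothetical sketch of carving a logical switch into fabric containers:
# each container owns a disjoint slice of ports, VLANs and L2 table space.

containers = {}

def create_container(name, ports, vlans, l2_entries):
    """Allocate a container slice, rejecting any overlap with existing ones."""
    for other in containers.values():
        if ports & other["ports"] or vlans & other["vlans"]:
            raise ValueError(f"{name}: slice overlaps an existing container")
    containers[name] = {"ports": set(ports), "vlans": set(vlans),
                        "l2_entries": l2_entries}

# The red network from the text: ports 1-16, VLANs 100-200, a 16K L2 slice.
create_container("red", ports=set(range(1, 17)),
                 vlans=set(range(100, 201)), l2_entries=16 * 1024)

# A second container with a disjoint slice succeeds.
create_container("blue", ports=set(range(17, 33)),
                 vlans=set(range(201, 301)), l2_entries=16 * 1024)
```

Attempting to create a container that claims, say, port 16 again would raise an error, which is the isolation guarantee that lets each container run its own virtualization stack independently.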

The elegance of Netvisor virtualization is that, via simple software configuration, it is possible to re-allocate ports/VLANs ("virtual re-wiring"), repurposing compute and storage resources from one network to another with minimal operational effort. This is Netvisor Software Defined Infrastructure.

Netvisor Platforms And Software Defined Fabric

Netvisor SDF can run on any of the platforms supported by Netvisor. Today this means three classes of devices: Network Computing Appliances, Server-Switches and microblade switches. This section explains the differences and the architecture of the Netvisor platforms.

Server-Switch and Network Computing Appliance Overview

In 2010 Pluribus Networks pioneered the concept of fusing network switches and servers, creating a new category of product, the server-switch: a new class of top-of-rack network gear offering capabilities above and beyond traditional switches. Since then, the server-switch paradigm has been validated by Facebook's announcement of its home-grown switch, dubbed Wedge, which combines switching and compute into a modular and open top-of-rack platform. The rationale behind this new category of product is explained by Facebook in their Wedge announcement: "[...] One of the big changes we made in designing Wedge was to give the switch the same power and flexibility as a server. Traditional network switches often use fixed hardware configurations and non-standard control interfaces, limiting the capabilities of the device and complicating deployments."

Server-Switch and Network Computing Appliance Differences

While both the Server-Switch and the Network Computing Appliance are based on the same design principles and share the same OS architecture, they differ in the scalability and flexibility of the hardware platform. The Server-Switch is equipped with a much more powerful control plane than any traditional switch, but it sports a fixed hardware configuration and a mechanical form factor similar to a typical TOR switch. In contrast, the Network Computing Appliance has the hardware extensibility (SATA storage, PCIe slots, NPU capabilities), the mechanical form factor and the compute/storage capacity in line with a high-end appliance. The Freedom F64-Series is an example of a Network Computing Appliance, while the E68-M is an example of a Server-Switch.

Server-Switch/Network Computing Appliance vs. Traditional Switching

In general, the Pluribus approach has similarities with traditional switching: both leverage merchant switching silicon. But there are significant differences. Pluribus leverages server-class CPUs, onboard storage and significantly more memory to provide true server-class compute performance. Additionally, the connection between the CPU and the switching components is far faster with the Pluribus approach, 10-40x faster or more, producing a tighter, more responsive coupling of network and compute.

Figure 7: Architectural comparison between Pluribus platforms and a Traditional Switch

Pluribus platforms, besides representing a major upgrade of the compute/storage hardware capability of a traditional switch (summarized in Table 2 below), arguably hide the most impactful architectural innovation under the hood: the way the Netvisor operating system software controls and virtualizes the switch chip. In a Pluribus Server-Switch, all the switch tables and register space are mapped over PCIe into the OS (see the illustration in Figure 7). The learning and switching behavior is defined by the OS, while the switch hardware (TCAM, forwarding tables) is treated like an x86 cache or hardware off-load, an approach which retains wire-speed switching while enabling much greater scale. Thanks to the direct memory mapping over PCIe (exactly like a server NIC), the Netvisor OS controls the switch at multi-gigabit bandwidth, with microseconds of latency and high-performance multi-threading. Moreover, with Pluribus the CPUs are connected to the switch chip with additional integrated NICs (up to 4x10GE on the F64 appliances), facilitating among other things high-performance hosting of NFV services. This architecture enables new capabilities such as high-performance analytics, bare metal network virtualization, high-performance network/flow programmability and NFV services in the fabric.

By contrast, with traditional switches the OS interacts with the switch silicon indirectly via the merchant SDK, several million lines of code. These merchant SDKs are mostly single-threaded and run over a low-speed control plane channel on an underpowered CPU, resulting in an environment optimized to run L2/L3 protocols and configuration operations and little else. Being built from the ground up as a server-style OS, Netvisor is completely different.

                        | Network Computing Appliance      | Server-Switch            | Traditional Switch
CPU                     | Dual Socket Intel Xeon           | Single Socket Intel Xeon | Embedded-class CPU
RAM                     | Up to 512GB                      | 16-32GB                  | 4-8GB
Storage                 | PCIe Flash, SSD, SATA (multi-TB) | 240GB SSD                | N/A
Internal Expandability  | 6 PCIe Slots                     | N/A                      | N/A
CPU-to-Switch Bandwidth | 4x10GE + PCIe                    | 4xGE + PCIe              | Low-speed control channel (speed varies)
Form Factor             | Appliance                        | TOR Switch               | TOR Switch

Table 2: Hardware differences between various classes of TOR devices

The Flexibility Of The SDF Architecture and Deployment Examples

With a solid understanding of the different classes of platforms supported by Netvisor, we can now appreciate the true power and flexibility of SDF, which is platform and merchant silicon agnostic: by federating high-performance appliances and cost-effective server-switches under a single logical device, the Software Defined Fabric architecture enables customers to strike the ideal balance between a services-rich, virtualized infrastructure and white box economics.

Figure 8: SDF federation of different platforms under one single logical switch

As an example, Figure 9 shows F64 Network Computing Appliances as spine switches with E68-M Server-Switches as TORs. In this configuration the F64 appliances offer network services and network virtualization, while the E68-M switches allow customers to scale the pod at white box economics.

Figure 9: Software Defined Fabric - uncompromised services and economics

The illustration in Figure 10 shows how the SDF architecture extends into the rack, with Netvisor powering a network of Supermicro Microblade chassis. Each 6RU Microblade chassis houses up to 112 servers and four switches, with up to 7 chassis per rack. Obviously 784 servers per rack is of interest, but what may be less obvious is that the 4 integrated switches in each chassis yield up to 28 switches per rack, a situation where a fabric allowing all member switches to be managed as a single logical entity is of considerable utility. In this case Netvisor on the embedded switches in the Microblade servers simplifies management of those switches, while the F64 appliances host the services and provide virtualization for the entire pod.

Figure 10: Extending Netvisor SDF into the rack

Conclusion

Pluribus Networks Netvisor is a network operating system with its roots in open source server operating systems, server virtualization and cluster technologies developed over the past 20+ years. Netvisor is designed to efficiently solve the challenge of virtualizing data center infrastructure while bringing server economics to the networking world. A key feature of Netvisor, the Software Defined Fabric, is central to realizing this promise: it allows the network to federate high-performance appliances, cost-effective server-switches and microblade switches as a single logical device, enabling customers to strike the ideal balance between a services-rich, virtualized infrastructure and white box economics.

Freedom begins where traditional networking ends.

Pluribus Networks, Inc.
2455 Faber Place, Suite 100
Palo Alto, CA 94303
www.pluribusnetworks.com
1-855-GET-VNET / +1 650-289-4717

To purchase Pluribus Networks solutions, please contact us at sales@pluribusnetworks.com.
facebook.com/pluribusnetworks | @pluribusnet | linkedin.com/company/pluribus-networks

Copyright 2014 Pluribus Networks, Inc. All rights reserved.