WHITE PAPER. Network Virtualization: A Data Plane Perspective




David Melman, Uri Safrai
Switching Architecture, Marvell
May 2015

Abstract

Virtualization is the leading technology for providing agile and scalable services in cloud computing networks. Virtualization is occurring in different areas of the network: 1) server virtualization, 2) network virtualization, and 3) network function virtualization. Marvell Prestera switching devices are based on the flexible ebridge architecture, which implements data plane interface virtualization. This enables migration from the legacy physical networking paradigm to the virtualized network in the emerging modern data center compute cloud, overlay networks, and network function virtualization domains. This paper outlines the state of the art of virtualization technologies and explains how the Marvell ebridge architecture allows Marvell switches to serve as a universal gateway, seamlessly interconnecting different types of virtualization domains and data encapsulations.

Server Virtualization

Server virtualization allows scaling of server resources by partitioning a physical server into multiple independent virtual machines (VMs). IEEE 802.1 has standardized two approaches for connecting a virtualized server to the network:

1) 802.1Qbg Edge Virtual Bridging
2) 802.1BR Bridge Port Extension

Although the packet encapsulations differ, in both standards the Controlling Bridge is the central entity that performs all forwarding, filtering, and policy decisions. The Controlling Bridge views VMs as being attached via a virtual interface.

IEEE 802.1Qbg Edge Virtual Bridging

IEEE 802.1Qbg defines VEPA (Virtual Ethernet Port Aggregator) and a Multi-channel S-tagged interface between a Controlling Bridge and the server NIC. Basic VEPA requires all VM-sourced traffic to be processed by the Controlling Bridge. The Controlling Bridge may need to hairpin traffic back to the source port if it is destined to another VM on the same server. In Multi-channel VEPA, an S-Tag is added to the packet indicating the source or target VM. Upon receiving the packet, the Controlling Bridge examines the S-Tag and assigns a source logical interface to the packet. The S-Tag is popped, and the packet is subject to the ingress policy, forwarding, and filtering rules associated with the source VM. When forwarding traffic to a VM, the Controlling Bridge applies the egress policy for that VM and adds an S-Tag to indicate the target VM on the server. In the case of Broadcast, Unknown Unicast, and Multicast (BUM) traffic, the switch replicates the packet to the set of VMs in the flood domain, where each packet instance includes the S-Tag associated with the remote VM. The Marvell ebridge architecture supports 802.1Qbg VEPA and Multi-channel VEPA by flexibly mapping each VM to a Controlling Bridge local virtual interface.
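As an illustration, the Multi-channel VEPA classification step can be modeled in software. The mapping table and interface names below are hypothetical, chosen only to show the flow; a real Controlling Bridge programs such tables through its control plane:

```python
import struct

STAG_TPID = 0x88A8  # IEEE 802.1ad service tag EtherType

# Hypothetical mapping of S-VID (service VLAN ID) to a logical interface
# representing the source VM.
svid_to_iface = {100: "vm-a", 101: "vm-b"}

def classify_and_pop_stag(frame: bytes):
    """Return (source_iface, frame_without_stag) for an S-tagged frame."""
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    if tpid != STAG_TPID:
        return None, frame           # untagged: no channel information
    svid = tci & 0x0FFF              # low 12 bits of the TCI carry the S-VID
    iface = svid_to_iface.get(svid)
    # Pop the 4-byte S-Tag: keep DA+SA, skip TPID+TCI, keep the rest
    return iface, frame[:12] + frame[16:]
```

The popped frame is then subject to the ingress policy, forwarding, and filtering rules that the Controlling Bridge associates with the returned logical interface.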

802.1BR Bridge Port Extension

IEEE 802.1BR defines a logical entity called an Extended Bridge, comprised of a Controlling Bridge attached to a set of devices called Port Extenders. Port Extenders can be cascaded to interconnect the Controlling Bridge with remote VMs. Logically, adding a Port Extender is similar to adding a line card to a modular switch. All VM-sourced traffic is sent via the Port Extenders to the Controlling Bridge. The Port Extender implemented in a virtualized server's hypervisor pushes an E-Tag identifying the source VM and forwards the packet towards the Controlling Bridge. On receiving the packet, the Controlling Bridge examines the E-Tag and assigns a source logical port. The E-Tag is then popped, and the packet is subject to the ingress policy, forwarding, and filtering rules associated with the source VM. When the Controlling Bridge transmits traffic to a VM, the switch applies the egress policy for that VM and adds a unicast E-Tag indicating the target VM on the server. The intermediate Port Extenders forward the packet towards the server, where the hypervisor Port Extender strips the E-Tag prior to sending the packet to the VM. In the case of BUM traffic, the Controlling Bridge pushes a single multicast E-Tag indicating the multi-target port group. Each downstream Port Extender replicates the packet to its port group members, and the hypervisor Port Extender strips the E-Tag prior to sending the packet to its local VMs. The Marvell ebridge architecture supports the 802.1BR Controlling Bridge and Port Extender standard by flexibly mapping each VM, whether remotely or locally attached, to a Controlling Bridge local virtual interface.
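The E-Tag push and pop at the Controlling Bridge can be sketched the same way. This is a deliberately simplified model of the 8-byte 802.1BR tag: only the 12-bit E-CID is populated, and the PCP/DEI, GRP, and extension fields are left at zero:

```python
import struct

ETAG_TPID = 0x893F  # IEEE 802.1BR E-Tag EtherType

def push_etag(frame: bytes, ecid: int) -> bytes:
    """Insert an 8-byte E-Tag after the MAC addresses (simplified:
    only the E-CID base field is filled in)."""
    tci = struct.pack("!HHBB", 0, ecid & 0x0FFF, 0, 0)
    return frame[:12] + struct.pack("!H", ETAG_TPID) + tci + frame[12:]

def pop_etag(frame: bytes):
    """Return (e_cid, frame_without_etag); (None, frame) if not E-tagged."""
    (tpid,) = struct.unpack_from("!H", frame, 12)
    if tpid != ETAG_TPID:
        return None, frame
    _, word2 = struct.unpack_from("!HH", frame, 14)
    return word2 & 0x0FFF, frame[:12] + frame[20:]
```

In the hypervisor-to-bridge direction, push_etag models the Port Extender tagging VM-sourced traffic; in the reverse direction, pop_etag models the hypervisor stripping the tag before delivery to the VM.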

Network Virtualization

A network overlay creates a virtual network by decoupling the physical topology from the logical topology, allowing compute and network services to reside anywhere in the network topology and to be dynamically relocated as needed. In particular, network overlays are being deployed today to provide scalable and agile solutions for multi-tenancy services in large data center networks. Key requirements for multi-tenancy services include:

1) Traffic isolation between tenants: ensures that no tenant's traffic is ever leaked to another tenant.
2) Address space isolation between tenants: enables tenants to have overlapping address spaces.
3) Address isolation between the tenant address space and the overlay network address space: enables VMs to be located anywhere in the overlay network and to migrate to any new location, e.g. across the IP subnets in the network.

The IETF NVO3 working group is chartered to define the architecture, control plane, and data plane for L2 and L3 services over a virtual overlay network. The NVO3 architecture model is illustrated below. The overlay network infrastructure is IP-based. The NVE (Network Virtualization Edge), which resides between the tenant and the overlay network, implements the overlay functionality.

For L2 service, the tenant is provided with a service analogous to being connected to an L2 bridged network. If a tenant packet's MAC DA is a known unicast address, the NVE tunnels the Ethernet frame across the overlay network to the remote NVE where the tenant destination host resides. Tenant BUM traffic is transported to all the remote NVEs attached to the given tenant, using either head-end replication or tandem replication. For L3 service, the tenant is provided with an IP-only service in which the NVE routes the tenant traffic according to the tenant's virtual routing and forwarding (VRF) instance and forwards the IP datagram over an overlay tunnel to the remote NVE(s). In both L2 and L3 overlay services, the underlay network natively routes the IP traffic based on the outer IP tunnel header; it is unaware of the type of overlay service and payload type. The Marvell ebridge architecture supports the intelligent and flexible processing required by the NVE to implement L2 and L3 overlay services. This includes support for the leading NVO3 encapsulation modes, VXLAN, VXLAN-GPE, and Geneve, with flexibility for proprietary and future standards as well.
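The overlay encapsulation itself is compact. As an example, the 8-byte VXLAN header (RFC 7348) can be modeled as follows; the outer Ethernet/IP/UDP headers that the underlay routes on are omitted for brevity:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header: flags byte with the 'I' bit set
    (VNI valid), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    return (struct.pack("!B", 0x08) + b"\x00\x00\x00"
            + struct.pack("!I", (vni & 0xFFFFFF) << 8) + inner_frame)

def vxlan_decap(packet: bytes):
    """Return (vni, inner_frame) from a VXLAN-encapsulated payload."""
    flags, vni_field = struct.unpack_from("!B3xI", packet)
    assert flags & 0x08, "VNI-valid flag not set"
    return vni_field >> 8, packet[8:]
```

The 24-bit VNI is what gives the overlay its tenant scale: roughly 16 million virtual networks versus the 4K VLAN-ID space.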

Network Function Virtualization & Service Function Chaining

To improve scaling and reduce OPEX/CAPEX, modern data centers and carrier networks are replacing dedicated network appliances (e.g. firewalls, deep packet inspectors) with virtualized network service functions running as applications on servers or in VMs. This technology is called Network Functions Virtualization (NFV). A service function chain (aka VNF Forwarding Graph) is an ordered set of network service functions required for a given packet flow (e.g. Firewall, Deep Packet Inspector, Load Balancer). Packets are steered along the service path using an overlay encapsulation (e.g. VXLAN-GPE) between service functions. This allows the service functions to be located anywhere in the network (e.g. in server VMs), independent of the network topology. The IETF Service Function Chaining working group is standardizing a service layer protocol called the Network Service Header (NSH). NSH is transport independent; that is, it can reside over any type of overlay encapsulation (e.g. VXLAN-GPE, Geneve, GRE) or directly over an Ethernet L2 header. NSH contains the service path ID and, optionally, packet metadata. The service path ID represents the ordered set of service functions that the packet must visit. The metadata may be any type of data useful to the service functions along the path. The Marvell ebridge architecture supports the ability to classify and impose the Network Service Header (NSH) and encapsulate with any overlay tunnel header. At overlay tunnel termination points, the NSH header can be examined and a new overlay tunnel applied to transport the packet to the next service function. At the end of the service function chain, the NSH and overlay tunnel can be removed, and L2/L3 forwarding is based on the original packet payload.
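The service path portion of the NSH can be sketched as follows. This models only the 24-bit Service Path Identifier and 8-bit Service Index word, not the full base header or metadata:

```python
import struct

def make_nsh_path_header(spi: int, si: int) -> bytes:
    """Pack the NSH service path header word: 24-bit Service Path
    Identifier plus 8-bit Service Index."""
    return struct.pack("!I", ((spi & 0xFFFFFF) << 8) | (si & 0xFF))

def next_hop(path_header: bytes):
    """Return (spi, decremented_si): each service function decrements
    the Service Index before the packet is re-encapsulated toward the
    next function on the path."""
    (word,) = struct.unpack("!I", path_header)
    return word >> 8, (word & 0xFF) - 1
```

The SPI identifies which ordered chain the packet belongs to, while the decrementing SI tells the forwarder how far along that chain the packet has progressed.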

Marvell ebridge Architecture

The legacy switching paradigm is based on the concept of physical interfaces, where incoming traffic is associated with the ingress physical interface, and outgoing traffic is forwarded to an egress physical interface. This paradigm worked well in basic layer-2 networks. However, modern data centers, carrier networks, and converged wired/wireless networks pose additional challenges, such as cross-tenant isolation, tunnel overlay encapsulations, learning and forwarding over virtual interfaces, and per-virtual-interface attributes. Today, system architects must design a complex software virtualization layer to hide the physical layer limitations from the end-user software. This software virtualization layer enables the application software to configure traffic flows, services, VMs, and other logical entities. The ebridge architecture provides a common scalable infrastructure that has been proven to meet the new challenges of virtualization. This architecture uses opaque handles to identify virtual interfaces (eports) and virtual switching domains (evlans). This implementation brings the hardware configuration driver view close to the application software view, significantly reducing the size and complexity of the required software virtualization layer, and with it, the development and debug effort and the resulting time-to-market of the system. While supporting virtual interfaces, the ebridge architecture fully supports legacy network paradigms based on physical interfaces. In the legacy network paradigm, an eport maps 1:1 to a physical port, and an evlan maps 1:1 to a VLAN bridge domain. Prestera devices integrating the ebridge architecture are fully backward-compatible with the processing pipeline and feature set of legacy packet processors. The ebridge architecture has been successfully applied to many of the emerging networking standards across different market segments, as illustrated in the figure below.

ebridge eports

At the packet processor data plane level, the ebridge architecture uses an entity called an eport to represent a virtual interface. For example, an eport may represent a physical port, a Port-VLAN interface, an 802.1BR E-Tag interface, a VXLAN-GPE tunnel, a Geneve tunnel, an MPLS pseudowire, a MAC-in-MAC tunnel, a TRILL tunnel, etc. Ingress packets are classified and assigned a source eport. The forwarding engines assign a target eport indicating the virtual interface through which the packet is egressed. The eport entity, whether source or target, is completely decoupled from the physical interface on which the packet is received or transmitted.

ebridge evlans

At the packet processor data plane level, the ebridge architecture uses an entity called an evlan to represent a virtual switching domain. In concept, this is similar to an IEEE 802.1Q VLAN, but it can be extended beyond the 4K VLAN-ID range to support a large number of switching domains (e.g. per tenant), independent of the packet's VLAN-ID.
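A software model of eport and evlan assignment might look like the following. The table keys, handle values, and port names are hypothetical, chosen only to show how opaque handles decouple forwarding from physical interfaces:

```python
# Hypothetical classification table: the data plane keys on the ingress
# physical port plus encapsulation-specific fields and yields an opaque
# eport handle. In legacy mode the key carries no encapsulation (None).
eport_table = {
    ("port1", ("etag", 42)):    1001,  # 802.1BR E-Tag interface
    ("port2", ("vxlan", 5000)): 1002,  # VXLAN tunnel, VNI 5000
    ("port3", None):            1003,  # plain physical port (legacy 1:1 map)
}

# evlan handles are not limited to the 4K VLAN-ID space: two different
# encapsulations can land in the same tenant switching domain.
eport_to_evlan = {1001: 70000, 1002: 70000, 1003: 4094}

def classify(phys_port, encap_key):
    """Return (source_eport, evlan) for an ingress packet."""
    eport = eport_table[(phys_port, encap_key)]
    return eport, eport_to_evlan[eport]
```

Note that the E-Tag interface and the VXLAN tunnel resolve to the same evlan here, which is exactly what lets the bridging engine forward between them without caring about the underlying encapsulations.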

Marvell ebridge Universal Gateway

Different network domains are virtualized using different technologies, e.g. server virtualization using 802.1BR, data center virtualization using a VXLAN-GPE overlay, data center interconnect (DCI) using VPLS, and branch office WAN connection using GRE tunnels. To allow connectivity between these domains, traffic with dissimilar encapsulations must be bridged or routed from one domain to another. The ebridge architecture allows Marvell Prestera switches to serve as a universal gateway, seamlessly stitching between different virtualization domains and their respective data encapsulations. The Prestera switching devices perform any-to-any stitching in a single pipeline pass, including BUM traffic replication and full passenger- and/or encapsulation-based packet processing, such as policy TCAM rules, policing and metering, bridging, routing, and more, with no performance impact.

The ebridge architecture supports single-pass any-to-any stitching as follows:

1. The incoming packet encapsulation is classified and tunnel-terminated, and the packet is assigned a source eport representing the ingress virtual interface.
2. The payload is subject to the full pipeline processing. This includes policy TCAM rules, bridging based on the evlan bridge domain, routing based on the VRF, metering, counting, etc.
3. Unicast traffic is assigned a target eport representing the egress virtual interface.
4. Multi-target traffic (BUM, routed IP multicast) is replicated to a set of target eports, each representing a different egress encapsulation, independent of the underlying physical interface.
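The four steps above can be modeled as a toy single-pass function. The header length and the per-eport encapsulation callbacks are placeholders standing in for the real tunnel-termination and tunnel-start engines:

```python
def stitch(packet, hdr_len, forward, encaps):
    """Toy single-pass stitching: terminate the ingress encapsulation,
    run the shared forwarding pipeline on the passenger packet, then
    apply each target eport's egress encapsulation."""
    payload = packet[hdr_len:]            # 1. tunnel termination
    target_eports = forward(payload)      # 2-3. pipeline assigns targets
    return [encaps[ep](payload)           # 4. per-target re-encapsulation
            for ep in target_eports]

# Example: an 8-byte ingress tunnel header; a BUM frame is replicated to
# a VXLAN-facing eport and an E-Tag-facing eport, each with its own
# (placeholder) egress header.
encaps = {10: lambda p: b"VXLAN:" + p, 11: lambda p: b"ETAG:" + p}
out = stitch(b"HDRxxxxx" + b"frame", 8, lambda p: [10, 11], encaps)
```

The key property the model illustrates is that replication happens on eports, not physical ports, so each copy can leave the device with a completely different encapsulation in the same pipeline pass.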

The table below lists some common data center stitching use cases supported by the Marvell ebridge architecture:

Use case: Inter-VXLAN Routing
ebridge support: The incoming VXLAN traffic is tunnel-terminated; the tenant IP packet is routed using VRFs, its L2 header is updated, and the resulting Ethernet packet is re-encapsulated in a new VXLAN tunnel.

Use case: 802.1BR Controlling Bridge to VXLAN overlay
ebridge support: E-tagged traffic received from the server is bridged in the evlan domain and egressed as VXLAN-tunneled traffic over the IP core network. Similarly, VXLAN traffic received from the IP core is bridged within the evlan domain and forwarded with E-Tags to the respective server VM(s).

Use case: Data Center Interconnect (DCI): VXLAN to VPLS/EVPN
ebridge support: VPLS/EVPN traffic received from the MPLS core network is tunnel-terminated; the passenger packet is bridged in the evlan domain and egressed as VXLAN-tunneled traffic over the IP core network. Similarly, VXLAN traffic received from the IP core network is tunnel-terminated; the passenger packet is bridged in the evlan domain and egressed as VPLS/EVPN-tunneled traffic over the MPLS core network. Split-horizon filtering prevents traffic received from one core network from being looped back to the same core network.

About the Authors

David Melman, Marvell Switching Architect

David Melman is a 20-year veteran of the networking industry. For the past 15 years, Melman has been a switch architect at Marvell, involved in the definition of the Marvell Prestera family of packet-processing devices. He is an active participant in the Internet Engineering Task Force (IETF), co-author of the IETF draft Generic Protocol Extension for VXLAN, and a contributor to the IETF draft Network Service Header.

Uri Safrai, Marvell Software Solution Architect

Uri Safrai has more than 17 years of networking experience. Prior to his current position, Safrai worked at Galileo Technology until its acquisition by Marvell in 2001. Since then he has held a variety of technology positions at Marvell, including his former role as a switch architect of the Prestera line of packet processors, where he was involved in the definition and microarchitecture of networking features, protocols, and various ASIC engines and mechanisms. He also led the definition of the Prestera ebridge architecture. Since 2010, Safrai has represented Marvell at the Metro Ethernet Forum (MEF), and he recently joined the Open Networking Foundation (ONF) Chipmakers Advisory Board (CAB).

Acronyms

BUM - Broadcast, Unknown Unicast, Multicast
CAPWAP - Control And Provisioning of Wireless Access Points
DCI - Data Center Interconnect
EVPN - Ethernet Virtual Private Network
GENEVE - Generic Network Virtualization Encapsulation
GRE - Generic Routing Encapsulation
MPLS - Multiprotocol Label Switching
NFV - Network Function Virtualization
NSH - Network Service Header
NVE - Network Virtualization Edge
PBB - Provider Backbone Bridging
SFC - Service Function Chaining
TRILL - Transparent Interconnect of Lots of Links
VEPA - Virtual Ethernet Port Aggregator
VLAN - Virtual Local Area Network
VM - Virtual Machine
VNF - Virtual Network Function
VPLS - Virtual Private LAN Service
VPWS - Virtual Private Wire Service
VXLAN-GPE - Virtual Extensible LAN - Generic Protocol Extension

References

An Architecture for Overlay Networks (NVO3): https://datatracker.ietf.org/doc/draft-ietf-nvo3-arch/
Generic Protocol Extension for VXLAN: https://datatracker.ietf.org/doc/draft-quinn-vxlan-gpe/
Geneve: Generic Network Virtualization Encapsulation: https://datatracker.ietf.org/doc/draft-ietf-nvo3-geneve/
Network Service Header: https://datatracker.ietf.org/doc/draft-ietf-sfc-nsh/
802.1BR - Bridge Port Extension: http://www.ieee802.org/1/pages/802.1br.html
802.1Qbg - Edge Virtual Bridging: http://www.ieee802.org/1/pages/802.1bg.html