VXLAN Overlay Networks: Enabling Network Scalability for a Cloud Infrastructure



Table of Contents

- Executive Summary
- Cloud Computing Growth
- Cloud Computing Infrastructure Optimization
- The Economics Driving Overlay Networks
- Cloud Computing's Network Scalability Challenge
  - Limitations to security (network isolation) using 802.1Q VLAN grouping
  - Capacity expansion across server racks or server clusters (use cases)
  - Limitations of Spanning Tree Protocol (STP)
  - Infrastructure over provisioning capital expenditure (CAPEX)
- Addressing Network Scalability with VXLAN
  - VXLAN: a solution to the network scalability challenge
  - What is VXLAN?
  - VXLAN Ethernet Frame encapsulation
  - VM-to-VM communications in a VXLAN environment
- VXLAN: Enabling Network Scalability in the Cloud
- Looking Ahead: VXLAN and Emulex's Advanced Edge architecture
- Conclusion

Executive Summary

The paradigm shift from a virtualization-driven data center toward an infrastructure for cloud computing includes shifting the focus from application consolidation to tenant consolidation. While virtualization has reduced the cost and time required to deploy a new application from weeks to minutes, and the cost from thousands of dollars to a few hundred, reconfiguring the network for a new or migrated virtual workload can still take a week and cost thousands of dollars. Scaling existing networking technology for multi-tenant infrastructures requires solutions that enable virtual machine (VM) communication and migration across Layer 3 boundaries without impacting connectivity, while ensuring isolation for hundreds of thousands of logical network segments and maintaining existing VM and Media Access Control (MAC) addresses. VXLAN, an IETF proposed standard from VMware, Emulex, Cisco and other leading organizations, addresses these challenges.

Cloud Computing Growth

Cloud computing adoption, both public and private, continues unabated, with server revenues forecast to grow to $12.5 billion in 2014 (IDC) and cloud services becoming a $35 billion/year market by 2013 (Cisco). Almost 80 percent of respondents to a recent study (ESG) indicated having three or more data centers worldwide, and 54 percent have started or completed data center consolidation projects. This trend toward data center consolidation is a key driver of new requirements for multi-tenant, cloud-based private infrastructures that are manageable and scalable. Andrew McAfee of MIT's Sloan School describes cloud computing as "a deep and permanent change in how computing power is generated and consumed." Cloud computing, under any service model approach, public, private or hybrid, delivers multiple benefits, including:

- Extracting increased hardware efficiencies from server consolidation via virtualization.
- Enabling IT agility through optimized placement or relocation of workloads.
- Optimizing IT capacity through distribution of workloads between private and public infrastructure.
- Ensuring business continuity and disaster recovery.

Server virtualization has laid the foundation for the growth of cloud computing, with hypervisor offerings such as VMware ESXi.

Cloud Computing Infrastructure Optimization

Ensuring optimized growth of a cloud computing infrastructure, and achieving a return on that investment, rests on optimizing three data center infrastructures:

- Server: The cloud is a natural evolution of server virtualization and advances including powerful multi-core processors, server clusters/pods and standardized low-cost servers; together, they offer a plethora of options to design an optimized core cloud infrastructure.
- Storage: Storage virtualization, de-duplication, on-demand and simplified provisioning, disaster recovery and scalable file storage using the Hadoop Distributed File System (HDFS) provide the infrastructure fabric capabilities needed to optimally design cloud-based storage.
- Network: Forrester's research has indicated that 50 percent of companies that didn't invest in the network suffered degradation of services or noticeable increases in operational costs. Advances in NIC adapter and Ethernet switching technologies are simultaneously driving the optimization of data center networking for cloud infrastructure.

The growth of 10Gb Ethernet (10GbE) networking, accelerating with the launch of Intel's Xeon E5 (Romley) processors, coupled with other adapter card advances including SR-IOV (Single Root I/O Virtualization) and Emulex's vEngine protocol offload technology, available with the XE4310R Ethernet controller, enables greater VM consolidation on a host server without impacting networking I/O throughput. However, when customers deploy private or hybrid clouds, a good architecture will always include spare capacity or spare hosts (over provisioning) for failover. A primary constraint is that a VM can only communicate, or be seamlessly migrated, within a Layer 2 (L2) domain (L2 subnet). Network optimization and network scalability will be critical to achieving the full potential of cloud-based computing.

The Economics Driving Overlay Networks

Overlay Networks are being driven, simply put, by the need to reduce the cost of deploying a new server workload and managing it. Here is an interesting analysis from a presentation at the Open Networking Summit in the spring of 2012 [1]:

Server virtualization has drastically reduced the cost and time to deploy a new server workload. Before virtualization, it would take ten weeks and cost $10,000, plus software costs, to deploy a new server workload. With virtualization, a new server workload can be deployed in a few minutes at a deployment cost of $300, plus software costs. But the network hasn't been virtualized, and the network really hasn't changed during the last ten years. From a network management perspective, it will take five days and cost $1,800 to reconfigure the network devices, including routers, IP and MAC addresses, firewalls, intrusion detection systems, etc.

Overlay Networking is the first step toward Network Virtualization and promises to significantly reduce the cost of network management for virtualized workloads and hybrid cloud deployments.

Cloud Computing's Network Scalability Challenge

This paradigm shift from a virtualization-driven data center to an infrastructure architecture for supporting cloud computing will also shift the focus from application consolidation to tenant consolidation. A tenant can be a department or a Line of Business (LoB) in a private cloud, or a distinct corporate entity in a public or hybrid cloud. For a cloud infrastructure, the importance of network scalability extends beyond the immediately obvious bandwidth-driven economics to that of increasing VM density on the physical host server and extending the reach of tenants. Let us examine the challenges to achieving network scale in a cloud-based data center. Reviewing the OSI 7-layer architectural model in Figure 1 below, we will focus on the critical communication Layers 2 and 3 as they pertain to VM-to-VM communications and VM migration.

[1] http://opennetsummit.org

Figure 1

Specifically, we will examine how to ensure intra-tenant VM-to-VM communications isolation (security) while enabling a tenant's resources to seamlessly scale at Layer 2 (L2) and, equally important, address the challenges of elastically adding additional resources at Layer 3 (L3).

Limitations to security (network isolation) using 802.1Q VLAN grouping

VMs belonging to a group (department, LoB, corporation, etc.) each require Virtual Local Area Network (VLAN) tagging based L2 segment isolation for their network traffic (since VMs communicate at L2). However, the IEEE 802.1Q standard, with its 12-bit namespace for a VLAN Identifier (VID), allows only 4094 VLANs. (Note: The maximum number of VLANs that can be supported is 4094 rather than 4096, as the VID values 0x000 and 0xFFF are reserved.) A Top-of-Rack switch may connect to dozens of physical servers, each hosting a dozen or more VMs that can each belong to at least one VLAN. A data center can easily contain enough switches that the VLAN ID limit of 4094 is quickly exceeded. Furthermore, VMs running a single application may reside in geographically disparate data centers. These VMs may have to communicate via an L2 data link, so VLAN identifiers must remain unique across geographies.

Capacity expansion across server racks or server clusters (use cases)

A single rack with multiple host servers, or multiple server racks residing in a common cluster or pod (also termed a Virtual Data Center in VMware terminology), effectively provides a small virtualized environment for a single tenant. Each server rack may constitute an L2 subnet, while multiple host server racks in a cluster will likely be assigned their own L3 network. Expanding (elastic) demand for additional computing resources by the tenant may require more servers or VMs on a different rack, within a different L2 subnet, or on a different host cluster, in a different L3 network. These additional VMs need to communicate with the application's VMs on the primary rack or cluster.

A second use case is the need to migrate one or more VMs to another server cluster, that is, inter-host workload mobility to optimize usage of the underlying server hardware resources. This may be driven by scenarios such as:

- Migrating workloads off underutilized hosts, shutting those hosts down, and consolidating onto other, more fully utilized hosts to reduce energy costs.
- Decommissioning one or more older hosts and bringing up workloads on newly added infrastructure.

These use cases require stretching the tenant's L2 subnet to connect the servers or VMs, since VM-to-VM communications and/or VM migration between hosts currently requires the hosts to be in the same L2 subnet.

Limitations of Spanning Tree Protocol (STP)

STP effectively limits the number of VMs in a VLAN or IP network segment to 200-300, resulting in:

- Wasted links, where links in the networking infrastructure are effectively turned off to avoid connectivity loops.
- Failures or changes in link status that result in the loss of several seconds to re-establish connectivity, an unacceptable delay in many current application scenarios.
- Limited ability to provide multipathing, and the resulting loss of networking resiliency, due to STP's requirement for a single active path between network nodes.

Infrastructure over provisioning capital expenditure (CAPEX)

A common best practice is to over provision a data center by as much as 10 percent to provide spare capacity for server failover. This over provisioning practice means data center managers allocate roughly one spare physical host server for every 11, based on an approximate 23:1 VM consolidation ratio and an L2 domain limited by STP to about 254 devices (network nodes). Over provisioning spans the entire infrastructure and covers servers, racks, power and cooling, networking, administration and management, and some storage. VXLAN, on the other hand, breaks the STP boundary, allowing data center administrators to reduce their over provisioning to a much lower number. Since servers are fairly reliable, if the annual failure rate for servers is two percent, then the cost of the over provisioned infrastructure can be reduced by a factor of five. Optimizing the failover infrastructure is another opportunity for VXLAN to significantly reduce IT costs.

In summary, the current practices of host-centric segmentation based on VLANs/L2 subnets, with server clusters separated by physical L3 IP networks, prevent today's multi-tenant cloud infrastructure from achieving meaningful network scaling for longer-term growth. Figure 2 describes these network scaling challenges associated with the deployment of a cloud-based server infrastructure.

Figure 2
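The over provisioning arithmetic above can be made concrete with a rough back-of-the-envelope sketch. The figures used below (a 254-node STP domain, a 23:1 consolidation ratio, a two percent annual failure rate) are the approximate numbers quoted in this paper, not measured values.

```python
# Back-of-the-envelope sketch of the over provisioning arithmetic described above.
# All figures are the approximate numbers quoted in this paper, not measurements.
stp_domain_limit = 254       # network nodes per L2 domain under STP (approximate)
consolidation_ratio = 23     # VMs per physical host (approximate)

hosts_per_l2_domain = stp_domain_limit // consolidation_ratio    # ~11 hosts
spare_fraction_before = 1 / hosts_per_l2_domain                  # one spare per 11 hosts, ~9%

# With VXLAN the failover domain can span far more hosts, so spare capacity can be
# sized closer to the observed annual server failure rate instead.
annual_failure_rate = 0.02
spare_fraction_after = annual_failure_rate                        # ~2%

print(f"spare capacity before: {spare_fraction_before:.0%}")
print(f"spare capacity after:  {spare_fraction_after:.0%}")
print(f"reduction factor:      {spare_fraction_before / spare_fraction_after:.1f}x")
```

Run as written, this yields roughly 9 percent before and 2 percent after, a reduction of about 4.5x, in line with the "factor of five" cited above.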

Addressing Network Scalability with VXLAN

VXLAN: a solution to the network scalability challenge

VMware and Emulex, along with other industry leaders, have outlined a vision for a new virtualized Overlay Network architecture that enables the efficient and fluid movement of virtual resources across cloud infrastructures, leading the way for large-scale and cloud-scale VM deployments. VXLAN (Virtual eXtensible LAN) is a proposed new standard from VMware, Emulex and Cisco, among a group of other leading companies. The VXLAN standard enables overlay networks and focuses on eliminating the constraints described earlier, enabling virtualized workloads to seamlessly communicate or move across server clusters and data centers.

What is VXLAN?

At its core, VXLAN is simply a MAC-in-UDP encapsulation scheme (in other words, encapsulation of an Ethernet L2 frame in IP) enabling the creation of virtualized L2 subnets that can span physical L3 IP networks. Looking back at the use case described earlier, VXLAN connects two or more L3 networks and makes them appear to share the same L2 subnet. This allows virtual machines to operate in separate networks while behaving as if they were attached to the same L2 subnet. VXLAN is an L2 overlay scheme over an L3 network.

VXLAN Ethernet Frame encapsulation

Each L2 overlay network mentioned above is also, interchangeably, called a VXLAN Segment. A unique 24-bit segment ID called a VXLAN Network Identifier (VNI) is added to the L2 Ethernet frame, which retains its optional 12-bit IEEE 802.1Q VLAN identifier. This new 24-bit VNI enables more than 16 million VXLAN segments to operate within the same administrative domain, a scalability improvement of more than three orders of magnitude over the 4094 VLAN segment limit discussed above. The L2 frame with VXLAN encapsulation is then encapsulated with an outer UDP header, an outer IP header and, finally, an outer Ethernet frame. A simplified representation of the VXLAN frame format and encapsulation is shown in Figure 3 below.

Figure 3

As a result of this encapsulation, think of VXLAN as a tunneling scheme. As currently designed, VMware ESXi hosts will act as the VXLAN Tunnel End Points (VTEPs). The VTEPs encapsulate the virtual machine traffic in a VXLAN header, and also strip it off and present the destination virtual machine with the original L2 packet.
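To make the encapsulation concrete, the following minimal Python sketch packs the 8-byte VXLAN header described above (flag bits, reserved bits, 24-bit VNI) and prepends it to an inner L2 frame, following the frame layout of the VXLAN draft. The UDP destination port 4789 noted in the comment is the later IANA-assigned value and is given only for orientation; the frame contents here are placeholders, not a complete encapsulation stack.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08   # "I" bit: the VNI field carries a valid identifier

def build_vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header: 8 flag bits, 24 reserved bits,
    a 24-bit VNI, and 8 reserved bits (per the VXLAN draft frame format)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    word1 = VXLAN_FLAG_VNI_VALID << 24   # flags in the top byte, reserved bits zero
    word2 = vni << 8                     # VNI in the upper 24 bits, low byte reserved
    return struct.pack("!II", word1, word2)

# On the wire, the encapsulated packet is:
#   outer Ethernet | outer IP | outer UDP (dst port 4789, IANA-assigned) | VXLAN header | inner L2 frame
inner_frame = b"..."                     # placeholder for the VM's original Ethernet frame
vxlan_payload = build_vxlan_header(vni=5001) + inner_frame

print(len(build_vxlan_header(5001)))     # 8 bytes of VXLAN header overhead
print(2 ** 24)                           # 16,777,216 possible VNIs vs the 4094 VLAN limit
```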

VM-to-VM communications in a VXLAN environment

The value of VXLAN technology is in enabling inter-VM communications and VM migration for hosts in different L2 subnets, as discussed earlier. A simplified explanation of the process is described below and demonstrated in Figures 4 and 5.

The new entity mentioned above, the VXLAN Tunnel End Point (VTEP), residing in the ESXi server (or in a switch or physical 10GbE NIC), is responsible for generating the VXLAN header needed to specify the 24-bit VNI. The VTEP also performs the UDP and outer Ethernet encapsulations required to communicate across the L3 network. (Note: The VM originating the communication is unaware of this VNI tag.)

The source VM that wishes to communicate with another VM initially sends a broadcast ARP message. The VTEP associated with the source VM encapsulates the packet and sends it to the IP multicast group associated with the VXLAN segment (VNI) to which the communicating VMs belong. All of the other VTEPs learn the inner MAC address of the source (sending) VM and the outer IP address of its associated VTEP from this packet. The destination VM responds to the ARP via a standard ARP response carried in an IP unicast message back to the sender, which allows the originating VTEP to learn the destination mapping of the responding VM. Once the destination VM's response is received, the communicating VTEPs and VMs have the necessary IP and MAC addresses for ongoing inter-VM communications across L3 networks.

Note: The initial IETF draft does not explicitly specify the details of the management layer that establishes the relationship between a VXLAN VNI and the IP multicast group it will use.

Figure 4
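The learning behavior described above can be sketched as a simple mapping from (VNI, inner MAC) to the IP address of the remote VTEP that announced it. The class and field names below are purely illustrative; the draft describes the learning behavior, not a particular data structure or API.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class VtepForwardingTable:
    """Illustrative (VNI, inner MAC) -> remote VTEP IP table built up by learning."""
    entries: Dict[Tuple[int, str], str] = field(default_factory=dict)

    def learn(self, vni: int, inner_src_mac: str, outer_src_ip: str) -> None:
        # Invoked when decapsulating a frame received on the VNI's multicast group
        # (or via unicast): remember which remote VTEP fronts this VM MAC.
        self.entries[(vni, inner_src_mac)] = outer_src_ip

    def lookup(self, vni: int, inner_dst_mac: str) -> Optional[str]:
        # Known destination: encapsulate and unicast to that VTEP's IP.
        # Unknown destination: flood to the VNI's multicast group (not shown here).
        return self.entries.get((vni, inner_dst_mac))

table = VtepForwardingTable()
# Learned from the encapsulated ARP broadcast sent by the source VM's VTEP:
table.learn(vni=5001, inner_src_mac="00:50:56:aa:bb:01", outer_src_ip="10.1.1.10")

assert table.lookup(5001, "00:50:56:aa:bb:01") == "10.1.1.10"   # unicast path known
assert table.lookup(5001, "00:50:56:aa:bb:02") is None          # unknown, would be flooded
assert table.lookup(6002, "00:50:56:aa:bb:01") is None          # same MAC, different VNI/tenant
```

Keying on the (VNI, MAC) pair, rather than the MAC alone, is what allows duplicate MAC addresses to coexist in different tenant networks, as noted in the summary below.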

Figure 5

VXLAN: Enabling Network Scalability in the Cloud

We have seen how VXLAN technology addresses some of the key tactical challenges associated with networking within a cloud infrastructure. Figure 6, a simplified representation, demonstrates the benefits of adopting VXLAN.

Figure 6

In summary, with VXLAN:

- The 4094 VLAN limit for ensuring computing privacy is addressed with the VXLAN 24-bit VNI construct, which enables 16 million isolated tenant networks. VMs need to be on the same VNI to communicate with each other. This delivers the isolation demanded in a multi-tenant architecture by keeping associated VMs within the same VNI.
- A multi-tenant cloud infrastructure is now capable of delivering elastic capacity by enabling additional application VMs to be rapidly provisioned in a different L3 network while communicating as if they were on a common L2 subnet.
- Overlay Networking overcomes the limits of STP and creates very large network domains where VMs can be moved anywhere. This also enables IT teams to reduce over provisioning to a much lower percentage, which can yield significant savings. For example, deploying one spare server for every 50 servers reduces over provisioning to two percent (from the 10 percent estimated earlier). As a result, data centers can save as much as eight percent of their entire IT infrastructure budget with VXLAN Overlay Networking.
- Overlay Networking can make hybrid clouds simpler to deploy because it leverages the ubiquity of IP for data flows over the WAN.
- VMs are uniquely identified by the combination of their MAC address and VNI. It is therefore acceptable for VMs to have duplicate MAC addresses, as long as they are in different tenant networks. This simplifies administration of multi-tenant customer networks for the cloud service provider.
- Finally, VXLAN is an evolutionary solution, already supported by switches and driven by software changes rather than forklift hardware upgrades, thus easing and hastening adoption of the technology.

Looking Ahead: VXLAN and Emulex's Advanced Edge architecture

Overlay Networking implemented in software, as currently defined, is incompatible with current hardware-based protocol offload technologies, so network performance goes down, more CPU cycles are consumed, and the VM-to-host consolidation ratio is reduced. This fundamentally impacts the core economics of virtualization and can easily consume 10 to 20 percent of the CPU cycles of a host. VMware and other major hypervisor vendors have built upon their APIs and updated the networking stack to provide compatibility with offload technologies such as Emulex vEngine.

While the current VXLAN draft specification leaves out many of the details for managing the control plane, it outlines the critical piece for interoperability: the VXLAN frame format. As for the control plane, there will be several tools available to manage the networking piece. The fundamental philosophy is that when a VM moves, a VM management tool orchestrates its movement. Since this tool knows the complete lifecycle of a VM, it can provide all of the control plane information required by the network. This builds on the trend of the control plane migrating away from the hypervisor's embedded vSwitch to the physical 10GbE NIC, with the NIC becoming more intelligent and a critical part of the networking infrastructure. Therefore, in order for VXLAN to be leveraged without a notable performance impact, an intelligent NIC needs to be deployed that supports:

1. VXLAN offload without disrupting current protocol offload capabilities.
2. API integration to apply the correct control plane attributes to each flow for Overlay Networking.
3. APIs to expose the presence and connectivity of Overlay Networks to network management tools.
Conclusion

By extending the routing infrastructure inside the server, VMware, Emulex and Cisco are enabling the creation of a programmatic, dynamic Overlay Network standard (VXLAN) that can easily scale from small to large to cloud scale, enabling multi-tenant architectures. Network manageability can be moved to the edge, and the network management layer becomes virtualization-aware, able to adapt and manage the network as VMs start up, shut down, and move around.

www.emulex.com | World Headquarters 3333 Susan Street, Costa Mesa, California 92626 +1 714 662 5600 | Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547 | Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0) 89 97007 177 | Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5325 3261 | Wokingham, United Kingdom +44 (0) 118 977 2929 | 13-0136 7/12