Achieving a High-Performance Virtual Network Infrastructure with PLUMgrid IO Visor & Mellanox ConnectX-3 Pro




What's wrong with today's clouds?

Compute and storage virtualization has enabled elastic creation and migration of applications within and across data centers. Unfortunately, on-demand network infrastructure provisioning in multi-tenant cloud environments is still very rudimentary, owing to the complex nature of the Physical Network Infrastructure (PNI) and the limited network functionality in hypervisors. Even after the cloud operations and IT teams master this complexity, simple reconfigurations of the network (let alone complex upgrades) remain largely error-prone. The same networking problems percolate upstream into the tenants' virtual networks, where deployment, configuration and management of anything beyond a simple Virtual Private Connect bridge is still a significant challenge. These constraints cost thousands of dollars and months of effort in cloud consolidation and upgrade projects. In other words, the network has become the fundamental bottleneck in cloud deployments and operations.

PLUMgrid OpenStack Networking Suite 3.0: A one-stop shop for solutions to operator and tenant networking problems in your cloud

PLUMgrid OpenStack Networking Suite 3.0 (Figure 1) solves both operator and tenant networking problems. On the operator side, all the core elements that currently exist in the physical infrastructure (IPMI, boot, CMS, etc.) are transitioned into the Virtual Network Infrastructure (VNI), making it straightforward to deploy, manage and extend the underlying cloud infrastructure. On the tenant side, the suite enables users of a cloud to create and deploy Virtual Domains in seconds, without any change to the physical network fabric. These Virtual Domains abstract away the underlying physical network and provide private switching, routing, policies, load balancing, NAT and DHCP services to tenants while ensuring complete tenant isolation and traffic security. Each tenant has full administrative control over the networks contained within its Virtual Domains. This is achieved through the suite's overlay network, which leverages the VXLAN encapsulation protocol to decouple network services from the physical, device-based network. Overlay network standards such as VXLAN make the network as agile and dynamic as the other parts of the cloud infrastructure. This agility in turn enables dynamic network segment provisioning for cloud workloads, which dramatically increases cloud resource utilization.
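For context, VXLAN (RFC 7348) achieves this decoupling by wrapping each tenant Ethernet frame in a UDP packet that carries an 8-byte VXLAN header; the header's 24-bit VXLAN Network Identifier keeps one tenant's segments invisible to another's. A minimal Python sketch of that header, with the field layout taken from the RFC (the example segment IDs are arbitrary):

import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame.

    Layout (RFC 7348): 1 byte flags, 3 reserved bytes,
    3 bytes VNI, 1 reserved byte.
    """
    assert 0 <= vni < 2**24, "VNI is a 24-bit tenant segment identifier"
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

# Example: two tenants get distinct, isolated segments on one fabric.
hdr_a = vxlan_header(5001)
hdr_b = vxlan_header(5002)
print(hdr_a.hex(), hdr_b.hex())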

Figure 1: The PLUMgrid OpenStack Networking Suite 3.0

Mellanox ConnectX-3 Pro: Resolving VNI performance bottlenecks

While an overlay-based VNI offers significant flexibility, scalability and security benefits for cloud infrastructure, the additional VXLAN processing remains a concern in high-performance environments. Because VXLAN adds an encapsulation layer, traditional NIC offloads cannot be applied, resulting in high CPU consumption. Besides consuming expensive CPU processing resources, network throughput is also significantly reduced, and a software-only solution simply does not scale with the ever-increasing number of workloads per server. For VXLAN to be of real value, its inherent CPU overhead and network performance degradation must be eliminated. This can be achieved by offloading the overlay network processing to hardware embedded in the network controller. Mellanox's ConnectX-3 Pro is the only NIC that currently performs the traditional offloads despite the extra encapsulation layer. Furthermore, ConnectX-3 Pro enables Receive Side Scaling (RSS) to steer and distribute traffic based on the inner packet rather than, as with traditional NICs, only the outer packet. The primary benefit is that multiple cores can handle the traffic, allowing the interconnect to run at its intended line-rate bandwidth rather than the reduced bandwidth seen when VXLAN is used without RSS. ConnectX-3 Pro thus addresses the unintended performance degradation of VXLAN in both CPU usage and throughput: with ConnectX-3 Pro, the scalability and security benefits of VXLAN can be realized without any decrease in performance and without an increase in CPU overhead.
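To see why inner-packet steering matters, note that every tunneled flow between a given pair of hypervisors shares the same outer VTEP addresses, so a hash computed only over the outer headers lands all of that traffic in a single receive queue. The toy Python sketch below illustrates the effect; it uses an ordinary digest as a stand-in for the NIC's actual RSS hash, and the queue count and addresses are invented for the example:

import hashlib

NUM_QUEUES = 8  # one RSS queue per core; an assumed configuration

def rss_queue(*fields) -> int:
    """Toy stand-in for an RSS hash: map header fields to a queue index."""
    digest = hashlib.sha256("|".join(map(str, fields)).encode()).digest()
    return digest[0] % NUM_QUEUES

outer = ("10.0.0.1", "10.0.0.2")  # fixed VTEP pair (two hypervisors)
inner_flows = [("192.168.1.10", "192.168.1.20", 5001 + i) for i in range(8)]

for src, dst, port in inner_flows:
    print(f"outer-hash queue: {rss_queue(*outer)}  "
          f"inner-hash queue: {rss_queue(src, dst, port)}")

Every outer-header hash maps to the same queue (one busy core), while the inner-header hashes spread the flows across queues, which is what lets the link reach line rate.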

Benchmarking results of PLUMgrid OpenStack Networking Suite 3.0 with Mellanox ConnectX-3 Pro

We benchmarked PLUMgrid OpenStack Networking Suite 3.0 with VXLAN encapsulation offloads on Mellanox ConnectX-3 Pro. Performance was measured for a single Virtual Domain with a Bridge topology configured on an IO Visor (running as a kernel module), with Mellanox ConnectX-3 Pro providing the underlying tunnel encapsulation/decapsulation functions. All test results were collected with the following hardware and software configuration:

Server: Dell PowerEdge R620
CPU: Dual Intel Xeon E5-2670 v2 @ 2.5 GHz (10 cores with HT)
RAM: 128 GB RDIMM
NIC: Mellanox ConnectX-3 Pro
Hypervisor: KVM on Ubuntu 12.04 with kernel 3.14-rc8 (mlx4 VXLAN offload support)
VNI: PLUMgrid OpenStack Networking Suite version 3.2 Build 8
Virtual Domain configuration: Single Virtual Bridge
Traffic: super_netperf [super_netperf 4 -H $IP -l 30 -- -m $SZ]; a rough equivalent of this harness is sketched after Figure 2 below
VM pair: Two VMs on the same virtual bridge but on different hypervisors/servers

Figure 2 shows the throughput comparison at a fixed packet size (9000 bytes) with and without VXLAN offloading. The improvement with VXLAN offloads is significant (generally an order of magnitude), and performance increases almost linearly until reaching the close-to-optimal 36 Gbps mark (with eight VM pairs generating traffic at maximum rates).

Figure 2: Throughput of PLUMgrid OpenStack Networking Suite 3.0 with and without Mellanox ConnectX-3 Pro VXLAN offloading
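The super_netperf wrapper referenced above simply launches several netperf streams in parallel and sums their reported throughput. For readers without the wrapper, a rough Python equivalent (assuming netperf is installed and a netserver instance is reachable on the target host; stream count, duration and message size mirror the command above):

# Rough stand-in for the super_netperf wrapper: run N netperf TCP_STREAM
# tests in parallel against the same host and sum the reported throughput.
import subprocess
import sys

def parallel_netperf(host: str, streams: int = 4, secs: int = 30,
                     msg_size: int = 9000) -> float:
    cmd = ["netperf", "-H", host, "-l", str(secs),
           "-t", "TCP_STREAM", "--", "-m", str(msg_size)]
    procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
             for _ in range(streams)]
    total = 0.0
    for p in procs:
        out, _ = p.communicate()
        # Throughput (10^6 bits/sec) is the last field of the result line.
        total += float(out.strip().splitlines()[-1].split()[-1])
    return total / 1000.0  # convert Mbps to Gbps

if __name__ == "__main__":
    print(f"{parallel_netperf(sys.argv[1]):.1f} Gbps aggregate")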

Figure 3 shows the PLUMgrid OpenStack Networking Suite 3.0 throughput with ConnectX-3 Pro's VXLAN offloading under different packet sizes. For packets larger than 512 bytes, traffic is processed at close to line rate. For very small packets (< 128 bytes), throughput is lower but still reaches 5-10 Gbps with 4 or more VM pairs.

Figure 3: Throughput under varying packet sizes of PLUMgrid OpenStack Networking Suite 3.0 with Mellanox ConnectX-3 Pro VXLAN offloading

Figure 4 shows CPU utilization against the obtained throughput at a fixed message size (9000 B) with VXLAN offload. The CPU idle percentage remains nearly constant on both the TX and RX ends while throughput grows to 36 Gbps.

Figure 4: Available (idle) CPU for PLUMgrid OpenStack Networking Suite 3.0 with Mellanox ConnectX-3 Pro VXLAN offloading
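The idle percentages in Figure 4 can be gathered with any standard tool such as mpstat; purely as an illustration, the Python sketch below derives the aggregate CPU idle percentage from two /proc/stat samples (Linux-only; the one-second interval is arbitrary):

# Minimal CPU-idle sampler in the spirit of the Figure 4 measurement:
# read the aggregate "cpu" line of /proc/stat twice and compute the
# share of idle time over the interval.
import time

def cpu_times():
    with open("/proc/stat") as f:
        # First line: "cpu  user nice system idle iowait irq softirq ..."
        return [int(v) for v in f.readline().split()[1:]]

def idle_percent(interval: float = 1.0) -> float:
    a = cpu_times()
    time.sleep(interval)
    b = cpu_times()
    delta = [y - x for x, y in zip(a, b)]
    return 100.0 * delta[3] / sum(delta)  # field 3 is idle time

if __name__ == "__main__":
    print(f"CPU idle: {idle_percent():.1f}%")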

Conclusion

PLUMgrid OpenStack Networking Suite 3.0 with Mellanox ConnectX-3 Pro provides a solution that delivers the automation and flexibility of a VNI at wire-speed traffic processing rates. The performance benchmarking results in this whitepaper substantiate that the right VNI and NIC architectures can keep networking performance uncompromised as the number of tenants and workloads in your cloud grows.

350 Oakmead Parkway, Suite 250, Sunnyvale, CA 94085, USA | 408.800.7586 | info@plumgrid.com | www.plumgrid.com