Assessing the Performance of Virtualization Technologies for NFV: a Preliminary Benchmarking




Roberto Bonafiglia, Ivano Cerrato, Francesco Ciaccia, Mario Nemirovsky, Fulvio Risso
Politecnico di Torino, Italy

Agenda

- Goals
- Motivation
- Technologies overview
  - Execution environments (KVM, Docker)
  - Virtual switches (OvS, OVDPDK)
- Experimental results
- Conclusions

Goals

- Analyze the performance of different virtualization technologies when used to implement NFV
  - Virtual switches: Open vSwitch (OvS), DPDK-based OvS (OVDPDK)
  - Execution environments: Kernel-based Virtual Machine (KVM), Docker containers
- Provide an initial evaluation of the overhead they introduce in an NFV environment
  - Throughput
  - Latency

Why new evaluations?

- NFV is heavily based on cloud computing technologies
  - VNFs are executed in virtual machines or lightweight containers
  - Paths are implemented through vswitches
- However, many differences exist between cloud computing and NFV:
  - Compute-bound tasks (cloud) vs. network I/O-intensive tasks (NFV)
  - Packets have to traverse several VNFs: the vswitch may have to handle the same packet multiple times
  - Existing hardware accelerators are not appropriate
    - Generic Receive Offload (GRO) / TCP Segmentation Offload (TSO)
    - Some VNFs (e.g., NAT) need to work on each single Ethernet frame, not on TCP/UDP segments
- Hence, NFV services may behave differently than in a cloud environment

Networking with KVM

- KVM: a kernel module that turns Linux into a hypervisor
- The guest operating system is executed within QEMU
  - A process running in user space in the host
  - The VM memory is part of the QEMU memory
  - Each vCPU corresponds to a different QEMU thread in the host

[Figure: virtual machine (QEMU process) with eth0/eth1 virtio-net vNICs, connected through virtio queues and vhost kernel threads to tap interfaces, the vswitch, and the physical interfaces]

Networking with KVM (cont'd)

- The guest operating system accesses the virtual NICs through the virtio-net driver
  - A driver optimized for the virtualization context
- Each vNIC is associated with a vhost kernel thread in the host
  - vhost is connected to the vNIC on one side and to a tap interface on the other
  - The tap interface is in turn connected to the vswitch
  - vhost waits for interrupts and moves packets between the vNIC and the tap interface

[Figure: same data path as the previous slide]
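To make the tap plumbing concrete, here is a minimal C sketch (our illustration, not code from the presentation) of how a process such as QEMU obtains a tap interface from the kernel through /dev/net/tun; the interface name tap0 is an arbitrary example.

```c
/* Minimal sketch: allocating a tap interface, as QEMU does to attach
 * a VM's virtio-net queues to the host network stack. Illustrative
 * only; requires CAP_NET_ADMIN, error handling abbreviated. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

int tap_alloc(const char *name)
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) { perror("open(/dev/net/tun)"); exit(1); }

    memset(&ifr, 0, sizeof(ifr));
    /* IFF_TAP: Ethernet frames; IFF_NO_PI: no extra metadata header */
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); exit(1); }

    /* Frames written to fd show up on the tap interface (and can be
     * bridged into the vswitch); frames from the vswitch are read here. */
    return fd;
}

int main(void)
{
    int fd = tap_alloc("tap0");      /* "tap0" is an arbitrary name */
    printf("tap device ready, fd=%d\n", fd);
    close(fd);
    return 0;
}
```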

Networking with Docker containers

- A lightweight virtualization mechanism
  - Does not run a complete operating system: all containers share the same kernel
  - Just a set of namespaces that limit the resources visible to a user-space process
  - If no user-space process is running in a container, it is not associated with any thread in the host

[Figure: two containers, each in its own network namespace, connected through veth interfaces to the vswitch and the physical interfaces in the host network namespace]

Networking with Docker containers (cont'd)

- Each container corresponds to a different network namespace
  - It is only aware of the vNICs in that namespace
  - It has its own network stack
- Packets cross the namespace boundary through veth interface pairs

[Figure: same layout as the previous slide]
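The namespace boundary can be illustrated with a minimal C sketch (ours, not from the presentation) that joins a container's network namespace via setns(2); the PID in the path below is a made-up example.

```c
/* Minimal sketch: entering a container's network namespace with
 * setns(2). Illustrative only; PID 4242 is a made-up container PID. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Every process exposes its namespaces under /proc/<pid>/ns;
     * a container's init PID gives us a handle to its network one. */
    int fd = open("/proc/4242/ns/net", O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }

    /* After setns(), sockets we create and interfaces we list belong
     * to the container's network stack, not the host's. */
    if (setns(fd, CLONE_NEWNET) < 0) { perror("setns"); exit(1); }
    close(fd);

    /* e.g., run "ip link" to see only the container's vNICs */
    execlp("ip", "ip", "link", (char *)NULL);
    perror("execlp");
    return 1;
}
```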

Open vSwitch I/O model

- OvS consists of a kernel module and a user-space daemon
  - Kernel module: a cache of the most recently matched rules
  - User-space daemon: contains the full forwarding table of the switch; can be programmed through OpenFlow
- Both the kernel module and the user-space daemon work in interrupt mode

[Figure: ovs-vswitchd (slow path) in user space, ovs_mod.ko (fast path) in kernel space]

Open vSwitch I/O model (cont'd)

- No thread is associated with the kernel module
  - It is a callback invoked by the software interrupt raised when a packet becomes available on a network interface (physical NIC, tap, etc.)
- For packets coming from a physical NIC, OvS is executed in the context of the ksoftirqd kernel thread
- For packets coming from a virtual NIC, OvS is executed in the context of the sender process
  - The vhost kernel thread in the case of VMs
  - The user-space process in the case of Docker

[Figure: same as the previous slide]
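The fast-path/slow-path split can be summarized with the following self-contained toy sketch in C (ours, not actual OvS code; the flow cache is a trivial array and the upcall is simulated by a function call instead of a Netlink message):

```c
/* Toy sketch of the OvS fast/slow path split. NOT real OvS code. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct flow_key   { unsigned char dst_mac[6]; };   /* simplified key */
struct flow_entry { struct flow_key key; int out_port; bool valid; };

#define CACHE_SIZE 256
static struct flow_entry cache[CACHE_SIZE];        /* "kernel" cache */

static struct flow_entry *cache_lookup(const struct flow_key *k)
{
    for (int i = 0; i < CACHE_SIZE; i++)
        if (cache[i].valid && !memcmp(&cache[i].key, k, sizeof(*k)))
            return &cache[i];
    return NULL;
}

/* Simulated slow path: in real OvS, ovs-vswitchd consults the full
 * OpenFlow table, then installs a cache entry so later packets stay
 * in the kernel. */
static int upcall_slow_path(const struct flow_key *k)
{
    int port = k->dst_mac[5] % 4;                  /* toy decision */
    cache[k->dst_mac[5] % CACHE_SIZE] =
        (struct flow_entry){ .key = *k, .out_port = port, .valid = true };
    return port;
}

/* Fast path: would run in softirq/sender context, with no own thread. */
static void receive_packet(const struct flow_key *k)
{
    struct flow_entry *e = cache_lookup(k);
    int port = e ? e->out_port : upcall_slow_path(k);
    printf("forward to port %d (%s path)\n", port, e ? "fast" : "slow");
}

int main(void)
{
    struct flow_key k = { .dst_mac = {0, 1, 2, 3, 4, 5} };
    receive_packet(&k);   /* miss: slow path, installs cache entry */
    receive_packet(&k);   /* hit: fast path                        */
    return 0;
}
```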

DPDK-based OvS I/O model

- Consists of a number of user-space threads that continuously iterate over the physical and virtual NICs
  - Polling mode (vs. interrupt mode in vanilla OvS)
- Thanks to DPDK:
  - OvS accesses the physical NICs bypassing the operating system
  - Zero-copy data transfer between the physical NICs and the vswitch
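The polling model can be illustrated with a minimal DPDK-style forwarding loop in C (our sketch, not the OVDPDK source; it assumes rte_eal_init() and port/queue configuration were done beforehand, and that the loop is launched on a dedicated lcore):

```c
/* Minimal sketch of a DPDK-style polling loop, similar in spirit to
 * the forwarding core of a DPDK vswitch. Illustrative only. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void poll_loop(uint16_t rx_port, uint16_t tx_port)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {                      /* busy-poll: one core at 100% CPU */
        /* Pull up to BURST_SIZE packets straight off the NIC ring,
         * bypassing the kernel: no interrupts, no extra copies. */
        uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        /* A real vswitch would perform a flow-table lookup here. */
        uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

        /* Free whatever the TX ring could not accept. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
}
```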

Packet flows: the VM case

- The processing of a packet is split across multiple threads:
  - ksoftirqd
  - vhost
  - QEMU process (running the VNF code)

[Figure: packet path through OvS and vhost in the kernel, up to the VNF code in the QEMU process, and back]

Packet flows: the container case

- Here the VNF is the Linux bridge
  - Its data plane is a kernel-level callback, executed in the context of the thread that delivers the packet to the switch
- A single thread executes all the operations associated with a packet
  - No parallelization
  - Poor performance

[Figure: packet path from ksoftirqd through OvS and the in-kernel Linux bridge within the Docker network namespace]

Packet flows: the container case (cont'd)

- By running in the container a VNF that is an actual user-space process (here, a libpcap-based bridge), parallelization can be achieved in Docker as well

[Figure: packet path from ksoftirqd and OvS in the kernel up to the libpcap-based bridge process in the Docker namespace, and back]
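For reference, a libpcap-based bridge of the kind used as the Docker VNF can be sketched as follows (our illustration, not the authors' code; interface names are arbitrary, and a real bridge would also forward in the reverse direction, e.g., from a second thread):

```c
/* Minimal sketch of a libpcap-based bridge: forward every frame
 * received on one interface out of the other. Illustrative only;
 * one direction shown. */
#include <pcap/pcap.h>
#include <stdio.h>
#include <stdlib.h>

static pcap_t *open_iface(const char *name)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    /* snaplen 65535, promiscuous mode, 1 ms read timeout */
    pcap_t *p = pcap_open_live(name, 65535, 1, 1, errbuf);
    if (!p) { fprintf(stderr, "%s: %s\n", name, errbuf); exit(1); }
    return p;
}

int main(void)
{
    pcap_t *in  = open_iface("eth0");   /* container-side vNICs */
    pcap_t *out = open_iface("eth1");

    for (;;) {
        struct pcap_pkthdr *hdr;
        const unsigned char *data;
        int rc = pcap_next_ex(in, &hdr, &data);
        if (rc == 1) {
            /* One user-space hop per frame: the packet is copied up
             * to this process and injected back into the kernel. */
            pcap_sendpacket(out, data, hdr->caplen);
        } else if (rc < 0) {            /* error or end of capture */
            break;
        }                               /* rc == 0: timeout, poll again */
    }
    pcap_close(in);
    pcap_close(out);
    return 0;
}
```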

Testbed

- Physical machines: Intel Core i7-4770 @ 3.40 GHz (4 cores, 8 threads), 32 GB of memory, Fedora 20
- Physical network interfaces: 10 Gigabit Ethernet, Intel X540-AT2
- Variable number of VNFs chained through a vswitch
  - KVM VNF: Linux bridge; each VM has a single virtual CPU (i.e., it corresponds to one thread in the host)
  - Docker VNF: simple libpcap-based bridge

[Figure: sender and receiver machines connected to the server under test, where the VNF chain (VNF -> vswitch -> VNF -> ...) is deployed]

Single-chain throughput

[Graphs: throughput in Gbps (point graph) and Mpps (bar graph)]

- OvS, VMs vs. Docker
  - Performance is slightly better with VMs, and tends to become equivalent with more VNFs in the chain
- Docker: Linux bridge vs. libpcap-based bridge
  - Bare-metal throughput: the Linux bridge outperforms the libpcap-based bridge
  - With several chained containers, the Linux bridge is much worse (3 Mbps with 8 chained containers)
- Throughput is inversely proportional to the length of the chain
  - More VNFs means more contention on the physical resources (CPU, cache, TLB)

Multiple chains

- The graph shows the results for 64B packets
- Using multiple chains reduces the overall throughput
- Having multiple VMs in parallel (this graph) or in sequence (previous slide) produces comparable results

OVDPDK, single chain (64B packets, VMs)

- Results obtained by installing DPDK inside the VM
  - The l2fwd sample application provided with DPDK has been used
  - Running the traditional Linux bridge (without DPDK) did not provide improvements over standard OvS: throughput close to zero
- Assigning more cores to the switch improves performance, as long as (#cores of the switch + #VMs) <= number of cores of the server

Latency

- As with throughput, latency also worsens noticeably when exceeding the number of CPU cores (with OVDPDK)
- For VMs and Docker, latency grows linearly with the number of elements in the chain

Conclusions

- Docker provides acceptable results only for VNFs implemented as a specific user-space process
  - Containers are unsuitable for VNFs implemented as kernel callbacks
  - Thanks to the low overhead introduced by containers, a user-space program in Docker provides better latency and almost the same throughput as kernel callback-based VNFs within VMs
- OVDPDK provides better performance than OvS when used with DPDK-based VNFs within VMs
  - Polling working model: this limits its usage to servers running a small number of VNFs
  - Zero copy between the physical NICs and the switch

Questions?