
OpenDaylight Performance Stress Tests Report v1.0: Lithium vs Helium Comparison
June 29, 2015

Introduction

In this report we investigate several performance aspects of the OpenDaylight controller and compare them between the Helium and Lithium releases. The investigation targets several objectives, such as controller throughput, switch scalability, and stability (sustained throughput).

For our evaluation we have used NSTAT [1], an open-source environment written in Python for easily writing SDN controller stress tests and executing them in a fully automated, end-to-end manner. The key components in NSTAT tests are the SDN controller and the SouthBound OpenFlow traffic generator. NSTAT is responsible for automating and sequencing every step of the testing procedure, including controller and generator lifecycle management and orchestration, test dimensioning (i.e. iterating over a large space of experimental parameters), online performance and statistics monitoring, and, finally, reporting. NSTAT performs exhaustive stress testing: it repeats a test over a multi-dimensional experimental space within the same run, so the user can execute multiple test cases in a single run.

In all tests we have used MT-Cbench [2] as the SouthBound traffic generator. MT-Cbench is a direct extension of the Cbench emulator [3] that uses threading to generate OpenFlow traffic from multiple streams in parallel. The motivation behind this extension was to boot up and operate network topologies with OpenDaylight much larger than those possible with the original Cbench. We note that, as with the original Cbench, the switches emulated by MT-Cbench implement only a subset of the OpenFlow protocol; therefore, the results presented here are expected to vary under real conditions. In future releases of NSTAT we plan to improve our generator towards better OpenFlow protocol support.

Results Summary

Our key findings in stress tests where the controller DataStore is involved reveal notable improvements in the Lithium DataStore, which translate into better throughput in scalability tests as well as better sustained throughput in stability tests. Specifically, in a one-hour test with aggressive SouthBound traffic specifically designed to stress the DataStore, Lithium exhibits steady behavior, unlike Helium, whose throughput slopes downward all the way to zero. In scalability tests that involve the DataStore we observe a significant improvement from 1 to 5K switches, reaching up to 8x. In the same cases involving just the OpenFlow plugin, the improvement is smaller yet evident (almost 1.5x in the best case).
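To make the test dimensioning described above concrete, the following is a minimal Python sketch of iterating over a multi-dimensional experimental space. The parameter names and the run_single_test function are hypothetical illustrations, not NSTAT's actual API.

```python
import itertools

# Hypothetical experimental space; keys and values are illustrative only.
EXPERIMENT_SPACE = {
    "controller_mode": ["RPC", "DataStore"],
    "generator_threads": [1, 2, 4, 8, 16],
    "switches_per_thread": [50],
}

def run_single_test(params):
    """Placeholder for one test case: boot the controller and generator,
    run the traffic, sample throughput, then tear everything down."""
    print("running test case:", params)

# Exhaustive stress testing: one test case per point in the Cartesian
# product of all parameter values, within a single run.
keys = list(EXPERIMENT_SPACE)
for values in itertools.product(*(EXPERIMENT_SPACE[k] for k in keys)):
    run_single_test(dict(zip(keys, values)))
```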

Experimental Setup

The setup used in all experiments presented in this report is summarized in the following table:

Operating system          CentOS 7, kernel 3.10.0
Server platform           Dell R720
Processor model           Intel Xeon CPU E5-2640 v2 @ 2.00GHz
Total system CPUs         32
CPU configuration         2 sockets, 8 cores/socket, 2 HW-threads/core
Main memory               256 GB, 1.6 GHz RDIMM
Controller under test 1   OpenDaylight Lithium RC0
Controller under test 2   OpenDaylight Helium SR3

Part A - Stability Stress Tests

Stability tests explore how controller throughput behaves over a large time window with a fixed topology connected to it. The controller accepts an aggressive rate of incoming OpenFlow packets from the switches (Packet-Ins), and its response throughput is sampled periodically. We evaluate two different controller configurations: in the first, the controller is configured to reply directly to the switches with a predefined message at the OpenFlow plugin level ("RPC mode"); in the second, the controller additionally performs updates in its DataStore ("DataStore mode"). In all cases, the MT-Cbench generator is configured to operate in Latency mode, meaning that each switch sends a Packet-In message only after it has received a reply for the previous one.
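As a rough illustration of Latency mode, the sketch below shows the per-switch send/wait loop. It is a simplified Python sketch, not MT-Cbench's actual implementation: the socket handling is minimal and the OpenFlow message encoding is omitted.

```python
import socket
import time

def latency_mode_loop(sock: socket.socket, duration_s: float) -> int:
    """Emulate one switch in Latency mode: keep exactly one Packet-In
    outstanding, sending the next only after the controller's reply."""
    packet_in = b"..."  # placeholder; a real encoded OpenFlow Packet-In goes here
    replies = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        sock.sendall(packet_in)   # send one Packet-In
        sock.recv(65535)          # block until the controller replies
        replies += 1              # only now may the next Packet-In be sent
    return replies
```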

Test A1: 500 switches, DataStore mode, 1 hour running time

Controller: DataStore mode
MT-Cbench generator: Latency mode, 50 switches per thread, 10 threads, 8 seconds between thread creations
360 internal generator repeats, 10 seconds each (1 h total running time)

Figure 1: Lithium throughput
Figure 2: Helium throughput

Test A2: 500 switches, RPC mode, 1 hour running time

Controller: RPC mode
MT-Cbench generator: Latency mode, 50 switches per thread, 10 threads, 8 seconds between thread creations
360 internal generator repeats, 10 seconds each (1 h total running time)

Figure 3: Lithium throughput
Figure 4: Helium throughput

This test does not trigger the controller DataStore, hence the stable sustained throughput observed in Helium, which outperforms Lithium by ~5-7%.
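Throughput in the plots of Figures 1-4 is sampled once per 10-second generator repeat. As a trivial sketch of that reduction (the per-window counts below are made-up illustration values, not measurements from this report):

```python
def throughput_samples(window_counts, window_s=10.0):
    """Reduce per-window reply counts (e.g. 360 windows of 10 s each for
    a one-hour run) to throughput samples in responses/second."""
    return [c / window_s for c in window_counts]

# Illustrative values only, not measured data.
print(throughput_samples([250_000, 248_000, 251_000]))
# -> [25000.0, 24800.0, 25100.0]
```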

Test A3: 500 switches, DataStore mode, 12 hours running time

Controller: DataStore mode
MT-Cbench generator: Latency mode, 50 switches per thread, 10 threads, 8 seconds between thread creations
4320 internal generator repeats, 10 seconds each (12 h total running time)

Figure 5: Lithium throughput
Figure 6: Helium throughput

Test A4: 16 switches, RPC mode, 1 hour running time, external generator repeats

Controller: RPC mode
MT-Cbench generator: Latency mode, 16 switches per thread, 1 thread
360 external generator repeats, 10 seconds each (1 h total running time)

Figure 7: Lithium throughput
Figure 8: Helium throughput

Helium's degradation can be attributed to the fact that this test emulates a constantly restarting topology, which appears to trigger the DataStore much more frequently than the pure RPC case (Test A2) does.
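The distinction between internal and external repeats matters here: internal repeats keep a single generator process, and hence its switch connections, alive across samples, whereas external repeats relaunch the generator for every sample, which is what lets Test A4 emulate a constantly restarting topology. A hypothetical driver for external repeats might look like the following; the mt-cbench command name and flags are illustrative, not the tool's actual CLI.

```python
import subprocess

# Hypothetical command line; the flag names are illustrative only.
CMD = ["mt-cbench", "--threads", "1", "--switches-per-thread", "16",
       "--duration-sec", "10"]

# 360 external repeats: a fresh generator process per 10-second sample,
# forcing the 16 emulated switches to reconnect on every iteration.
for _ in range(360):
    subprocess.run(CMD, check=True)
```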

Part B - Scalability Stress Tests

Scalability tests measure controller performance as the switch topology scales, giving a hint of the controller's upper bound. We perform two kinds of stress tests: active and idle scalability tests. With the active tests we explore how controller throughput scales as the topology grows; in this case we send OpenFlow traffic from booted topologies with a varying number of switches and measure controller throughput. With the idle tests we explore how to optimally boot up topologies of varying size; in this case we gradually boot up switches in idle mode and count installed switches and installation times. Switches are booted in batches, with a fixed delay between batches. As in the stability tests, we explore both operation modes of OpenDaylight (RPC and DataStore), and we again use the MT-Cbench generator in Latency mode.

Test B1: Active scalability test, DataStore mode

Controller: DataStore mode
MT-Cbench generator: Latency mode, 50 switches per thread, [1, 2, 4, 8, 16, 20, 40, 60, 80, 100] threads, 15 s inter-thread creation delay

Figure 9: Lithium throughput
Figure 10: Helium throughput

Lithium starts outperforming Helium for switch counts larger than 400. Beyond that point, it consistently exhibits better relative throughput by an ever-increasing factor that reaches almost 8x at 5000 switches.

Test B2: Active scalability test, RPC mode

Controller: RPC mode
MT-Cbench generator: Latency mode, 50 switches per thread, [1, 2, 4, 8, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100] threads, 15 s inter-thread creation delay

Figure 11: Lithium throughput
Figure 12: Helium throughput

Although not as notable as in the DataStore active scalability case, Lithium exhibits better performance than Helium and maintains it for all switch counts.
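Test B3 below exercises the idle boot-up procedure described at the start of Part B. As a rough sketch of that procedure, with illustrative function names (start_batch and poll_installed are placeholders, not NSTAT functions):

```python
import time

def boot_topology(total_switches, batch_size, batch_delay_s,
                  start_batch, poll_installed):
    """Boot idle switches in fixed-size batches with a fixed inter-batch
    delay; return (installed switch count, total boot-up time in seconds).
    start_batch(n) launches n emulated switches; poll_installed() queries
    the controller for how many switches it has installed."""
    t0 = time.monotonic()
    booted = 0
    while booted < total_switches:
        n = min(batch_size, total_switches - booted)
        start_batch(n)                # launch the next batch of idle switches
        booted += n
        time.sleep(batch_delay_s)     # fixed delay between batches
    return poll_installed(), time.monotonic() - t0
```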

Test B3: Idle scalability test, RPC mode

Controller: RPC mode
MT-Cbench generator: Latency mode, 50 switches per thread, [1, 2, 4, 8, 16, 20, 30, 40, 50, 60, 70, 80, 90, 100] threads, inter-thread creation delay of [500, 1000, 2000, 4000, 8000, 16000] ms

Figure 13: Lithium topology boot-up time
Figure 14: Helium topology boot-up time

Helium manages to boot up large topologies successfully with a much smaller inter-thread delay than Lithium (2 seconds), and hence in considerably less total time. An inter-thread delay that always works for Lithium is 16 seconds.

Future Releases

In future releases of this report we are going to additionally include:

- SouthBound stability and scalability tests with Mininet
- NorthBound stability and scalability tests
- system-level and controller-level performance statistics and profiling results

This functionality is going to be part of future NSTAT releases.

References

[1] https://github.com/intracom-telecom-sdn/nstat
[2] https://github.com/intracom-telecom-sdn/nstat/wiki/mt-cbench
[3] https://github.com/andi-bigswitch/oflops/tree/master/cbench

Contributors

Nikos Anastopoulos (nanastop@intracom-telecom.com)
Vaios Bitos (vbitos@intracom-telecom.com)
Panagiotis Georgiopoulos (panageo@intracom-telecom.com)
Apostolis Glenis (aglenis@intracom-telecom.com)
Andreas Pantelopoulos (panandr@intracom-telecom.com)
Konstantinos Papadopoulos (konpap@intracom-telecom.com)