OpenStack Enhancements to Support NFV Use Cases
Steve Gordon, Red Hat
Adrian Hoban, Intel
Alan Kavanagh, Ericsson




Agenda
o OpenStack Engagement Model for NFV
o Kilo Extensions for NFV
o Evolving the Cloud to Support NFV

OpenStack Engagement Model for NFV

How did we get here?

ETSI NFV ISG
Decoder ring:
o European Telecommunications Standards Institute
o Network Functions Virtualization
o Industry Specification Group
Putting the "standards" in standards body!

ETSI NFV ISG
Phase 1:
o Convergence on network operator requirements
o Including applicable existing standards
o Developing new requirements to stimulate innovation and an open ecosystem of vendors

ETSI NFV ISG
Phase 2:
o Grow an interoperable VNF ecosystem
o Thoroughly specify the reference points and requirements defined in Phase 1
o Achieve broader industry engagement
o Clarify how NFV intersects with SDN and related standards/industry/open source initiatives

NFV ARCHITECTURE

Open Platform for NFV (OPNFV)
Establishing an open source reference platform including:
o NFV Infrastructure (NFVI)
o Virtual Infrastructure Management (VIM)
Focused on:
o Consistency, performance, and interoperability between components.
o Working with existing upstream communities.

NFV ARCHITECTURE - Initial OPNFV Focus

Open Platform for NFV (OPNFV)
Growing list of projects:
o Requirements projects, e.g. Fault Management
o Integration and Testing projects, e.g. IPv6-enabled OPNFV
o Collaborative Development projects, e.g. Software Fastpath Service Quality Metrics
o Documentation projects

Telco Working Group
Mission:
o Identify Telco/NFV use cases
o Define and prioritize requirements internally
o Harmonize inputs into OpenStack projects: blueprint/patch creation, submission, and review
Move discussion closer to the OpenStack projects.

OpenStack
Large community of technical contributors working in a wide array of loosely governed projects. NFV requirements fall across many of these, so buy-in is required from these diverse groups of contributors.

OpenStack
Most projects are moving to a specification process for approval of major changes. Ingredients of a good specification:
o Problem description, incl. use cases
o Concrete design proposal
o Someone to implement it!

Working Together = Success!

Current State
Overlap exists between the various groups in:
o Mission
o Membership
o Scope
o Activities
Navigating can be tough!

Working from both ends
ETSI NFV → OPNFV → Telco Working Group → OpenStack
The merging of worlds happens in the middle!

Kilo Extensions for NFV
Based on OpenStack community contributions & collaborations

Memory: Non-Uniform Memory Access (NUMA)
o Memory proximity: performance and latency characteristics differ depending on the core a process is executing on and where the memory that process is accessing is located.
[Diagram: two processor sockets, each with four cores and locally attached memory, running application processes]
Optimizing placement for memory proximity enables greater performance & efficiency.
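A quick way to inspect this topology on a Linux compute host (a minimal sketch, assuming the numactl package is installed):

    # Show NUMA nodes, the CPUs belonging to each node, and local memory per node
    numactl --hardware
    # Show per-node memory allocation statistics
    numastat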

Memory: Filter Extensions - NUMA
o NUMATopologyFilter (numa_topology_filter)
o Helps to co-locate CPU core allocations on a single socket (when possible).
o The resource tracker records core/socket consumption and filters to the available subset of suitable platforms.
Co-location helps with cache efficiency for faster inter-process data communication, and enables the guest OS to allocate local memory for greater performance & efficiency.

Memory: Filter Extensions - NUMA I/O Awareness
o Adds the ability to select the socket based on the I/O device requirement.
o E.g. what if you'd prefer/require network access on NIC B?
[Diagram: two processor sockets with application processes; NIC A is attached to socket 0 and NIC B to socket 1]
Enables improved I/O performance.

Simultaneous Multi-Threading (SMT)
o On Intel platforms, SMT runs 2 threads at the same time per core to take advantage of the wide execution engine: keep it fed with multiple threads and hide the latency of a single thread.
o Power-efficient performance feature: very low die-area cost, can provide a significant performance benefit depending on the application, and is much more efficient than adding an entire core.
[Diagram: processor execution-unit occupancy over time (processor cycles), with and without SMT; each box represents a processor execution unit]
SMT enhances performance and energy efficiency.

Simultaneous Multi-Threading (SMT)
o Sample Linux enumeration of cores: sibling hyper-threads appear as separate pCPUs, e.g. pCPU 0 and pCPU 4 sharing an execution unit on socket 0, and pCPU 8 and pCPU 12 on socket 1.
o The Linux scheduler (in the host) manages workload (process) allocation to CPUs.
[Diagram: 16 pCPUs mapped in sibling pairs onto 8 execution units across two processor sockets]
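To see this enumeration on a host (a hedged sketch using standard Linux tooling):

    # List each logical CPU with its core and socket; SMT siblings share a CORE value
    lscpu --extended=CPU,CORE,SOCKET
    # Sibling threads of logical CPU 0, e.g. "0,4" in the layout shown above
    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list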

CPU Pinning - Prefer Policy (in Kilo)
[Diagram: Guest OS A and Guest OS B, each with vCPU 0 and vCPU 1, pinned onto pCPUs across two processor sockets]
Prefer Policy: place vCPUs on pCPU siblings (when SMT is enabled).

CPU Pinning - Separate Policy (for Liberty)
[Diagram: vCPUs of each guest spread so that no two land on sibling pCPUs]
Separate Policy: the scheduler will not place vCPUs from the same guest on pCPU siblings.

CPU Pinning - Isolate Policy (for Liberty)
[Diagram: each guest vCPU gets a pCPU whose sibling is left unallocated]
Isolate Policy: Nova will not place vCPUs on any pCPU that has an allocated sibling.

CPU Pinning - Avoid Policy (for Liberty)
[Diagram: guests placed only on hosts without SMT]
Avoid Policy: the Nova scheduler will not place the guest on a host with SMT enabled.
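These policies are requested through flavor extra specs. A hedged sketch, using the key spelling that appears later in this deck (the upstream specification may settle on hw:cpu_thread_policy, and only "prefer" is available in Kilo):

    # Pin guest vCPUs to dedicated pCPUs and request a thread placement policy
    nova flavor-key nfv-node set hw:cpu_policy=dedicated hw:cpu_threads_policy=prefer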

Huge Page Tables
Translation Lookaside Buffer (TLB):
o Memory component that accelerates address translation.
o Caches a subset of address translations from the page table.
Huge page sizes (e.g. 1 GB):
o The TLB caches a greater range of memory translations.
o Helps reduce TLB misses.
[Flowchart: memory address translation request → check TLB cache (small 4 KB & huge 1 GB page entries) → on a hit, fast translation; on a miss, fetch the page table entry from memory]
Small page table entries (4 KB) can result in a greater number of TLB misses.
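To check what the host currently offers (a minimal sketch; the paths are standard Linux procfs/sysfs locations):

    # Summary of huge page counts and the default huge page size
    grep Huge /proc/meminfo
    # Per-size pools, e.g. hugepages-2048kB and hugepages-1048576kB for 2 MB and 1 GB pages
    ls /sys/kernel/mm/hugepages/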

Optimize Host for NFV - Huge Page Tables and CPU Isolation
On the compute node(s), edit /etc/default/grub:

    GRUB_CMDLINE_LINUX="intel_iommu=on default_hugepagesz=2M hugepagesz=1G hugepages=8 isolcpus=1,2,3,5,6,7,9,10,11,13,14,15"

Then regenerate the GRUB configuration and reboot:

    sudo grub-mkconfig -o /boot/grub/grub.cfg
    sudo reboot
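After the reboot it is worth confirming the settings took effect (a hedged sketch; the per-node path assumes the 1 GB pages configured above):

    # Kernel command line actually used at boot
    cat /proc/cmdline
    # 1 GB huge pages available on NUMA node 0
    cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages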

Optimize for NFV: Create Host Aggregate
Create an aggregate for NFV usage:

    nova aggregate-create nfv-aggregate
    nova aggregate-set-metadata nfv-aggregate nfv=true

Add hosts to the NFV aggregate, as sketched below.
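For example (a hedged sketch; the hostname is illustrative):

    # Place a compute host into the NFV aggregate
    nova aggregate-add-host nfv-aggregate compute-01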

Optimize for NFV: Create Host Aggregate
N.B.: It is good practice to also create an aggregate for non-NFV use cases:

    nova aggregate-create default-usage
    nova aggregate-set-metadata default-usage nfv=false

Update all other flavors to include the metadata:

    nova flavor-key <flavor-name> set aggregate_instance_extra_specs:nfv=false

Add hosts to the default aggregate.

Optimize for NFV: /etc/nova/nova.conf

    [DEFAULT]
    pci_alias={"name":"niantic","vendor_id":"8086","product_id":"10fd"}
    pci_passthrough_whitelist={"address":"0000:08:00.0","physical_network":"physnetNFV"}
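The vendor/product IDs and PCI address used above can be read from the host (a minimal sketch; the grep pattern is illustrative):

    # List network devices with their [vendor:device] IDs and PCI addresses
    lspci -nn | grep -i ethernet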

Optimize for NFV: /etc/nova/nova.conf

    [DEFAULT]
    scheduler_default_filters = RamFilter, ComputeFilter, AvailabilityZoneFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, AggregateInstanceExtraSpecsFilter, PciPassthroughFilter, NUMATopologyFilter

Optimize for NFV: /etc/nova/nova.conf

    [libvirt]
    # host-model or host-passthrough
    cpu_mode=host-model
    vcpu_pin_set=1,2,3,5,6,7,9,10,11,13,14,15

Optimize for NFV: ml2_conf.ini
Configure /etc/neutron/plugins/ml2/ml2_conf.ini:

    [ml2]
    tenant_network_types = vlan
    type_drivers = vlan
    mechanism_drivers = openvswitch,sriovnicswitch

    [ml2_type_vlan]
    network_vlan_ranges = physnetNFV:50:100

Optimize for NFV: ml2_conf_sriov.ini
Configure /etc/neutron/plugins/ml2/ml2_conf_sriov.ini:

    [ml2_sriov]
    supported_pci_vendor_devs = 8086:10fb
    agent_required = False

    [sriov_nic]
    physical_device_mappings = physnetNFV:eth1
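The mapping above assumes eth1 is an SR-IOV capable port with virtual functions enabled. A hedged sketch of checking and enabling VFs via the sysfs interface (kernel 3.8+; older drivers use a max_vfs module parameter instead):

    # How many VFs are currently enabled on the physical function
    cat /sys/class/net/eth1/device/sriov_numvfs
    # Enable 4 VFs
    echo 4 | sudo tee /sys/class/net/eth1/device/sriov_numvfs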

Optimize for NFV: Create VNF Flavor

    nova flavor-create nfv-node auto 1024 0 4
    nova flavor-key nfv-node set \
      hw:cpu_policy=dedicated \
      hw:cpu_threads_policy=prefer \
      capabilities:cpu_info:features=aes \
      pci_passthrough:alias=niantic:1 \
      aggregate_instance_extra_specs:nfv=true

Optimize for NFV: Create VNF Flavor

    nova flavor-key nfv-node set \
      hw:numa_nodes=1 \
      hw:numa_cpus.0=0,1,2,3 \
      hw:numa_mempolicy=strict \
      hw:numa_mem.0=1024 \
      hw:mem_page_size=2048

Optimize for NFV: Create Network

    neutron net-create NFV-network --provider:physical_network=physnetNFV --provider:network_type=vlan
    neutron subnet-create NFV-network <CIDR> --name <Subnet_Name> --allocation-pool start=<start_ip>,end=<end_ip>
    neutron port-create NFV-network --binding:vnic_type direct

Optimize for NFV: Boot VNF VM

    nova boot --flavor nfv-node --image <image> --nic port-id=<from port-create command> <vm name>

Other Notable Changes: New ML2 OVS driver for ovs+netdev-dpdk
o High-performance user-space based vswitching.
o High-performance path to the VM (vhost-user), with a new VIF type in Nova.
o Available on stackforge/networking-ovs-dpdk.
o Supports DVR in VLAN and VXLAN modes.
[Diagram: ovs-vswitchd doing user-space forwarding with tunnel support; DPDK libraries and a polled-mode driver drive the NIC; a DPDK vhost netdev connects to the VM's virtio device via qemu, bypassing kernel packet processing, alongside the traditional TAP netdev/socket path]
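A hedged sketch of wiring this up by hand with ovs-vsctl, assuming an OVS build with DPDK support and OVS 2.4-era port naming (DPDK physical ports must be named dpdkN):

    # Userspace (netdev) datapath bridge
    ovs-vsctl add-br br-nfv -- set bridge br-nfv datapath_type=netdev
    # DPDK-bound physical NIC
    ovs-vsctl add-port br-nfv dpdk0 -- set Interface dpdk0 type=dpdk
    # vhost-user port for a VM's virtio interface
    ovs-vsctl add-port br-nfv vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser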

Other Notable Changes:
VLAN Trunking API Extension:
o New network property that indicates a requirement for transparent VLANs.
o ML2 drivers that indicate they do not support transparent VLANs, or that lack the attribute, will fail to create the transparent network.
o The LB, VXLAN and GRE drivers support VLAN transparent networks; the VLAN and OVS drivers do not.
Service VM Port Security (Disable) Extension:
o Neutron's security group support always applies anti-spoof rules to VMs. This allows traffic to originate and terminate at the VM as expected, but prevents traffic from passing through the VM.
o Disabling port security is required in cases where the VM routes traffic through itself.
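A hedged sketch of disabling port security on a service VM's port (the port ID is a placeholder; flag spelling varies across python-neutronclient versions, and the port-security extension must be loaded):

    # Detach security groups and turn off anti-spoof filtering for this port
    neutron port-update <port-id> --no-security-groups --port_security_enabled=False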

VNF Deployment Considerations & Future Work

Evolution of node delivery
[Diagram: progression of delivery models for optimized applications, from custom hardware to virtualized platforms]

Customized Application and Hardware: Optimized Service
[Stack: APPLICATION → Bin/Libs → Host OS → Custom Hardware Platform]
o Application designed around custom hardware and OS.
o Host OS optimized for the custom hardware.

Application on Industry Standard High Volume Servers: Optimized VNF on Bare Metal
[Stack: APPLICATION → Bin/Libs → Host OS → Industry Standard Hardware]
o Application designed for the x86 platform.
o Optimized host OS.
o E.g. HDS 8000.

Type 2 Hardware Virtualization
[Stack: VMs (APP, Bin/Libs, Guest OS) → Hypervisor → Host OS → Physical Server]
o Applications run inside VMs; a virtual machine can host any guest OS.
o H/W emulation exposes an instruction set to the guest OS.
o CPU support for virtualization: Intel VT-x, Intel VT-d.

Linux Containers
[Stack: Containers (APP, Bin/Libs) → Host OS → Physical Server]
o Linux containers run on bare metal, with applications sharing common libraries and the kernel.
o Containers share the same OS kernel and are separated by namespaces.

Containers inside VMs
[Stack: VM-1 and VM-2, each running containers (APP, Bin/Libs) on a Guest OS → Hypervisor → Physical Server]

Which deployment option suits my VNF?

Hypervisor:
o Provides density and isolation to run different guest OSes.
o Virtualizes the h/w platform so VNFs can run on any x86 machine.
o Platform resources, e.g. CPU and memory, can be shared or dedicated and allocated to different VNFs.

Bare metal:
o Applications that consume all resources on the blade, and mission-critical applications.
o Infrastructure applications that perform heavy user-plane and control-plane packet processing.
o Dedicated resource isolation due to regulatory requirements.
o No hypervisor license fee (i.e. capex reduction); removes overhead and potential layers of failure.

Container:
o Suitable for VNFs/apps that can share a common kernel.
o Offers the highest form of density and removes multiple guest and hypervisor overheads.
o H/W acceleration support in progress.
o Reduced isolation compared to VMs.

Answer:
o All 3 deployment options are needed; VNFs/apps will benefit differently in each deployment option.
o By supporting all 3 deployment options in an IaaS manager, we can support all possible VNF/app deployment models.

Summary
o Transparent collaboration between ETSI NFV, OPNFV, the Telco Working Group and the OpenStack core projects is vital to enabling OpenStack for NFV.
o Steady but meaningful progress is being made on NFV enablement.
o Hypervisor, bare metal and container deployment options in an IaaS system are all needed to support the full range of VNF/app types.

Q&A

References - Contributing
ETSI:
o https://portal.etsi.org/tbsitemap/nfv/nfvmembership.aspx
OPNFV:
o https://www.opnfv.org/developers/how-participate
TelcoWG:
o https://wiki.openstack.org/wiki/telcoworkinggroup
OpenStack:
o https://wiki.openstack.org/wiki/how_to_contribute

Intel Disclaimers
Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer, or learn more at intel.com.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Intel, the Intel logo, and Intel Advanced Encryption Standard New Instructions (Intel AES-NI) are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others. © 2015 Intel Corporation.