Virtual Network Overlays Product / RFI Requirements, Version 1.0. A white paper from the ONUG Overlay Working Group, October 2014.
Definition of Open Networking

Open networking is a suite of interoperable software and/or hardware that delivers choice and design options to IT business leaders, service providers, and cloud providers. At its core, open networking is the separation, or decoupling, of specialized network hardware and software, all in an effort to give IT architects options in the way they choose to design, provision, and manage their networks. These technologies must be based on industry standards. The standards can be de facto, as adopted by a large consortium of the vendor community; open, in the sense that they are community based; or defined as standards by the prevailing standards bodies. Open networking hopes to deliver on two promises:

1) Decoupling of network hardware and software, which mitigates vendor lock-in and shifts network architecture options to users
2) Significant reduction of the total cost of ownership model, especially operational expense

Executive Summary

This document defines a common set of functional solution requirements for one of the open networking use cases identified by the ONUG community: virtual overlay networking. The content of this document is intended as general guidance for enterprise IT end users to compare vendor solutions and develop formal RFI specifications, and for IT vendors to develop and align product requirements. Defining a common set of solution requirements aligns with ONUG's goal to drive the IT vendor community to deliver open, interoperable networking solutions, giving IT end users maximum choice and flexibility in deploying open networking solutions. The expectation is that this document provides a common baseline, covering the majority of enterprise deployment requirements for network overlay virtualization solutions. The assumption is that this set of requirements will be complemented by enterprise-specific requirements to meet specific deployment needs.
Finally, the expectation is that the scope of requirements defined in this document will evolve; hence the versioning of this document.

Virtual Networks/Overlays

Virtual overlays are networking solutions characterized by one or more packet encapsulation techniques used to forward end-to-end service traffic, independent of the underlying transport network technology and architecture. Early successful implementations of this type of architecture were MPLS layer-3 and layer-2 VPNs (Virtual Private Networks), introduced in the late 1990s. Since then, vendors and industry bodies have introduced various other network overlay solutions, some based on open and some on proprietary technologies.

The scope of network overlay technology solutions in this document is based on data center deployments and IP-based networks. Note that similar network overlay techniques can be applied to other networking segments, such as branch, WAN, and campus; these scenarios are out of scope for this document at this point. The same applies to network overlays for layer-2 networks; requirements for these types of network deployment scenarios are not covered at this point.

Figure 1 shows a general framework illustrating the main architecture components of a virtual network overlay solution. Solution requirements are organized along these functional building blocks in the following sections.

Figure 1. Functional Framework for Virtual Network Overlays (orchestration, overlay controller(s), and network overlay end-points on edge nodes, connected by a network overlay tunnel over an L2/L3 network via open interfaces)
Open Networking User Group (ONUG)

ONUG is one of the largest industry user groups in the networking and storage sectors. Its board is made up exclusively of IT business leaders, with representation from Fidelity Investments, FedEx, Bank of America, UBS, Cigna, Pfizer, JPMorgan Chase, Citigroup, Credit Suisse, Gap, Inc., and Symantec. The ONUG mission is to guide and accelerate the adoption of open networking solutions that meet user requirements, as defined through use cases, proofs of concept, hackathons, and deployment examples, to ensure open networking promises are kept.

The ONUG community is led by IT business leaders and aims to drive industry dialogue to set the technology direction and agenda with vendors. To that end, ONUG hosts two major conferences per year where use cases are defined and members vote to establish a prioritized list of early-adopter open networking projects that communicate propensity to buy and budget development. The vendor community stages proofs of concept based upon ONUG use cases, while standards and open source organizations prioritize their initiatives and investments based upon them. ONUG also hosts user summits and smaller, regional, user-focused Fireside Chat Meet-Ups throughout the year.

ONUG defines six architectural areas that will open the networking industry and deliver choice and design options.
To enable an open networking ecosystem, a common multivendor approach is necessary for the following six architecture components:

1) Device discovery, provisioning, and asset registration for physical and virtual devices
2) Automated "no hands on keyboards" configuration and change management tools that align DevOps and NetOps
3) A common controller and control protocol for both physical and virtual devices
4) A baseline policy manager that communicates to the common controller for enforcement
5) A mechanism for sharing (communicating or consuming) network state and a unified network state database that collects, at a minimum, MAC and IP address forwarding tables automatically
6) Integrated monitoring of overlays and underlays

In the virtual network overlay framework, three main functional areas can be identified: a data plane, a control plane, and a management plane. Network overlay tunnels are part of the data plane and carry end-to-end data traffic between edge nodes connected to a common underlying layer-2/3 network infrastructure, also referred to as the underlay. Edge nodes can be physical devices or virtual networking functionality supported in software; examples are Top-of-Rack switches (ToRs), network services appliances (e.g., firewalls and load balancers), hypervisors, and OS containers. Overlay tunnels are supported via packet encapsulation techniques: at the ingress of a tunnel, traffic is encapsulated with a new additional packet header, and this header is removed at the egress end-point of the tunnel.

The control plane is responsible for placement and configuration of overlay tunnel end-points. The assumption is that control plane functionality is implemented by one or more overlay controller software components, which are deployed in some cluster and/or distributed fashion for resiliency and scalability purposes.
The control plane also provides programmatic access to upper-layer orchestration software, responsible for end-to-end provisioning across technology domains, e.g., compute, network, and storage. The management plane consists of all functions needed to configure, monitor, and troubleshoot virtual network overlays, including overlay end-points and end-to-end connectivity.

Figure 1 shows various areas in the network overlay architecture where there is a requirement for open interfaces, both for overlay provisioning and between overlay control plane components. These areas are addressed in more detail in the following sections of this document.

Problem Statement

A brief description of the problems and challenges being addressed by network virtualization and overlays is provided in the [ONUG use case whitepaper]; these were also discussed within separate user discussion groups at the ONUG May 2014 conference. The problems can be summarized as follows:

1. Network Segmentation: Multi-tenant data center networks often have a requirement for separate virtualized workloads and separate connectivity for each tenant to meet security and compliance guidelines. Using conventional centralized network security techniques to meet these workload-specific security requirements is challenging and often not feasible.

2. Workload Mobility: In order to optimize asset utilization, there is a need to seamlessly move workloads across server clusters and, this way, move workloads to different server PoDs. Certain vendor cluster solutions require layer-2 connectivity between cluster nodes, so moving workloads between clusters requires extending layer-2 domains between clusters. Extending layer-2 connectivity using conventional layer-2 VLAN configuration requires multiple network touch points, which is error prone and resource intensive.

3. Workload Migration: Physical server (rack) upgrades sometimes require temporary migration of workloads to a different server PoD, within the same or between data centers. To ensure continued reachability and operation of these workloads, there is a requirement for the workloads to keep the same IP addressing (and other network configuration). With today's conventional networking techniques, this typically involves reconfiguration in multiple parts of the data center network, which often requires manual configuration and is error prone and resource intensive.

Virtual network overlays address these problems by decoupling the network configuration of virtual workloads from the physical network and supporting automated, API-based networking configuration in server virtualization software. In addition, data traffic from workloads is encapsulated and, this way, logically separated from physical network traffic.

Virtual network overlays enable decoupling of edge networking from physical network underlays and, this way, limit or even eliminate vendor lock-in. Virtual network overlays can run over any (IP-based) physical underlay network, so end users have the ability to buy virtual network overlay and physical network underlay solutions from different vendors, if desired. In addition, virtualizing edge networking hides the large number of MAC and IP addresses consumed by VMs (Virtual Machines) or containers from the physical edge switches, which simplifies the networking (scale) requirements for these devices. This, in turn, makes it easier for end users to buy network switching hardware from any vendor, including the option to use low-cost white box switches.

Requirements

The syntax of a requirement definition in the following sections is defined as follows:

R-10 < Requirement text/definition > Priority: < High, Medium or Low >

Requirements in this document are numbered using increments of 10. Where needed, related sub-requirements are numbered using increments of 1. In addition, for each requirement, a priority is assigned using the following guidelines:

High: Functionality that must be supported at day one and is critical for baseline deployment.
Medium: Functionality that must be supported, but is not mandatory for initial baseline deployment.
Low: Desired functionality, which should be supported, but can be phased in as part of longer-term solution evolution.

Data Plane Requirements

The network overlay data plane is expected to provide seamless end-to-end connectivity among a heterogeneous set of virtual and physical end-points, which will evolve over time. End-to-end overlay connectivity scenarios include (in order of expected deployment):

Virtual-to-virtual: network overlay connectivity between virtualized host end-point instances, such as hypervisors and Linux containers, referred to as Network Virtualization Edges (NVEs).
Virtual-to-physical: network overlay connectivity between an NVE and a physical network end-point, such as a ToR switch or network service appliance.
Physical-to-physical: network overlay connectivity between physical network end-points, such as ToR switches and/or network service appliances.

Different encapsulation techniques are supported by vendor network overlay solutions at the time of this writing. In general, ONUG's preferred approach is that a single, published, agreed-upon encapsulation technique be used in network overlay solutions by both software and hardware vendors. This way, cross-vendor interoperability can be achieved as network overlay deployments evolve over time.

General Data Plane Requirements

The following general requirements are defined for network overlay packet forwarding operations:

R-10 Network overlay packet processing and forwarding at an NVE (e.g., hypervisor) should be supported via common open mechanisms defined by [Open Virtual Switch].

Network Overlay Encapsulation

The following requirements are defined for network overlay encapsulation:

R-20 CPU processing overhead on server host systems, as a result of overlay packet processing, should not exceed five percent (5%).
R-30 Any specific features required for hardware offload of network overlay packet processing (for meeting the CPU processing overhead requirements) should be clearly documented; for example, use of TSO or VXLAN offload on NICs. Similarly, any requirement for enabling or disabling certain (e.g., Linux) OS kernel modules must be clearly documented.

R-40 Characteristics of CPU processing overhead on server host systems, as a result of overlay packet processing, should be documented and made available to end users; for example, incremental CPU load as a function of overlay data traffic throughput.

R-50 Network overlay traffic encapsulation, as defined in Section 5 "VXLAN Frame Format" of [draft-mahalingam-dutt-dcops-vxlan], must be supported.

R-60 Network overlay traffic encapsulation, as defined in Section 3.2 "Network virtualization frame format" of [draft-sridharan-virtualization-nvgre], must be supported.

In addition to VXLAN and NVGRE, several other proprietary traffic encapsulation types are used by vendor solutions today. Until these alternative encapsulation types are clearly documented, publicly available, and supported across vendors, they will not be considered and/or promoted by ONUG. ONUG strongly encourages end users and vendors to push agreed-upon virtual network overlay encapsulation techniques through the various standards committees, so that these encapsulation techniques are formally ratified as approved standards.

Network Overlay Termination

The following requirements are defined for network overlay end-point termination:

R-70 The ability for virtual network overlays to terminate on a software hypervisor, with guest VMs connected to a specific virtual network overlay end-point on the hypervisor, must be supported.
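To make the R-50 encapsulation requirement concrete: VXLAN adds an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI) that separates tenant traffic. The following is a minimal sketch of packing and parsing that header, based on the frame format in the VXLAN draft; the outer UDP/IP headers and any NIC offload handling are deliberately omitted.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: 8 flag bits (0x08 = VNI-valid bit),
    24 reserved bits, a 24-bit VNI, and a final reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I-flag: the VNI field is valid
    # First 32-bit word: flags in the top byte, reserved bits zero.
    # Second 32-bit word: VNI in the top 24 bits, reserved byte zero.
    return struct.pack("!II", flags << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8
```

In a real NVE this header would be prepended to the original layer-2 frame and carried inside an outer UDP/IP packet toward the remote tunnel end-point.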
R-80 The ability for virtual network overlays to terminate on a server host OS configured with containers, with container instances connected to a specific virtual network overlay end-point, must be supported.

R-90 The ability for virtual network overlays to terminate on a physical network switch or router must be supported. The virtual network overlay end-point should be configurable, similar to virtual/logical ports.

R-100 The ability for virtual network overlays to terminate on physical network services appliances (e.g., firewalls, load balancers) must be supported. The virtual network overlay end-point should be configurable, similar to virtual/logical ports.

Network Overlay End-Point Traffic Policies

The following requirements are defined for policies to steer and manipulate traffic to and from network overlay end-points:

R-110 Virtual (e.g., hypervisor) or physical (e.g., ToR switch) edge nodes must support mapping of layer-2 traffic onto and from virtual network overlays. The latter requires the edge nodes to support layer-2 forwarding lookup and overlay header encap/decap operations.

R-120 Virtual (e.g., hypervisor) or physical (e.g., ToR switch) edge nodes must support mapping of layer-3 traffic onto and from virtual network overlays. The latter requires the edge nodes, in addition to layer-2, to support layer-3 forwarding lookup and overlay header encap/decap operations.
R-130 In addition to mapping layer-2 and layer-3 traffic onto network overlays, the ability to do layer-2 switching and layer-3 routing within a network overlay must be supported as well. The assumption is that layer-2/3 networking functionality is fully distributed across the network overlay end-points, thereby avoiding hair-pinning of traffic between subnets, where routed traffic would have to travel to a centralized gateway.

R-140 Virtual or physical edge nodes supporting network overlay end-point termination must support configurable QoS mapping(s) as part of overlay header encapsulation operations.

R-150 Virtual or physical edge nodes supporting network overlay end-point termination must support configurable access control lists for filtering ingress and egress traffic of a virtual network.

Control Plane Requirements

The network overlay control plane is expected to support placement and provisioning of overlay tunnel end-points and to apply ingress/egress traffic policies on these end-points. End-point forwarding state needs to be configured in order to direct traffic onto (on-ramp) and from (off-ramp) network overlays, which requires communication between the controller(s) and overlay end-points. In addition, exchange of forwarding state between network overlay controllers may be needed when separate network overlay controller functions are deployed for different network domains (e.g., separate data centers). Finally, virtual network overlays provide connectivity between compute, storage, and end-user applications. Coordination and placement of network connectivity among these domains typically involves higher-layer orchestration functions, which need to interact with the network overlay controller(s).
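The on-ramp/off-ramp forwarding state described above can be pictured as a simple mapping that a controller programs into each end-point: for each virtual network, which remote tunnel end-point currently hosts a given workload address. The sketch below is purely illustrative; the class and method names are assumptions, not any vendor's or protocol's API.

```python
from typing import Optional

class OverlayForwardingTable:
    """Per-end-point forwarding state a controller might program:
    (virtual network ID, workload address) -> remote tunnel end-point.
    Illustrative only; not modeled on any specific product."""

    def __init__(self) -> None:
        self._entries: dict[tuple[int, str], str] = {}

    def program(self, vni: int, workload_addr: str, remote_vtep: str) -> None:
        # "On-ramp": traffic to workload_addr in this virtual network
        # is encapsulated toward remote_vtep.
        self._entries[(vni, workload_addr)] = remote_vtep

    def lookup(self, vni: int, workload_addr: str) -> Optional[str]:
        return self._entries.get((vni, workload_addr))

    def withdraw(self, vni: int, workload_addr: str) -> None:
        # Controller removes state, e.g., after a workload migrates away.
        self._entries.pop((vni, workload_addr), None)
```

When a workload moves between server PoDs, the controller only has to re-program this mapping at the affected end-points; the underlay remains untouched, which is the decoupling the problem statement calls for.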
The various interfaces for the virtual network overlay control plane are depicted in Figure 1 and can be summarized as follows:

Controller-to-End-Point (South-Bound) interface: for configuration of overlay end-points and communication of forwarding state information that needs to be configured at overlay end-points.
Controller-to-Controller (East-West) interface: for exchange of network topology, state, and policy information, and network overlay end-point forwarding state information, in order to enable end-to-end network overlay connectivity across a single or multiple network domains.
Controller-to-Orchestration (North-Bound) interface: for orchestration systems to communicate high-level application connectivity policy information to network overlay controllers, and for network overlay controllers to expose logical network topology and policy information to north-bound orchestration systems.

Network overlay solutions available at the time of this writing support either all or a subset of the control plane interfaces listed above, and a variety of protocols/mechanisms are used to implement each of them. Since we are still in the early stages of the network overlay technology adoption cycle, it is not always possible to identify clear winners among specific technology options for network overlay control plane interfaces. Instead of specific technologies, therefore, functional requirements are defined for the various interfaces, which should be used as guidelines for development of technologies and standards related to these interfaces. As a general guideline, ONUG's preferred approach is that published, well-defined, agreed-upon (across vendors) protocol mechanisms be used for implementing the various network overlay control plane interfaces. This way, cross-vendor interoperability can be achieved, providing maximum flexibility for end users as network overlay deployments evolve over time.
For completeness, where applicable, technology options currently in use by existing network overlay solutions are mentioned in the associated requirements sections. The fact that a technology option is listed does not necessarily mean that it is endorsed by ONUG (that is, that it is an explicit ONUG requirement) at this point in time.

General Overlay Control Plane Requirements

The following general requirements are defined for a virtual network overlay control plane in a large-scale deployment scenario:
R-160 The control plane of a virtual network overlay solution must support state and policy provisioning of at least 10,000 end-points in a given (single) network domain. The assumption is that scale is addressed via either a multi-node cluster or a federated control plane architecture.

R-170 The control plane of a virtual network overlay solution must support state and policy provisioning of up to 100,000 end-points in a given (single) network domain. The assumption is that scale is addressed via either a multi-node cluster or a federated control plane architecture.

R-180 The control plane of the virtual network overlay deployment scenarios defined in R-160 and R-170 must be able to recover (re-converge) from failures within an acceptable timeframe. The assumption is that the duration of failure recovery will depend on the number of supported end-points. The worst-case scenario is assumed to be when one or more controller nodes fail or lose network connectivity and need to completely resynchronize/rebuild. In these cases, the re-converge/rebuild time should not take more than 15 minutes. The assumption is that the aforementioned control plane failures will not result in any data plane forwarding interruptions.

The following general requirements are defined for a virtual network overlay control plane in a small-to-medium scale (e.g., PoD, data center) deployment scenario:

R-190 The control plane of a virtual network overlay solution must support state and policy provisioning of at least 1,000 end-points in a given (single) network domain. The assumption is that, if needed, scale is addressed via either a multi-node cluster or a federated control plane architecture.

R-200 The control plane of the virtual network overlay deployment scenario defined in R-190 must be able to recover (re-converge) from failures within an acceptable timeframe.
The assumption is that the duration of failure recovery will depend on the number of supported end-points. The worst-case scenario is assumed to be when one or more controller nodes fail or lose network connectivity and need to completely resynchronize/rebuild. In these cases, the re-converge/rebuild time should not take more than five minutes. The assumption is that the aforementioned control plane failures will not result in any data plane forwarding interruptions.

For both large and small scale deployment scenarios, the following general virtual network overlay control plane requirements are defined:

R-210 The control plane of a virtual network overlay solution must be able to handle connectivity and reachability failures with one or more end-points (e.g., server/host resets or NIC failures). After these types of events, the control plane of a virtual network overlay solution should be able to automatically re-discover and restore the virtual network end-point state and configuration.

R-220 Failure of one or more control plane components in one given domain should not affect overall operation of the system and/or control plane components in other adjacent domains (i.e., faults should be contained in one single overlay control plane domain).

R-230 Failures in one given domain should be signaled to and detected by other adjacent domains, and may result in updates of overlay traffic forwarding policies in adjacent control plane domains.

R-240 Scale and re-convergence results should be clearly documented and accessible to end users (e.g., when an NDA is put in place).

Controller-to-End-Point (South-Bound) Interface Requirements

The following functional requirements are defined for the interface between the network overlay controller and NVEs and physical end-points:
R-250 Common mechanisms and bi-directional transport should be used for exchange of configuration and forwarding state associated with network overlay end-points.

R-260 The information syntax and semantics, together with the data transport procedures, for exchange of configuration and forwarding state associated with network overlay end-points should be clearly documented and publicly accessible.

[OVSDB] and several other open and proprietary mechanisms are currently in use by various network overlay solutions on the market today. In addition, new generic mechanisms, such as [OPFLEX], are being proposed for exchange of overlay end-point policy information. None of these mechanisms entirely meets the requirements stated above, due to either (1) lack of complete publicly available documentation (i.e., there are undocumented proprietary extensions), and/or (2) lack of common mechanisms for exchange of both state and configuration information of network overlay end-points.

Controller-to-Controller (East-West) Interface Requirements

The following functional requirements are defined for the interface between network overlay controller instances:

R-270 The ability for separate network overlay controllers, each managing a separate, non-overlapping network domain with inter-connectivity, to communicate with each other and exchange overlay end-point configuration and forwarding state information, in order to establish end-to-end virtual overlay connectivity across these separate network domains, must be supported.

R-280 The information syntax and semantics, together with the data transport procedures, for exchange of configuration and forwarding state associated with network overlay end-points between network overlay controllers must be clearly documented and publicly accessible.

A select number of network overlay solutions today support federation of controllers. Some use proprietary mechanisms and some use a combination of [BGP] plus protocol extensions.
If the BGP extensions currently in use for overlay controller federation were formally published and gained wide support from the vendor community, this would meet the functional ONUG requirements stated above.

R-290 The exchange of reachability information associated with overlay end-points across network overlay domains between federated network overlay controllers via [BGP] should be supported.

Controller-to-Orchestration (North-Bound) Interface Requirements

The following functional requirements are defined for the interface between network overlay controller(s) and upstream cross-domain orchestration systems:

R-300 Logical topology, policy, and network state information should be accessible via a RESTful API for upstream applications and orchestration systems. The assumption is that a subset of the information will be read-only and another subset will be read-write, especially for network policy configuration and control.

R-310 The information elements made available via the RESTful API for upstream applications and orchestration systems must be clearly documented, under change control, and publicly accessible.

R-320 Provisioning of virtual network topology objects via the RESTful API for upstream applications and orchestration systems must support the virtual network overlay control plane scale numbers defined by R-160, R-170, and R-190.

R-330 The overlay controller API must support a robust Role-Based Access Control (RBAC) mechanism for authentication and access control of API requests.

R-340 RBAC-based change management and auditing of provisioning requests via the north-bound API must be supported, including information and logs on what was changed and by whom, and possible validation of certain conditions after changes.
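To illustrate the kind of north-bound call R-300 and R-310 describe, the sketch below builds a JSON request body for creating a logical network, loosely modeled on the style of the OpenStack Neutron networking API. The field names are assumptions for illustration, not a statement of any controller's or of Neutron's exact schema.

```python
import json

def build_create_network_body(name: str, tenant_id: str, vni: int) -> str:
    """Assemble a hypothetical 'create logical network' request body
    that an orchestration system would POST to an overlay controller's
    RESTful north-bound API. Field names are illustrative assumptions."""
    body = {
        "network": {
            "name": name,
            "tenant_id": tenant_id,
            "admin_state_up": True,
            # e.g., the VXLAN VNI backing this logical network
            "provider:segmentation_id": vni,
        }
    }
    return json.dumps(body)
```

Under R-330/R-340, such a request would additionally carry authenticated credentials, be checked against the caller's RBAC role, and be logged for audit.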
The OpenStack [Neutron API] specification meets the functional ONUG requirements stated above and could be considered as a baseline reference for a north-bound API for network overlay controllers.

Management Plane Requirements

The following functional requirements are defined for the management of network overlay end-points on edge nodes:

R-350 Interface traffic statistics and configuration parameters, as defined in [IF-MIB], should be supported for virtual network end-points.

R-360 Administrative and status Up/Down events, as defined in [IF-MIB], should be supported for virtual network end-points.

R-370 Events related to the state (Up/Down) and status of a session between a virtual network overlay end-point and its associated controller(s) must be supported.

R-380 SNMP access to virtual network end-point interface traffic statistics, configuration parameters, and events must be supported.

R-390 API-based access to virtual network end-point interface traffic statistics, configuration parameters, and events must be supported.

R-400 Syslog access to events related to virtual network end-points and to sessions between virtual network end-points and controllers should be supported.

R-410 sFlow, IPFIX, or other flow monitoring/sampling technologies to collect ingress and egress NVE traffic statistics must be supported.

End-to-End Network Overlay Monitoring

The following functional requirements are defined for end-to-end virtual network management:

R-420 Automated reachability validation, leveraging heartbeat messages, between any given set of end-points within a given virtual network overlay must be supported. The assumption is that the frequency of heartbeat messages and the threshold number of unsuccessful heartbeats before reporting a reachability failure event are configurable.

R-430 Traffic flow monitoring between any given set of end-points within a given virtual network overlay must be supported. The assumption is that the sampling frequency is configurable.
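The configurable failure threshold in R-420 can be sketched as a small state machine: count consecutive missed heartbeats per end-point pair and emit a reachability-failure event once the threshold is hit. The class and event names below are illustrative assumptions, not a defined monitoring API.

```python
class ReachabilityMonitor:
    """Track heartbeat results per overlay end-point pair and report a
    reachability-failure event after a configurable number of consecutive
    misses, in the spirit of R-420. Names are illustrative only."""

    def __init__(self, miss_threshold: int = 3) -> None:
        self.miss_threshold = miss_threshold
        self._misses: dict[tuple[str, str], int] = {}
        self.events: list[tuple[str, str, str]] = []

    def record(self, src: str, dst: str, reachable: bool) -> None:
        key = (src, dst)
        if reachable:
            # A successful heartbeat resets the consecutive-miss counter.
            self._misses[key] = 0
            return
        self._misses[key] = self._misses.get(key, 0) + 1
        if self._misses[key] == self.miss_threshold:
            # Report exactly once when the threshold is first crossed.
            self.events.append(("reachability-failure", src, dst))
```

A deployment would drive this from a periodic heartbeat sender whose interval is also configurable, and forward the resulting events to the management plane (e.g., via syslog or the API of R-390/R-400).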
Network Overlay-Underlay Monitoring

The following functional requirements are defined for correlating physical underlay network state and events with corresponding network overlays:

R-440 The ability to correlate status and failure events in physical network underlays with network overlays must be supported.

Recommendations for Standards

For completeness, the specific areas in virtual network overlay solutions where there is a need for open, well-defined, vendor-agreed technology standards can be summarized as follows:

Virtual network controller-to-end-point interface: open south-bound interface for configuration of overlay end-points and communication of forwarding state information that needs to be configured at overlay end-points.
Virtual network controller-to-controller interface: open east-west interface for exchange of network topology, state, and policy information, and network overlay end-point forwarding state information, in order to enable end-to-end network overlay connectivity across a single or multiple network domains.
Virtual network controller-to-orchestration interface: open north-bound interface for orchestration systems to communicate high-level application connectivity policy information to network overlay controllers, and for network overlay controllers to expose logical network topology and policy information to north-bound orchestration systems.
References

[ONUG web site] - ONUG white paper, "Open Networking Challenges and Opportunities."
[Open Virtual Switch] - Apache-licensed project, Open vSwitch.
[draft-mahalingam-dutt-dcops-vxlan] - IETF Informational RFC, "VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks."
[draft-sridharan-virtualization-nvgre] - IETF Informational RFC, "NVGRE: Network Virtualization using Generic Routing Encapsulation."
[OVSDB] - IETF Informational RFC, "The Open vSwitch Database Management Protocol."
[draft-smith-opflex] - IETF Informational RFC, "OpFlex Control Protocol."
[draft-ietf-l2vpn-evpn-07] - IETF Draft RFC, "BGP MPLS Based Ethernet VPN."
[Networking API v2.0 (CURRENT)] - OpenStack Networking API specification.
[RFC2233] - IETF Draft Standard RFC, "The Interfaces Group MIB."

ONUG Overlay Working Group

Harmen Van der Linde, Co-Chairman
Carlos Matos, Co-Chairman
Mike Cohen
Kyle Forster
Piyush Gupta
Terry McGibbon
Thomas Nadeau
Neal Secher
Srini Seetharaman
Dimitri Stiliadis
Francois Tallet
Sunay Tripathi
Jaiwant Virk
Sean Wang