NoEncap: Overlay Network Virtualization with no Encapsulation Overheads


Sergey Guenender, Katherine Barabash, Yaniv Ben-Itzhak, Anna Levin, Eran Raichstein, and Liran Schour
IBM Research Lab, Haifa, Israel
{guenen, kathy, yanivb, lanna, eranra, ...}

SOSR 2015, June 17-18, 2015, Santa Clara, CA, USA. Copyright 2015 ACM.

ABSTRACT

Overlay network virtualization is quickly gaining traction in today's multi-tenant data centers due to its ability to provide independent virtual networks, at scale, along with complete isolation from the underlying physical network. Despite the benefits, performance degradation due to the imposed per-packet encapsulation overhead is a serious impediment. Mitigation approaches are mostly hardware based and thus depend on costly networking gear upgrades and suffer from lesser flexibility and longer times to market, compared to software solutions. Software optimizations proposed so far are limited in scope, applicability, and interoperability. In this paper we present NoEncap, a software-only optimization capable of almost completely eliminating the overheads while fully preserving the benefits of overlay-based network virtualization.

Categories and Subject Descriptors

C.2.0 [Computer Communication Networks]: General - Data communications

General Terms

Design, Experimentation, Optimization

Keywords

Network Virtualization, Overlay Network Virtualization, Data-Center Virtualization, SDN

1. INTRODUCTION

Modern cloud-scale data centers sustain increasingly large amounts of independent, highly dynamic, and communication-intensive workloads, consolidated over common physical infrastructure. Workload consolidation is enabled by advances in server virtualization technology, which allows running multiple independent execution environments, e.g. Virtual Machines (VMs) or containers, on a single physical host, with the additional benefits of automatic VM provisioning, VM migration between physical servers, and greater independence from the host hardware.

Traditionally, VM communication is achieved by bridging virtualized network interfaces to the underlying physical data center network (DCN) through a thin layer of host software (e.g. a Linux bridge). As consolidation levels and the amount and diversity of workloads and their communication needs increase, virtualizing the network becomes crucial to the success of cloud-scale data centers [12]. Although most acute in multi-tenant cloud-scale environments, network virtualization has been around for over a decade [7] in traditional, non-virtualized data centers, e.g. for workload isolation. The best known traditional network virtualization technology is VLAN, with its 12-bit identifier located between the Ethernet and IP headers. Being widely adopted and ubiquitously supported in networking gear, both hardware and software, VLAN was a natural first choice for isolating VM networks.
It has quickly become apparent, however, that some of the inherent limitations of VLAN technology make it unsuitable for efficient, scalable, and flexible network virtualization in large-scale multi-tenant virtualized environments, e.g. clouds. First, the 12-bit VLAN identifier limits the scalability of cloud networks to 4K isolation domains. Second, VLAN segmentation is restricted to a single L2 network and therefore cannot by itself support extension of a single virtual network across L3 boundaries. Third, the dynamic nature of today's workloads calls for a high event frequency in the VM life cycle, while VLAN configuration, traditionally fairly static and hardware-bound, does not lend itself easily to efficient automation at scale [12]. One of the solutions for the aforementioned and other limitations of the VLAN technology is overlay network virtualization, capable of completely separating virtual networks not only from each other but also from the underlying physical network. Examples of overlay-based network virtualization solutions existing today include VMware NSX, IBM DOVE and SDN VE, Midokura MidoNet, Google Andromeda, Microsoft Azure, and more. Solutions differ in their approaches to the major building blocks and compete in offered features and capabilities, while sharing a common structure with the following required constituents: an encapsulation protocol, tunnel termination devices (SW or HW), and a mechanism for VM location management (collection and dissemination). In the data plane, different encapsulation protocols are used, e.g. VXLAN [17], NVGRE [18], STT [10], and Geneve [13]. Tunnel termination can be implemented in edge switches, in virtual switches, or in add-on appliances. Location management can be centralized or distributed, SDN-based, or even delegated to underlying network services, e.g. multicast groups or EVPN can be used for managing VXLAN networks. Although highly beneficial for its capability to virtualize networks at scale, the overlay approach has its limitations, the most prominent being the need to encapsulate all the cross-server virtual network data. When done in host software, encapsulation drains server resources and reduces communication speed, impacting the end-to-end performance of the deployed applications. In addition to the direct overhead of handling the additional headers, the overlay approach stresses the network by increasing the total amount of data sent over the physical links and sometimes requires MTU reconfiguration along the path of the encapsulated traffic. More significantly, encapsulated data is opaque to existing in-network devices and services, including optimizers and offload engines like TSO, LRO, etc. In this paper, we introduce NoEncap, a completely software-based approach for mitigating encapsulation overheads. NoEncap allows us to continue enjoying all the benefits of overlay virtualization while avoiding the performance degradation for a significant subset of communication flows. In a nutshell, NoEncap trades off a major improvement in data plane performance for a minor increase in control plane complexity. The rest of the paper is structured as follows: the NoEncap solution architecture is described in Section 2; a solution implementation based on the IBM DOVE prototype is described in Section 3; evaluation results are presented and analysed in Section 4; Section 5 surveys the related work and Section 6 concludes the paper with discussion and future work directions.

2. ARCHITECTURE

Before describing the NoEncap architecture, we first outline a generic overlay-based network virtualization solution, using the framework and the terminology suggested by the NVO3 standardization group [16]. The solution provides isolated Virtual Networks (VNs) identified by their Virtual Network Contexts and comprises multiple Network Virtualization Edge (NVE) modules and a Network Virtualization Authority (NVA). NVE modules intercept traffic generated by VN clients and use encapsulation tunnels to send it over the physical network, while the NVA manages the VNs, the tunnels, and, to some extent, the NVE modules. Figure 1(a) illustrates the data flow between two VN clients, which can be VMs, bare metal servers, or containers. Data sent by VN Client_s is intercepted by its hosting NVE module, NVE_s, which inspects the packet headers and, in consultation with the NVA, resolves the VN Context and the location of the destination VN client, VN Client_d, and its hosting NVE module, NVE_d. NVE_s encapsulates the flow's packets and sends them over the physical network towards NVE_d which, in consultation with the NVA, decapsulates the packets and delivers them to VN Client_d. To minimize the number of VN Context and location resolutions, all NVE modules cache the resolved information, preferably for the entire flow duration. Cache expiration and invalidation due to virtual or physical events, e.g. VM migration, policy updates, port failover, etc., must be seamlessly supported by every specific implementation.
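To make the data path just described concrete, here is a minimal Python sketch of the send-side NVE logic with an NVA-backed resolution cache. It is our illustration, not part of any NVO3 implementation; all names (NvaClient, Resolution, and the example addresses) are hypothetical.

```python
# Sketch of a generic send-side NVE: intercept, resolve via the NVA (with a
# per-flow cache), encapsulate, and forward. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Resolution:
    vn_context: int   # e.g. a VXLAN VNID for the destination's virtual network
    dst_nve_ip: str   # physical IP of the NVE hosting the destination client

class NvaClient:
    """Stand-in for the Network Virtualization Authority lookup service."""
    def __init__(self, mappings):
        self.mappings = mappings            # (vn_id, dst_vip) -> Resolution

    def resolve(self, vn_id, dst_vip):
        return self.mappings[(vn_id, dst_vip)]

class SendSideNVE:
    def __init__(self, nva):
        self.nva = nva
        self.cache = {}                     # resolution cache, kept per flow

    def resolve(self, vn_id, dst_vip):
        key = (vn_id, dst_vip)
        if key not in self.cache:           # consult the NVA only on a miss
            self.cache[key] = self.nva.resolve(vn_id, dst_vip)
        return self.cache[key]

    def invalidate(self, vn_id, dst_vip):
        # Driven by virtual/physical events: VM migration, policy update, failover.
        self.cache.pop((vn_id, dst_vip), None)

    def forward(self, vn_id, packet):
        res = self.resolve(vn_id, packet["dst_vip"])
        outer = {"dst": res.dst_nve_ip, "vnid": res.vn_context, "inner": packet}
        return outer                        # handed off to the physical network

nva = NvaClient({(7, "10.0.0.2"): Resolution(vn_context=5001, dst_nve_ip="192.168.1.2")})
nve = SendSideNVE(nva)
print(nve.forward(7, {"dst_vip": "10.0.0.2", "payload": b"hello"}))
```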
With overlay network virtualization, data exchanged between VN clients is delivered over the physical network in fragments carrying multiple headers: inner, or virtualization-level, headers created by the VN clients, and outer, or physical-level, headers appended by the NVEs. The format and the contents of the inner headers depend on the communication protocol in use by the VN clients, while the format and the contents of the outer headers depend both on the communication protocol in use by the NVEs and on the overlay encapsulation format. Figure 1(b) presents an example of inner and outer headers for a case of VN clients communicating over the TCP/IP/Ethernet stack, with outer headers dictated by the VxLAN [17] encapsulation format. In this case, the outer headers obscure the VN-level TCP headers from the TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) engines ubiquitously deployed in host network adapters (NICs). As a result, VN-level TCP segmentation and reassembly must happen in software, leading to major performance degradation and a failure to leverage the available NIC capability. To regain the benefits of NIC-based TCP optimizations, we build upon two observations. First, IP over Ethernet is the most widespread communication service today, deployed in the majority of DCNs and expected by the majority of network clients. Second, when both the VN clients and the NVEs communicate through TCP/IP/Ethernet, the inner packet headers are structured in exactly the same way as the outer headers. With NoEncap, we exploit this property to offload entire TCP sessions from VN clients to NVEs, by replacing the contents of the inner header fields with the NVE-level data. The actual communication payload of the VN clients is therefore transmitted as if it were data communicated between the NVEs. The original VN-level header fields are entrusted to the NVA data store in exchange for a numeric key that we denote the Flow Identification Tag or, shortly, FIT. In the NoEncap architecture, the FIT is assigned at flow initiation time and included in all the packets of its flow. The mapping between the original inner headers' content and the FIT must be unique in the context of a pair of communicating NVEs and can be generated either centrally, e.g. by an SDN controller, or distributively, e.g. through direct communication between the source and the target NVEs. A small integer range is sufficient to support all the possible simultaneous TCP sessions served by a pair of NVEs, therefore the FIT can easily be accommodated in an unused outer header field of a TCP packet, e.g. in the TCP source port. Figure 1(c) presents the control and data plane information for TCP processing under NoEncap, to accompany the generic series of flow processing events in Figure 1(a). As before, data packets sent by VN Client_s are intercepted by NVE_s. In addition to resolving the VN Context and destination location, NVE_s makes (or receives) a decision whether the flow is eligible for NoEncap processing or should be handled with the encapsulation approach. Flows not chosen for NoEncap processing are treated as dictated by the overlay solution at hand. Each flow selected for NoEncap processing is assigned a FIT, currently unused in the scope of the communicating NVE pair. A mapping between the original header fields and the FIT is stored in the NVA and cached in the involved NVEs. NVE_s rewrites the original header fields to contain NVE-level addresses and to include the FIT (in an implementation-specific way, e.g. as the TCP source port) and sends the modified packet over the physical network towards NVE_d.
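A minimal Python sketch of this rewrite follows, assuming, as described above, that the FIT rides in the TCP source port and that a designated destination port marks packets as NoEncap. The table, the port number, and the field names are our hypothetical illustration, not the prototype's code.

```python
# Hypothetical sketch of the NoEncap header rewrite at the NVEs.
NOENCAP_PORT = 9999          # assumed designated NoEncap destination port

class FitTable:
    """FIT allocation and FIT -> original-headers mapping for one NVE pair."""
    def __init__(self):
        self.free = set(range(1024, 65536))   # a small integer range suffices
        self.by_fit = {}

    def allocate(self, orig_headers):
        fit = self.free.pop()                 # FIT unused in this NVE pair's scope
        self.by_fit[fit] = orig_headers       # originals entrusted to the NVA store
        return fit

    def release(self, fit):
        self.by_fit.pop(fit, None)
        self.free.add(fit)

def rewrite_for_send(pkt, fit, src_nve, dst_nve):
    # NVE_s: overwrite inner fields with NVE-level data; the payload is now
    # carried as if it were traffic between the two NVEs.
    pkt.update(eth_src=src_nve["mac"], eth_dst=dst_nve["mac"],
               ip_src=src_nve["ip"], ip_dst=dst_nve["ip"],
               tcp_src=fit, tcp_dst=NOENCAP_PORT)

def restore_on_receive(pkt, table):
    # NVE_d: the FIT travels in the TCP source port; restore cached originals.
    pkt.update(table.by_fit[pkt["tcp_src"]])

# Example: a VN client flow (10.0.0.1:5000 -> 10.0.0.2:80) offloaded to NVEs.
table = FitTable()
pkt = dict(eth_src="vmac_s", eth_dst="vmac_d", ip_src="10.0.0.1",
           ip_dst="10.0.0.2", tcp_src=5000, tcp_dst=80)
fit = table.allocate(dict(pkt))
rewrite_for_send(pkt, fit, src_nve={"mac": "pmac_s", "ip": "192.168.1.1"},
                 dst_nve={"mac": "pmac_d", "ip": "192.168.1.2"})
restore_on_receive(pkt, table)   # at NVE_d the original headers are back
```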
Upon receiving a packet, NVE_d retrieves the FIT from the source TCP port field, obtains the flow's original header fields together with the VN Context from the NVA or from its cache, restores the original packet headers, and delivers the packet to its ultimate destination, VN Client_d.

[Figure 1: Overlay Network Virtualization Architecture: (a) generalized flow of packet processing events; (b) example data plane and control plane contents for VxLAN encapsulation; (c) example data plane and control plane contents for the NoEncap optimization]

The NoEncap architecture requires implementing a method for classifying traffic as NoEncap at the receiving NVE. This can be done, for example, by designating a TCP destination port to NoEncap communications. The decision to apply NoEncap depends on the current network state, on whether a flow's SLA is met, or on whether the flow is large and long (an elephant), communication intensive, or mission critical. Architecturally, the decision maker is a separate component, while implementations may vary, entrusting the decision making to the centralized or distributed NVA, to higher-level management/orchestration engines, or even to end users, e.g. all TCP flows of a tenant paying for platinum service are eligible. The decision making can be automated, e.g. based on continuous monitoring of performance and state. In addition to selective applicability, the decision to subject a flow to NoEncap processing can be dynamically reconsidered during the flow's lifetime, so that flows can initially be handled with regular encapsulation and later upgraded to NoEncap handling, and vice versa. As with regular overlay processing, all NVEs cache resolved flow data, to be applied to further packets of already active flows. Cache expiration and invalidation rules apply to NoEncap flows similarly to regular overlay flows, with the addition of dynamic reconfiguration of the flow handling method.

3. IMPLEMENTATION

To validate the NoEncap architecture, we have prototyped it as an extension of IBM Distributed Overlay Virtual Ethernet (DOVE) [9]. In this section, we briefly describe an extensible prototype of the underlying overlay network virtualization solution, based on open components. We then present the extensions implementing the NoEncap processing and ensuring co-existence of the NoEncap and the regular VxLAN processing in the same forwarding plane. IBM DOVE is an overlay network virtualization technology combining VxLAN encapsulation in the data plane, a logically centralized controller cluster in the control plane, and a unique policy-based virtual network abstraction in the management and configuration plane. DOVE is integrated with OpenStack Neutron [5] for data-center-wide orchestration and management [8]. For easier extensibility and experimentation, we maintain a lightweight version of DOVE, created with open source components for KVM [1] based x86-64 virtualization environments. In our implementation, an unmodified Open vSwitch (OVS) [4] is used as the NVE module and the Ryu OpenFlow (OF) controller framework [6] as a base for a simplified version of the DOVE Policy Service (DPS).
This simple DPS implements both the configuration/management and the control functionality, i.e. it is responsible for maintaining the VN semantics (DOVE network contexts and policies), for distributing VM location and VN policy information among the DOVE NVE modules (OVS instances), and for controlling the OVS data processing rules. In DOVE, virtual network clients are partitioned into virtual groups. Special policies allow or forbid communication between members of any pair of groups, or force their communication to pass through one or more network services. In our minimalistic DOVE prototype, every virtual group is allocated a unique VxLAN ID.

3.1 Extending DOVE for NoEncap

We have extended the DPS to handle FIT allocation and management and to program the OVS switches to overwrite and restore packet headers using the OpenFlow (OF) protocol. Rewritten packets carry the FIT from the source to the destination OVS in the TCP source port field, while a designated TCP destination port number is used to identify packets as NoEncap. Distinguishing the regular TCP traffic targeted to VM hosts from NoEncap traffic targeted to hosted VMs at the destination is accomplished through OVS configuration and static OF rules. As shown in Figure 2, OVS is configured in a dual datapath setup with static OF rules in an additional Mux datapath, forcing NoEncap traffic to bypass the host IP stack (the overhead of using the second datapath was tested and found to be negligible). The design and implementation of the component responsible for selective application of NoEncap is out of the scope of this work. The initial prototype described here allows for plugging such a component into the DPS in the future and ensures that both the NoEncap and the encapsulated flows can co-exist and be forwarded simultaneously.
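As an illustration of the kind of rule the DPS programs into OVS, the sketch below uses the Ryu OpenFlow 1.3 API to install a send-side NoEncap rewrite. It is our sketch, not the prototype's actual rules; all concrete addresses, ports, and the FIT value are hypothetical placeholders, and the function is meant to be called from within a Ryu application once a datapath (switch connection) is available.

```python
# Minimal Ryu (OpenFlow 1.3) sketch of a NoEncap send-side rewrite rule.
# All concrete values below are hypothetical placeholders.

def install_noencap_rewrite(datapath, out_port):
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser

    # Match the VM flow by its original (virtual) 5-tuple.
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                            ipv4_src='10.0.0.1', ipv4_dst='10.0.0.2',
                            tcp_src=5000, tcp_dst=80)

    fit = 4242            # FIT allocated by the DPS for this flow
    actions = [
        parser.OFPActionSetField(eth_src='aa:bb:cc:00:00:01'),  # source NVE MAC
        parser.OFPActionSetField(eth_dst='aa:bb:cc:00:00:02'),  # dest NVE MAC
        parser.OFPActionSetField(ipv4_src='192.168.1.1'),       # source NVE IP
        parser.OFPActionSetField(ipv4_dst='192.168.1.2'),       # dest NVE IP
        parser.OFPActionSetField(tcp_src=fit),    # FIT rides in the source port
        parser.OFPActionSetField(tcp_dst=9999),   # designated NoEncap port
        parser.OFPActionOutput(out_port),
    ]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=100,
                                        match=match, instructions=inst))
```

A symmetric rule at the destination OVS would match the designated destination port together with the FIT in the source port and set the fields back to the cached originals before delivery to the VM.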

[Figure 2: OVS setup for NoEncap - Hosts A and B each run VMs (VM1..VM4, vNICs with MTU_vnic) attached to a DOVE OVS datapath; a Mux OVS datapath in front of the physical NIC (MTU_Phy) carries NoEncap traffic directly, while VxLAN traffic passes through the host's VxLAN UDP socket; control and data paths shown separately]

4. EVALUATION

We evaluate the NoEncap implementation described in Section 3 through a series of experiments. All experiments share the same environment, consisting of a pair of IBM System x3550 M4 servers connected back-to-back with a 40GbE link. Each server is equipped with 197GB of RAM, two Xeon(R) CPUs (12 cores, HyperThreading disabled), and a Mellanox ConnectX-3 40GbE network card capable of VxLAN offload. Physical NICs are pinned to dedicated CPU cores (the first 3 cores on each server) and configured with a 1500-byte MTU_Phy. Both servers have BIOS settings set for maximum performance, the irqbalance service turned off, the vhost-net module loaded at boot time, and TCP/IP stacks fine-tuned for achieving the highest network performance. For virtualization, each server runs KVM and libvirt [2] and hosts four VMs, each provisioned with two dedicated cores, 1GB RAM, a virtual NIC connected to the management network, and a virtual NIC connected to the DOVE OVS datapath. A schematic representation of the experimental environment is shown in Figure 2.

We have configured the experimental environment into four distinct setups. For each setup, we have run a series of tests to collect statistically solid numbers for the key network performance indicators (KPIs): throughput, latency, and efficiency. The following four experiment setups are defined and compared. The SwVxLAN setup represents an unoptimized overlay network virtualization solution where all the VM communications are subject to VxLAN processing. The NoEncap setup represents the most active usage of NoEncap, where all the VMs' TCP communications are subject to NoEncap processing, while the rest of the traffic is processed with VxLAN. The HwVxLAN setup represents the HW offload solution, where all the VxLAN hardware acceleration capabilities are enabled in Mellanox ConnectX-3 PRO network cards. The Native setup is used as the upper performance bound; here, VMs are bridged to the physical network and experience no encapsulation overhead. In order to accommodate the encapsulation headers, guest machines have their MTU_vnic reduced to 1450 bytes in the SwVxLAN, NoEncap, and HwVxLAN setups. For the remaining Native setup, a 1500-byte MTU_vnic is used (a relative advantage for this setup). For all the KPIs we evaluated, the differences between the HwVxLAN and Native setups were negligible; therefore, in what follows we, for the sake of brevity, present Native and omit HwVxLAN results.

4.1 Evaluation results and analysis

For all the experiments, we generate traffic between pairs of VMs such that the source and the destination VM reside on different hosts, e.g., VM1_A in Figure 2 talks to VM1_B. The results are obtained for runs with one to four VM pairs communicating simultaneously.

Throughput. In order to evaluate the achieved aggregated throughput, we run four TCP sessions between each pair of VMs (beyond this number the CPU becomes the bottleneck), using iperf, and measure the results over the physical host's NIC with ifstat. As shown in Figure 3, NoEncap achieves approximately the same aggregated throughput as Native. In addition, while both NoEncap and Native reach the physical line rate (40Gbps) for three and four VM pairs, SwVxLAN is not capable of saturating the link or scaling with the number of communication flows.

[Figure 3: Aggregated host throughput [Gbps]]
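A measurement loop of this kind can be sketched as follows; this is our reconstruction with standard iperf/ifstat/vmstat invocations, placeholder VM addresses, and the assumption that iperf servers are already running in the destination VMs. It also computes the efficiency KPI discussed next, i.e. throughput normalized by total CPU load.

```python
# Hypothetical harness sketch for the throughput and efficiency KPIs: run
# parallel iperf sessions between VM pairs, sample NIC throughput with ifstat
# and CPU load with vmstat, then normalize. Hosts and NIC name are placeholders.
import subprocess
import statistics

VM_PAIRS = [("vm1a", "vm1b"), ("vm2a", "vm2b")]   # placeholder VM addresses
SESSIONS_PER_PAIR = 4                             # CPU-bound beyond four

def run_trial(duration=60):
    clients = [
        subprocess.Popen(["iperf", "-c", dst, "-P", str(SESSIONS_PER_PAIR),
                          "-t", str(duration)], stdout=subprocess.DEVNULL)
        for _, dst in VM_PAIRS
    ]
    # ifstat prints one line per second with per-NIC in/out KB/s;
    # vmstat prints one line per second with CPU idle in the 15th column.
    ifstat = subprocess.run(["ifstat", "-i", "eth0", "1", str(duration)],
                            capture_output=True, text=True).stdout
    vmstat = subprocess.run(["vmstat", "1", str(duration)],
                            capture_output=True, text=True).stdout
    for c in clients:
        c.wait()
    tx_kbps = [float(l.split()[1]) for l in ifstat.splitlines()[2:] if l.strip()]
    idle = [int(l.split()[14]) for l in vmstat.splitlines()
            if l.split() and l.split()[0].isdigit()]
    gbps = statistics.mean(tx_kbps) * 8 / 1e6     # KB/s -> Gbps
    cpu_load = 100 - statistics.mean(idle)
    return gbps, gbps / cpu_load                  # throughput, efficiency
```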
Efficiency. To evaluate efficiency, we normalize the achieved aggregated throughput by the total CPU load measured by vmstat running on the hosts. Figures 4a and 4b present sender and receiver efficiency, indicating that SwVxLAN is significantly less efficient than NoEncap and Native on both sides. It can be seen that Native is more efficient at sending while NoEncap at receiving, with the NoEncap position improving with load. Having run a separate set of tests with mpstat collecting detailed CPU load statistics for irq, sys, and guest CPU time per core, we attribute this tendency to the hashing algorithm used by the network cards, which favors higher entropy in packet headers for a growing number of simultaneous streams.

Latency. To measure latency, we run five consecutive netperf TCP_RR tests, each for 120 seconds, trimming the first and the last 20 seconds to eliminate the measurement inaccuracies of TCP connection establishment and termination. The round-trip-time (RTT) results, averaged over all the VMs, are presented in Figure 5. It can be seen that Native achieves the lowest RTT. The RTT difference between Native and NoEncap varies from 1.5 µseconds to 2.3 µseconds for four VM pairs. The RTT is highest for SwVxLAN, as in that setup the packets are processed by the host's IP stack before being forwarded to the DOVE OVS. In NoEncap, on the other hand, the packets are processed only by the host MAC layer, resulting in an RTT on average 13.2 µseconds shorter than in SwVxLAN.

[Figure 4: Sender and receiver efficiency for a single VM pair ([Gbps]/CPU Load[%]): (a) sender; (b) receiver]

In summary, our evaluation demonstrates that NoEncap offers significantly better performance than software VxLAN in terms of throughput, efficiency, and latency. NoEncap performance is very close to both the Native and the HwVxLAN setups, while not having their drawbacks, e.g. HW dependency for HwVxLAN and lack of virtualization for Native.

5. RELATED WORK

NoEncap is not the first solution for eliminating the performance degradation of overlay virtualization while retaining its highly prized benefits. All the proposed solutions can be roughly subdivided into the hardware-based (HW) and the software-based (SW) camps. As overlay network virtualization gains popularity, more and more HW vendors offer virtualization support in their gear, both in NICs and in edge switches. In the simplest case, the hardware forwarding pipeline is upgraded to recognise the encapsulation, skip it, and apply optimizations like TSO. In more advanced cases, HW can fully take over the tunnel termination functionality. For instance, both Intel and Mellanox have introduced server adapters capable of VXLAN and NVGRE offloading [3], while all the major switch vendors support advanced VxLAN processing in their switches. Although promising, the HW-based approach has its drawbacks, namely: the need to replace existing gear with new, potentially more expensive equipment; longer development cycles for new features, e.g. even small changes in encapsulation format; more complex HW that often requires more tweaking and tuning to achieve optimal performance; and an increased risk of vendor dependency and lock-in. Software solutions are more flexible and do not have most of the drawbacks associated with hardware. In most situations, SW flexibility and extensibility pose serious competition to the additional performance improvement offered by special-purpose HW. In what follows, we overview the most prominent software optimization approaches and compare them to NoEncap. Like NoEncap, the STT protocol [10] was designed to regain the TCP segmentation offload benefits. STT is an IP-based encapsulation utilizing a TCP-like header over IP which does not include any TCP connection state associated with the tunnel, so that the loss of a single segment can potentially lead to retransmission of a whole 64K packet. With NoEncap, the whole TCP session is offloaded to the host and packets are sent with real, slightly modified, TCP headers, a significant advantage of NoEncap over STT in lossy environments. As with almost any encapsulation approach, the additional headers imposed by STT complicate and sometimes neutralize traffic processing by middleboxes, such as firewalls, SSL accelerators, etc. Without implementing encapsulation-specific capabilities or deploying translation gateways, middleboxes are likely to drop encapsulated traffic. NoEncap is more middlebox-friendly, as it is only required to obtain (and cache) the original TCP header fields from a pre-configured database accessible by every switch, hypervisor, or middlebox. An additional benefit can come from combining NoEncap with novel middlebox technologies like FlowTags [11], e.g. by reusing the FIT to deliver session-specific information to FlowTags-enabled middleboxes.
An additional SW optimization approach for overlay network virtualization was proposed by Kawashima et al. in [14] and evaluated in [15]. Based on address translation, this solution maps VM MAC addresses to MAC addresses of the physical hosts, and performs the MAC address replacement using the OpenFlow protocol and a centralized OpenFlow controller. The authors report a performance improvement of almost four times, as compared to the VxLAN and GRE approaches. Compared to NoEncap, this approach is limited, as it is scoped to a single L2 domain, while NoEncap is based on L3 address translation, providing more flexibility. Additional advantages of NoEncap over other known software solutions are its selective and dynamic applicability and its ability to coexist seamlessly with overlay processing. These capabilities open up additional opportunities for network-wide decision making and optimization, and ease its adoption.

[Figure 5: Latency [µsec]]

6. DISCUSSION AND FUTURE WORK

In this section we briefly survey features that open up further research questions or that have the potential, if thoroughly investigated, to strengthen the solution itself and the environment where it is deployed.

Control plane complexity. Due to space limitations, we did not discuss the control plane aspects of NoEncap at length and only mentioned that in NoEncap, the improvement in data plane performance comes at the cost of added control plane complexity. Acknowledging this, we envision extending NoEncap in two complementary directions: first, improving scale through smart distributed data management techniques; second, interacting with DC-wide operations to scope the application of NoEncap to where it is most beneficial.

Security. The security implications of NoEncap must be further investigated, even if its DCN scope limits the potential attack surface. A DoS attack on the FIT allocator seems an immediate threat requiring attention.

IP fragmentation support. Although rare in DC environments, non-uniform MTU settings along the communication path can potentially cause packets to be fragmented in transfer. This can break the communications since, in the current design, the destination NVE might not always be able to retrieve the FIT from the TCP port fields and will have to drop VM packets. Adding fragmentation support (or avoidance) is an important improvement direction.

Service chaining support. The ability of a network virtualization solution to co-exist with different types of service appliances, and the ways they are used, is one of its strongest metrics. This includes dedicated hardware appliances, virtual appliances (NFV), distributed network services, and service chains. Although we have not explored this fully yet (see also Section 5), we observe that NoEncap has a relative advantage, as it avoids decapsulating and re-encapsulating packets at each hop along the service chain.

Cost vs. usability trade-off. NoEncap is a SW-only, highly efficient add-on to edge-based overlay network virtualization. A possible alternative to edge-based overlay network virtualization is to design, build, and deploy infrastructure-based network virtualization (to be differentiated from offloading or accelerating SW-based approaches with HW, as described in Section 5). While highly attractive in terms of performance, infrastructure-based solutions are tightly coupled with HW, leading to higher costs, slower innovation cycles, and a risk of vendor lock-in. It is therefore worth exploring HW assists for edge-based, often SW, solutions, where NoEncap presents strong immediate business value for data centers and, as prices drop, allows smoothing the transition from the current, overlay-unaware, networking gear to a new equipment generation.

7. ACKNOWLEDGEMENTS

The research leading to the results published in this paper is partially supported by the European Community's Seventh Framework Programme (FP7/2007-2013) in the context of the COSIGN Project.

8. REFERENCES

[1] KVM: Kernel Virtual Machine.
[2] libvirt: The virtualization API.
[3] Mellanox ConnectX-3 Pro. adapter_cards/pb_connectx-3_pro_card_en.pdf.
[4] Open vSwitch.
[5] OpenStack Neutron.
[6] Ryu.
[7] N. M. K. Chowdhury and R. Boutaba. A survey of network virtualization. Comput. Netw., 54(5), Apr. 2010.
[8] R. Cohen, K. Barabash, and L. Schour. Distributed Overlay Virtual Ethernet (DOVE) integration with OpenStack. In IM, 2013.
[9] R. Cohen et al. An intent-based approach for network virtualization. In 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013), Ghent, Belgium, May 27-31, 2013.
[10] B. Davie and J. Gross. A stateless transport tunneling protocol for network virtualization (STT). IETF draft.
[11] S. K. Fayazbakhsh, V. Sekar, M. Yu, and J. C. Mogul. FlowTags: Enforcing network-wide policies in the presence of dynamic middlebox actions. In Proceedings of the Second ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking, HotSDN '13, pages 19-24, New York, NY, USA, 2013. ACM.
[12] A. Greenberg, J. Hamilton, D. A. Maltz, and P. Patel. The cost of a cloud: Research problems in data center networks. SIGCOMM Comput. Commun. Rev., 39(1):68-73, Dec. 2008.
[13] J. Gross, T. Sridhar, P. Garg, C. Wright, and I. Ganga. Geneve: Generic network virtualization encapsulation. IETF draft, 2014.
[14] R. Kawashima and H. Matsuo. Non-tunneling edge-overlay model using OpenFlow for cloud datacenter networks. In Cloud Computing Technology and Science (CloudCom), 2013 IEEE 5th International Conference on, volume 2. IEEE, 2013.
[15] R. Kawashima and H. Matsuo. Performance evaluation of non-tunneling edge-overlay model on 40GbE environment. In Network Cloud Computing and Applications (NCCA), 2014 IEEE 3rd Symposium on. IEEE, 2014.
[16] M. Lasserre, F. Balus, T. Morin, N. Bitar, and Y. Rekhter. Framework for DC network virtualization. IETF draft, 2014.
[17] T. Sridhar, L. Kreeger, D. Dutt, C. Wright, M. Bursell, M. Mahalingam, P. Agarwal, and K. Duda. VXLAN: A framework for overlaying virtualized layer 2 networks over layer 3 networks. IETF draft, 2014.
[18] M. Sridharan, K. Duda, I. Ganga, A. Greenberg, G. Lin, M. Pearson, P. Thaler, C. Tumuluri, N. Venkataramiah, and Y. Wang. NVGRE: Network virtualization using generic routing encapsulation. IETF draft, 2011.


More information

Network Technologies for Next-generation Data Centers

Network Technologies for Next-generation Data Centers Network Technologies for Next-generation Data Centers SDN-VE: Software Defined Networking for Virtual Environment Rami Cohen, IBM Haifa Research Lab September 2013 Data Center Network Defining and deploying

More information

Multitenancy Options in Brocade VCS Fabrics

Multitenancy Options in Brocade VCS Fabrics WHITE PAPER DATA CENTER Multitenancy Options in Brocade VCS Fabrics As cloud environments reach mainstream adoption, achieving scalable network segmentation takes on new urgency to support multitenancy.

More information

Why Software Defined Networking (SDN)? Boyan Sotirov

Why Software Defined Networking (SDN)? Boyan Sotirov Why Software Defined Networking (SDN)? Boyan Sotirov Agenda Current State of Networking Why What How When 2 Conventional Networking Many complex functions embedded into the infrastructure OSPF, BGP, Multicast,

More information

How To Orchestrate The Clouddusing Network With Andn

How To Orchestrate The Clouddusing Network With Andn ORCHESTRATING THE CLOUD USING SDN Joerg Ammon Systems Engineer Service Provider 2013-09-10 2013 Brocade Communications Systems, Inc. Company Proprietary Information 1 SDN Update -

More information

Network Function Virtualization Using Data Plane Developer s Kit

Network Function Virtualization Using Data Plane Developer s Kit Network Function Virtualization Using Enabling 25GbE to 100GbE Virtual Network Functions with QLogic FastLinQ Intelligent Ethernet Adapters DPDK addresses key scalability issues of NFV workloads QLogic

More information

Assessing the Performance of Virtualization Technologies for NFV: a Preliminary Benchmarking

Assessing the Performance of Virtualization Technologies for NFV: a Preliminary Benchmarking Assessing the Performance of Virtualization Technologies for NFV: a Preliminary Benchmarking Roberto Bonafiglia, Ivano Cerrato, Francesco Ciaccia, Mario Nemirovsky, Fulvio Risso Politecnico di Torino,

More information

I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology

I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology Reduce I/O cost and power by 40 50% Reduce I/O real estate needs in blade servers through consolidation Maintain

More information

Foundation for High-Performance, Open and Flexible Software and Services in the Carrier Network. Sandeep Shah Director, Systems Architecture EZchip

Foundation for High-Performance, Open and Flexible Software and Services in the Carrier Network. Sandeep Shah Director, Systems Architecture EZchip Foundation for High-Performance, Open and Flexible Software and Services in the Carrier Network Sandeep Shah Director, Systems Architecture EZchip Linley Carrier Conference June 10, 2015 1 EZchip Overview

More information

OpenFlow and Onix. OpenFlow: Enabling Innovation in Campus Networks. The Problem. We also want. How to run experiments in campus networks?

OpenFlow and Onix. OpenFlow: Enabling Innovation in Campus Networks. The Problem. We also want. How to run experiments in campus networks? OpenFlow and Onix Bowei Xu boweixu@umich.edu [1] McKeown et al., "OpenFlow: Enabling Innovation in Campus Networks," ACM SIGCOMM CCR, 38(2):69-74, Apr. 2008. [2] Koponen et al., "Onix: a Distributed Control

More information

Network Virtualization and Software-defined Networking. Chris Wright and Thomas Graf Red Hat June 14, 2013

Network Virtualization and Software-defined Networking. Chris Wright and Thomas Graf Red Hat June 14, 2013 Network Virtualization and Software-defined Networking Chris Wright and Thomas Graf Red Hat June 14, 2013 Agenda Problem Statement Definitions Solutions She can't take much more of this, captain! Challenges

More information

OpenFlow with Intel 82599. Voravit Tanyingyong, Markus Hidell, Peter Sjödin

OpenFlow with Intel 82599. Voravit Tanyingyong, Markus Hidell, Peter Sjödin OpenFlow with Intel 82599 Voravit Tanyingyong, Markus Hidell, Peter Sjödin Outline Background Goal Design Experiment and Evaluation Conclusion OpenFlow SW HW Open up commercial network hardware for experiment

More information

Linux KVM Virtual Traffic Monitoring

Linux KVM Virtual Traffic Monitoring Linux KVM Virtual Traffic Monitoring East-West traffic visibility Scott Harvey Director of Engineering October 7th, 2015 apcon.com Speaker Bio Scott Harvey Director of Engineering at APCON Responsible

More information

VMware vcloud Networking and Security Overview

VMware vcloud Networking and Security Overview VMware vcloud Networking and Security Overview Networks and Security for Virtualized Compute Environments WHITE PAPER Overview Organizations worldwide have gained significant efficiency and flexibility

More information

RCL: Software Prototype

RCL: Software Prototype Business Continuity as a Service ICT FP7-609828 RCL: Software Prototype D3.2.1 June 2014 Document Information Scheduled delivery 30.06.2014 Actual delivery 30.06.2014 Version 1.0 Responsible Partner IBM

More information

OpenDaylight Network Virtualization and its Future Direction

OpenDaylight Network Virtualization and its Future Direction OpenDaylight Network Virtualization and its Future Direction May 20, 2014 Masashi Kudo NEC Corporation Table of Contents SDN Market Overview OpenDaylight Topics Network Virtualization Virtual Tenant Network

More information

Network Performance Comparison of Multiple Virtual Machines

Network Performance Comparison of Multiple Virtual Machines Network Performance Comparison of Multiple Virtual Machines Alexander Bogdanov 1 1 Institute forhigh-performance computing and the integrated systems, e-mail: bogdanov@csa.ru, Saint-Petersburg, Russia

More information

The Road to SDN: Software-Based Networking and Security from Brocade

The Road to SDN: Software-Based Networking and Security from Brocade WHITE PAPER www.brocade.com SOFTWARE NETWORKING The Road to SDN: Software-Based Networking and Security from Brocade Software-Defined Networking (SDN) presents a new approach to rapidly introducing network

More information

Software-Defined Network (SDN) & Network Function Virtualization (NFV) Po-Ching Lin Dept. CSIE, National Chung Cheng University

Software-Defined Network (SDN) & Network Function Virtualization (NFV) Po-Ching Lin Dept. CSIE, National Chung Cheng University Software-Defined Network (SDN) & Network Function Virtualization (NFV) Po-Ching Lin Dept. CSIE, National Chung Cheng University Transition to NFV Cost of deploying network functions: Operating expense

More information

VXLAN, Enhancements, and Network Integration

VXLAN, Enhancements, and Network Integration VXLAN, Enhancements, and Network Integration Apricot 2014 - Malaysia Eddie Parra Principal Engineer, Juniper Networks Router Business Unit (RBU) eparra@juniper.net Legal Disclaimer: This statement of product

More information

CLOUD NETWORKING THE NEXT CHAPTER FLORIN BALUS

CLOUD NETWORKING THE NEXT CHAPTER FLORIN BALUS CLOUD NETWORKING THE NEXT CHAPTER FLORIN BALUS COMMON APPLICATION VIEW OF THE NETWORK Fallacies of Distributed Computing 1. The network is reliable. 2. Latency is zero. 3. Bandwidth is infinite. 4. The

More information