ConnectX-3 Pro: Solving the NVGRE Performance Challenge
WHITE PAPER, October 2013

Contents: Objective; Background: The Need for Virtualized Overlay Networks; NVGRE Technology; NVGRE's Hidden Challenge; ConnectX-3 Pro Solves the Performance Challenge; Summary; About Mellanox; Works Cited

Objective

This white paper presents an overview of the rising need for overlay virtualized networks and the solution that NVGRE technology provides to meet this need. It also examines the performance challenges associated with NVGRE and presents Mellanox's new solution for resolving them.

Background: The Need for Virtualized Overlay Networks

Over the past few years, cloud computing has grown at a tremendous rate, with worldwide spending on IT cloud services tripling since 2008 and expected to reach over $100 billion. 1 More specifically, Infrastructure as a Service (IaaS) is the fastest-growing segment of the public cloud services market, having grown over 45% in 2012 alone. 2 IaaS allows multiple tenants to share system resources and infrastructure, which improves hardware utilization and thereby reduces the cost of the IT infrastructure, both at implementation and on an ongoing basis. Cloud computing also provides a measure of agility that simplifies the IT management process, provides additional control over proprietary data, and improves the end-user experience.

The IaaS segment of cloud computing is based on the concept of multiple tenants sharing the cloud infrastructure, enabled primarily by server virtualization. IaaS offers the following additional benefits to its consumers:

- Allows a company's IT department to focus on core competencies instead of assembling and maintaining network infrastructure
- Enables dynamic scaling of infrastructure services based on usage demands
- Reduces upfront investment costs, and limits ongoing costs to OPEX only
- Provides access to the infrastructure from anywhere in the world
IaaS providers need to support multiple tenants on their datacenter infrastructure, creating the need to isolate each tenant within the network so as to provide the security and traffic isolation that independent infrastructure would. This has typically been achieved through the use of VLANs, which segment the network into virtual network entities to provide security and network traffic control.

As the demand for cloud services continues to grow, many large consumers have outgrown the most basic solutions. For example, VLAN usage is limited to 4,096 entities (VLAN IDs), which, given the tremendous growth in cloud-based networks, is far from sufficient segmentation. A more scalable solution has become a necessity.

A number of solutions exist to address these issues, each with its own advantages and disadvantages. Technologies such as stacked VLANs (known as Q-in-Q) or MAC-in-MAC try to address the scalability concern by multiplying the number of possible VLAN IDs or MAC addresses in use. Such technologies, however, remain in the Layer 2 (L2) domain, which means there are inherent challenges in scaling to a high number of entities. For example, Spanning Tree (STP), Rapid Spanning Tree (RSTP), or Multiple Spanning Tree (MSTP) is typically used to prevent loops, but half the connections in the tree are reserved (standing by) for link failures. As a result, the network structure remains cumbersome. An additional challenge in using new VLAN or MAC addresses stems from the need for configuration changes to the existing Layer-2 infrastructure, which complicates Virtual Machine (VM) and workload migration from server to server in the virtual domain.

As demonstrated in Figure 1, an approach that successfully addresses these challenges is to create a virtual network that transports data across the existing Layer-3 (IP) infrastructure.
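The 4,096-entity ceiling comes directly from the 802.1Q tag format: the VLAN ID is a 12-bit field, and two of its values are reserved. A quick sketch of the arithmetic:

```python
# The 802.1Q VLAN ID (VID) is a 12-bit field inside the 4-byte VLAN tag.
vid_bits = 12
total_ids = 2 ** vid_bits          # 4096 possible values
usable_ids = total_ids - 2         # 0x000 and 0xFFF are reserved -> 4094 usable
print(total_ids, usable_ids)       # 4096 4094
```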
A Layer-2 virtual overlay scheme builds a virtual L2 network on top of multiple L3 subnetworks, using GRE tunnels between virtual machines (VMs) that reside in separate networks yet operate as if they were attached to the same L2 subnet. By adding this L2 virtual overlay on top of the L3 network, administrators are able to scale their cloud-based services without having to significantly reconfigure or add to the existing infrastructure.

Figure 1. Network Virtualization Before and After Overlay Network

NVGRE Technology

In order for an overlay network to be of any use, a technology is required to encapsulate Layer-2 data such that it can be tunneled across Layer 3. One leading technology that provides this encapsulation, and that aims to resolve both the security and the scalability issues, is Network Virtualization using Generic Routing Encapsulation (NVGRE). NVGRE provides a solution for stretching the L2 network over the L3 IP network.
Figure 2. NVGRE Encapsulation

The NVGRE concept is based on a new encapsulation for VM traffic, in which a tunnel for the VM traffic is created using the Generic Routing Encapsulation (GRE) tunneling protocol. As Figure 2 shows, NVGRE encapsulates the VM's Layer-2 (Ethernet) traffic with new MAC and IP headers, and it adds an NVGRE header that includes a Tenant Network Identifier (TNI), a 24-bit identifier that radically extends the address space from the VLANs' 4,094 segments up to 16.7 million available IDs, solving the scalability issue. The encapsulation consists of:

- An outer MAC address, which provides the physical destination and source addresses of the hypervisors or of intermediate L3 routers
- An optional 802.1Q tag to further delineate NVGRE traffic on the LAN
- Outer IP addresses, which are the IP addresses assigned to the hypervisors that are communicating over L3
- A GRE header that carries the TNI. Each TNI is associated with a specific GRE tunnel that identifies the tenant's unique virtual subnet within the cloud.

The encapsulation and decapsulation process is handled within the hypervisor, which connects the virtual machines to the IP network. The hypervisor thus ensures that the virtual machines themselves remain completely unaware of the GRE protocol used to communicate between them.

Multicasting is yet another benefit of transporting messages via L3, as opposed to L2, which only offers broadcasting. The hypervisor determines whether the communicating virtual machines are within the same multicast group and thereby whether unicast or IP multicast is required. The hypervisor is able to differentiate between individual logical networks and to associate new virtual machines with the appropriate multicast groups.

NVGRE is truly a hybrid solution, combining the benefits of L2, such as the ability to shift the location of VMs to maximize the efficiency of the datacenter, with the scalability realized by transporting via L3.
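The GRE header NVGRE adds can be sketched in a few lines. This is an illustrative reconstruction of the wire format later standardized in RFC 7637 (key-present bit set, EtherType 0x6558 for Transparent Ethernet Bridging, and a 32-bit key carrying the 24-bit TNI plus an 8-bit flow ID), not code from the paper:

```python
import struct

def nvgre_header(tni: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header used by NVGRE:
    flags with the K (key present) bit set, protocol type 0x6558
    (Transparent Ethernet Bridging), and a 32-bit key field holding
    the 24-bit TNI in its upper bits plus an 8-bit flow ID."""
    assert 0 <= tni < 2 ** 24, "TNI is a 24-bit identifier"
    flags = 0x2000                       # K bit set; C and S clear; version 0
    proto = 0x6558                       # inner payload is an Ethernet frame
    key = (tni << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags, proto, key)

# Full frame on the wire:
#   outer Ethernet | outer IP | GRE header (TNI) | original inner Ethernet frame
print(nvgre_header(tni=0xABCDEF, flow_id=1).hex())  # 20006558abcdef01
```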
Considering its ability to address the challenges of VLAN grouping with a virtualization solution, NVGRE is considered an ideal technology for network administrators working within a cloud computing environment.

NVGRE's Hidden Challenge

Although NVGRE solves the scalability and security concerns, and does so at very little monetary expense, there are two major concerns that can impact IaaS performance:

1. In a traditional Layer-2 network, sizeable processing savings are realized by using CPU offloads. In an NVGRE setup, however, the classical offloading capabilities of the network interface controller (NIC), such as checksum offloading and large send offload (LSO), cannot be used, because the inner packet is no longer accessible due to the added layer of encapsulation. As such, additional CPU resources are required to perform tasks that would previously have been handled more efficiently by the NIC.
2. NVGRE restricts the ability to use Receive Side Scaling (RSS) to distribute traffic across multiple cores based on the inner packet, resulting in a severe reduction in bandwidth.

This unanticipated degradation in performance and increase in CPU overhead significantly offsets the many benefits of using NVGRE.
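The RSS problem can be made concrete with a small sketch. RSS selects a receive queue (and hence a CPU core) by hashing packet header fields; a NIC that sees only the outer NVGRE headers hashes every tenant flow between the same two hypervisors to the same queue. The hash below is a stand-in for the NIC's real Toeplitz hash, and the addresses are invented for illustration:

```python
NUM_QUEUES = 8  # assumed number of NIC receive queues, one per core

def rss_queue(fields) -> int:
    # Stand-in for the NIC's Toeplitz hash over header fields.
    return hash(fields) % NUM_QUEUES

outer = ("10.0.0.1", "10.0.0.2")   # outer IPs: hypervisor to hypervisor
inner_flows = [("192.168.1.%d" % i, "192.168.2.%d" % i, 5000 + i)
               for i in range(16)]  # 16 distinct tenant flows inside the tunnel

# Outer headers are identical for every flow -> one queue, one busy core.
outer_queues = {rss_queue(outer) for _ in inner_flows}
# Hashing on the inner headers spreads the same flows across queues/cores.
inner_queues = {rss_queue(outer + f) for f in inner_flows}
print(len(outer_queues))  # 1
print(len(inner_queues))
```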
ConnectX-3 Pro Solves the Performance Challenge

While NVGRE offers significant benefits in terms of the scalability and security of virtualized networks, it is far less valuable unless it maintains the availability of the CPU and a similar rate of throughput. However, because NVGRE uses an additional encapsulation layer, it requires a high level of CPU usage and prevents the traditional offloads from reducing that workload. Throughput is also affected by this unfortunate and unintended consequence.

In order for NVGRE to be of real value, the extra CPU overhead it creates must be eliminated. This can be achieved by supporting all existing hardware offloads in the network controller. This includes:

- Allowing checksum offload to be performed on both the outer and inner headers
- Performing large send offload (LSO)
- Handling NetQueue to ensure that virtual machine traffic is distributed between different CPU cores in the most efficient manner

Mellanox's ConnectX-3 Pro is the only NIC that currently handles the traditional offloads despite the extra encapsulation layer. Its impact on the CPU workload is dramatic. As Figure 3 shows, the CPU is freed up by as much as 73.9% when sending 64KB messages, thanks to the hardware offloading performed by ConnectX-3 Pro.

Figure 3. NVGRE Send Offload

While ConnectX-3 Pro's reduction of CPU overhead in receive operations is lower, it is not insignificant, lessening the strain by up to 26%, as seen in Figure 4.

Figure 4. NVGRE Receive Offload
If one assumes that for every send operation there is a receive operation and vice versa, then it is fair to assume that a straight average (send plus receive, divided by two) will give the overall impact of ConnectX-3 Pro on CPU overhead. As Figure 5 illustrates, the average savings in CPU overhead is between 37% and 47% when using NVGRE with ConnectX-3 Pro versus using NVGRE with other NICs.

Figure 5. NVGRE Average Total Offload

Furthermore, ConnectX-3 Pro enables RSS to steer and distribute traffic based on the inner packet, and not, as with other NICs, only the outer packet. The primary benefit is that it allows multiple cores to handle traffic, which in turn enables the interconnect to run at the intended line-rate bandwidth rather than at the reduced bandwidth that occurs when NVGRE is used without RSS.

Figure 6. Throughput With and Without NVGRE

While other NICs see a drop-off of as much as 65% of the intended bandwidth when using NVGRE, because they cannot access the inner packet for RSS to distribute traffic across multiple cores, ConnectX-3 Pro maintains near-line-rate throughput, as displayed in Figure 6.

ConnectX-3 Pro thus addresses the unintended performance degradation seen when using NVGRE, in both CPU usage and throughput. By using ConnectX-3 Pro, the scalability and security benefits of NVGRE can be realized without any decrease in performance and without a dramatic increase in CPU overhead.
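The straight-average method described above reduces to a one-liner. The input values here are illustrative placeholders for a single message size, not the paper's measured data:

```python
def overall_saving(send_pct: float, recv_pct: float) -> float:
    # Straight average of send- and receive-side CPU savings,
    # assuming one receive operation for every send and vice versa.
    return (send_pct + recv_pct) / 2.0

# Hypothetical example: a message size where send saves 68% and receive 26%.
print(overall_saving(68.0, 26.0))  # 47.0
```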
Summary

Today's virtualized data centers suffer from a lack of scalability and security, restricting the ability to build a large-scale cloud-based network. NVGRE was designed to inexpensively resolve these security and scalability challenges by enabling a virtual overlay network without the need to reconfigure or add to the Layer-2 network. While NVGRE is a step in the right direction, it introduces unforeseen performance challenges through the loss of existing hardware offloads and through incompatibility with RSS. In order to exploit the full potential of NVGRE, hardware that can properly address these performance challenges must be in place. This means a network controller that not only supports NVGRE, but that can also access the inner packet to enable all existing hardware offloads and RSS traffic distribution. The only such NIC on the market at present is Mellanox's ConnectX-3 Pro.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at www.mellanox.com.

Works Cited

1. Wall Street Journal, January 31, 2013
2. Gartner, Inc., Forecast Overview: Public Cloud Services, Worldwide, 2Q12 Update

350 Oakmead Parkway, Suite 100, Sunnyvale, CA
Copyright Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MLNX-OS, PhyX, SwitchX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, CoolBox, FabricIT, Mellanox Federal Systems, Mellanox Software Defined Storage, MetroX, MetroDX, Mellanox Open Ethernet, Open Ethernet, ScalableHPC, Unbreakable-Link, UFM and Unified Fabric Manager are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. WP Rev 1.0
WHITE PAPER Scaling-out Ethernet for the Data Center Applying the Scalability, Efficiency, and Fabric Virtualization Capabilities of InfiniBand to Converged Enhanced Ethernet (CEE) The Need for Scalable
Configuring Oracle SDN Virtual Network Services on Netra Modular System ORACLE WHITE PAPER SEPTEMBER 2015
Configuring Oracle SDN Virtual Network Services on Netra Modular System ORACLE WHITE PAPER SEPTEMBER 2015 Introduction 1 Netra Modular System 2 Oracle SDN Virtual Network Services 3 Configuration Details
Cisco Secure Network Container: Multi-Tenant Cloud Computing
Cisco Secure Network Container: Multi-Tenant Cloud Computing What You Will Learn Cloud services are forecast to grow dramatically in the next 5 years, providing a range of features and cost benefits for
VMDC 3.0 Design Overview
CHAPTER 2 The Virtual Multiservice Data Center architecture is based on foundation principles of design in modularity, high availability, differentiated service support, secure multi-tenancy, and automated
Demonstrating the high performance and feature richness of the compact MX Series
WHITE PAPER Midrange MX Series 3D Universal Edge Routers Evaluation Report Demonstrating the high performance and feature richness of the compact MX Series Copyright 2011, Juniper Networks, Inc. 1 Table
CloudEngine 1800V Virtual Switch
CloudEngine 1800V Virtual Switch CloudEngine 1800V Virtual Switch Product Overview Huawei CloudEngine 1800V (CE1800V) is a distributed Virtual Switch (vswitch) designed by Huawei for data center virtualization
SINGLE-TOUCH ORCHESTRATION FOR PROVISIONING, END-TO-END VISIBILITY AND MORE CONTROL IN THE DATA CENTER
SINGLE-TOUCH ORCHESTRATION FOR PROVISIONING, END-TO-END VISIBILITY AND MORE CONTROL IN THE DATA CENTER JOINT SDN SOLUTION BY ALCATEL-LUCENT ENTERPRISE AND NEC APPLICATION NOTE EXECUTIVE SUMMARY Server
Scalable Approaches for Multitenant Cloud Data Centers
WHITE PAPER www.brocade.com DATA CENTER Scalable Approaches for Multitenant Cloud Data Centers Brocade VCS Fabric technology is the ideal Ethernet infrastructure for cloud computing. It is manageable,
Virtual PortChannels: Building Networks without Spanning Tree Protocol
. White Paper Virtual PortChannels: Building Networks without Spanning Tree Protocol What You Will Learn This document provides an in-depth look at Cisco's virtual PortChannel (vpc) technology, as developed
Using LISP for Secure Hybrid Cloud Extension
Using LISP for Secure Hybrid Cloud Extension draft-freitasbellagamba-lisp-hybrid-cloud-use-case-00 Santiago Freitas Patrice Bellagamba Yves Hertoghs IETF 89, London, UK A New Use Case for LISP It s a use
Top-Down Network Design
Top-Down Network Design Chapter Five Designing a Network Topology Copyright 2010 Cisco Press & Priscilla Oppenheimer Topology A map of an internetwork that indicates network segments, interconnection points,
SOFTWARE-DEFINED NETWORKING AND OPENFLOW
SOFTWARE-DEFINED NETWORKING AND OPENFLOW Freddie Örnebjär TREX Workshop 2012 2012 Brocade Communications Systems, Inc. 2012/09/14 Software-Defined Networking (SDN): Fundamental Control
Hyper-V Network Virtualization Gateways - Fundamental Building Blocks of the Private Cloud
Hyper-V Network Virtualization Gateways - nappliance White Paper July 2012 Introduction There are a number of challenges that enterprise customers are facing nowadays as they move more of their resources
DCB for Network Virtualization Overlays. Rakesh Sharma, IBM Austin IEEE 802 Plenary, Nov 2013, Dallas, TX
DCB for Network Virtualization Overlays Rakesh Sharma, IBM Austin IEEE 802 Plenary, Nov 2013, Dallas, TX What is SDN? Stanford-Defined Networking Software-Defined Networking Sexy-Defined Networking Networking
Mellanox ConnectX -3 Firmware (fw-connectx3) Release Notes
Mellanox ConnectX -3 Firmware (fw-connectx3) Notes Rev 2.11.55 www.mellanox.com NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ( PRODUCT(S) ) AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX
Why Software Defined Networking (SDN)? Boyan Sotirov
Why Software Defined Networking (SDN)? Boyan Sotirov Agenda Current State of Networking Why What How When 2 Conventional Networking Many complex functions embedded into the infrastructure OSPF, BGP, Multicast,
Advancing Applications Performance With InfiniBand
Advancing Applications Performance With InfiniBand Pak Lui, Application Performance Manager September 12, 2013 Mellanox Overview Ticker: MLNX Leading provider of high-throughput, low-latency server and
Cloud Computing and the Internet. Conferenza GARR 2010
Cloud Computing and the Internet Conferenza GARR 2010 Cloud Computing The current buzzword ;-) Your computing is in the cloud! Provide computing as a utility Similar to Electricity, Water, Phone service,
Software-Defined Networks Powered by VellOS
WHITE PAPER Software-Defined Networks Powered by VellOS Agile, Flexible Networking for Distributed Applications Vello s SDN enables a low-latency, programmable solution resulting in a faster and more flexible
Ethernet-based Software Defined Network (SDN) Cloud Computing Research Center for Mobile Applications (CCMA), ITRI 雲 端 運 算 行 動 應 用 研 究 中 心
Ethernet-based Software Defined Network (SDN) Cloud Computing Research Center for Mobile Applications (CCMA), ITRI 雲 端 運 算 行 動 應 用 研 究 中 心 1 SDN Introduction Decoupling of control plane from data plane
