T1: Pre-SDN era: network trends in data centre networking



Summary

SDN support points
- SDN offers a solution to the current problems faced by STP and thus extends the usable network capacity (quantified in the sketch after the class discussion below).
- Novel protocol solutions (overlays) or more radical changes can improve network operations and simplify management.
- There are many overlay technologies that advance already deployed protocols or rely on existing management solutions. Building out functionality allows for gradual change and step-wise adoption.

SDN criticizing points
- Before using SDN in data centres we need a proper understanding of the problems to be solved. What are the gains and losses associated with each novel solution? What problems get solved and which remain? How do novel solutions scale? Does it pay off to adopt a new solution?
- How do different solutions compare with each other? Parallel evaluation is rarely available and not easily done.
- Compatibility between solutions is also an issue; often there is no compatibility support at all. Why not use standards to support compatibility and clear definitions?

Class discussion

SDN support points
- Incremental evolution or step-wise adoption can motivate the adoption of radical changes, which tend to be expensive and dangerous to adopt quickly. Test networks are needed even if they do not fully reflect the production environment. Adoption can start in an isolated part of the data centre.
- The choice for revolutionary solutions is often driven by business decisions. When business success depends on the rapid adaptation of the serving infrastructure, data centre owners are more open to introducing radical changes. Enterprises with high reliability requirements are more cautious with adopting a radical change and prefer gradual introduction and testing.
- Infrastructures built in one go are a perfect venue to experiment with novel solutions. Given the financial capability, operators can customize the data centre from scratch, adopt a new idea, and avoid compatibility issues within the data centre; external compatibility should still be respected. Organically evolving data centres adopt new solutions with more difficulty because of internal compatibility issues.

SDN criticizing points
- What are the problems in data centres? Increasing complexity of networking; virtualization support; scalability of traffic (broadcast storms need to be managed).
- Are the problems properly addressed? There are too many open questions on possible solutions -> solutions need to mature before wide-scale adoption takes place. Trying to solve too many problems with individual fixes leads to patchwork solutions. Either develop a solution that covers most current problems, or prioritize the problems and offer one solution for the high-priority ones.
- Can we properly evaluate possible solutions? Keeping solutions, or aspects of them, unclear makes evaluating their added value difficult. Equipment manufacturers profit from the lack of clear understanding of what an open-source solution may do and push their own agenda. There is no critical review comparing the pros and cons of the different solutions, especially not in practice.
- Don't we just swap one proprietary solution for another when looking at overlays? Why do proprietary solutions still get chosen -> support and training by the manufacturer, a single point of contact for troubleshooting, and explicit handling of prominent problems. Open-source solutions should generally be preferred, but they often suffer from overly basic functionality (trying to keep them open), no reliable support, and unclear definitions.
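The STP capacity point above is easy to quantify. Below is a minimal, hypothetical Python sketch (the topology and numbers are illustrative assumptions, not taken from the papers) comparing the usable uplink bandwidth of a small leaf-spine fabric when STP blocks all redundant uplinks versus when a multipath scheme such as ECMP can use them all.

    # Minimal sketch: aggregate uplink bandwidth with STP vs. multipath (ECMP).
    # Hypothetical topology: each leaf switch has `uplinks` links of
    # `link_gbps` each towards the spine layer.

    def usable_uplink_gbps(uplinks: int, link_gbps: float, multipath: bool) -> float:
        """STP keeps exactly one uplink in the forwarding state and blocks
        the rest to avoid loops; a multipath scheme forwards over all of them."""
        active = uplinks if multipath else 1
        return active * link_gbps

    uplinks, link_gbps = 4, 10.0
    stp = usable_uplink_gbps(uplinks, link_gbps, multipath=False)
    ecmp = usable_uplink_gbps(uplinks, link_gbps, multipath=True)
    print(f"STP : {stp:.0f} Gb/s usable of {uplinks * link_gbps:.0f} Gb/s installed")
    print(f"ECMP: {ecmp:.0f} Gb/s usable of {uplinks * link_gbps:.0f} Gb/s installed")

With four 10 Gb/s uplinks, STP leaves three quarters of the installed capacity idle; this is the "extends the network capacity use" argument in its simplest form.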

Critique

Papas, Chris
"Communication within Clouds: Open Standards and Proprietary Protocols for Data Center Networking"
1. Although it is a survey, the paper does not get a clear message through. Specifically, the paper is motivated by the novel requirements that cloud computing places on the network. These requirements are not covered by traditional network protocols; more precisely, the following problems are mentioned: i) limited resource pooling, as the tree topology of STP reduces the aggregate available bandwidth, ii) the choice of suboptimal paths, and iii) the complexity of recomputing paths in case of failure. The novel protocols presented are supposed to be solutions to these problems; however, the focus is mainly on resource pooling through link aggregation and on support for multipath communication at Layer 2. The other problems remain mostly untouched.
2. The layout and presentation of the paper are poor. Sections and subsections are not organized according to the points that the paper tries to make, confusing the reader when going from solutions at one layer to solutions at another. It is not clear that the drawbacks of Layer 3 solutions motivate the solutions at Layer 2 (the section is named "L3 vs. L2", but this does not show up in the content). Moreover, the paper claims that SPB addresses the same problems as TRILL, but these problems are never mentioned until the end of the SPB presentation. The individual subsections are not well presented either: figures are misleading, and the difference between LAG and MC-LAG is not clarified. The protocols are presented in detail, but not completely, and the reader is left confused about their actual functioning. When describing proprietary solutions, it is not clear which parts are secret and which are not, and unjustified claims are presented, e.g. "MAC learning technique in FabricPath maybe more efficient than in TRILL" - a sentence that carries zero information.
"Data Center Overlay Technologies"
3. This whitepaper from Cisco claims to present the technologies used to realize network overlays together with their benefits and challenges. The paper reads mostly as a reference about these technologies, describing details that are not necessarily useful (e.g. packet headers) while omitting any depth on the benefits and limitations. Where benefits and limitations are described, there appear to be inconsistencies: the limitations of all solutions except Cisco's own are described only vaguely. Also, Cisco FabricPath is presented before TRILL, making it seem as if TRILL borrows from FabricPath, which is not true.

Miladinovic, Djordje
1. The first paper, "Communication within Clouds: Open Standards and Proprietary Protocols for Data Center Networking", explains its purpose nicely in the abstract: it is supposed to point out the key requirements of cloud computing networks and the various approaches (implementations) proposed to address them. In my opinion the paper reaches this goal and explains both. However, regarding the first part, where the introduction discusses requirements and issues, a more structured presentation would help: clear points, divided into sections, would make it obvious to the reader exactly which problems the implementations described in the rest of the text address. So even though the authors explain the requirements, these should be separated as clearly as the implementations are. The second paper shows how this can be done.
2. Regarding the second paper, "Data Center Overlay Technologies", the authors explain why we would want to introduce network overlays into data centres. The paper clearly lays out the benefits of using them and the challenges we will face as a result. However, the authors fail to mention some of the drawbacks introduced by network overlays, e.g. additional header and packet processing, and additional complexity due to the extra layer (quantified in the sketch after this critique block). Mentioning these would not necessarily change the reader's opinion, but they are important to state.
3. Also regarding the second paper: many overlay-based technologies designed for data centres are described, which gives the reader a nice overview of possible solutions. However, a reader might not get a proper idea of which of these technologies are actually used or will be used, and which are only of historical or theoretical importance. In other words, the authors could emphasize why each presented approach matters and why it is worth understanding its logic.

Schmid, Stefan
1. Overlay networks: increased complexity; a virtual topology on top of the physical network complicates troubleshooting. Is it worth the additional cost and effort? Does it scale?
2. Cisco FabricPath vs. TRILL: why build vendor-specific extensions of a standard (possible improvements) with no backward compatibility or interoperability?
3. Most vendors seem to implement their own versions of standardized (!) protocols (with minimal improvements), making interoperability of devices from different manufacturers hard or even impossible. Why not stick to the standards?
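To make Miladinovic's overhead point concrete: a network-based overlay such as VXLAN wraps every tenant frame in an outer Ethernet/IP/UDP/VXLAN stack. The sketch below adds up the standard header sizes; the inner frame sizes used are illustrative assumptions.

    # Sketch: per-packet encapsulation overhead of a VXLAN-style overlay.
    # Standard header sizes in bytes (IPv4 outer header, no 802.1Q tag):
    OUTER_ETHERNET = 14   # outer MAC header
    OUTER_IPV4     = 20   # outer IP header
    OUTER_UDP      = 8    # outer UDP header
    VXLAN_HEADER   = 8    # VXLAN header carrying the 24-bit VNI

    overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER  # 50 bytes

    for inner_frame in (64, 512, 1500):  # illustrative inner frame sizes
        on_wire = inner_frame + overhead
        print(f"inner frame {inner_frame:4d} B -> {on_wire:4d} B on the wire "
              f"({overhead / inner_frame:5.1%} extra)")

Besides the extra bytes on the wire, every tunnel endpoint must perform the encapsulation and decapsulation itself, which is exactly the additional packet-processing cost the critique refers to.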

Schinde, Pravin

Criticism:
1. The exact problems that virtualization, migration, and multi-tenancy introduce for networking are not made very clear.
2. How exactly those problems get solved by the newer standards is not clear either.
3. The paper describes a few solutions at Layer 3, but it is not clear how they interact with the L2 solutions. Are they mutually exclusive with all L2 solutions?

Summary: The paper tries to explain the problems and new challenges in L2 routing in a data centre setup due to the heavy use of virtualization. It explains the limitations of the traditional Spanning Tree Protocol (STP) and then presents SPB, TRILL, SDN, and OpenFlow as possible solutions to these problems. (A small flow-table sketch follows this summary.)
- Challenges due to the use of virtualization in cloud data centres
  - Networks must contend with huge numbers of attached devices (both physical and virtual)
  - Large numbers of isolated, independent subnetworks; multi-tenancy
  - Automated creation, deletion, and migration of virtual machines
- Important concerns for data centre operators
  - Supporting virtualization
  - Supporting industry standards
  - Multi-vendor interoperability
- Limitations of the Spanning Tree Protocol (STP)
  - Reduces the aggregate available bandwidth by blocking redundant paths
  - STP cannot be broken down into smaller STPs for multi-tenancy and fault tolerance
  - The time taken to recompute the spanning tree is non-deterministic (seconds to minutes), even though it is on the critical path
- Multiple STP (MSTP)
  - Creates a separate spanning tree for each VLAN
  - Blocks all but one alternate path between VLANs
- Multi-Chassis Link Aggregation (MC-LAG)
  - Allows bonding of two or more links into a single logical link
  - STP views the LAG as a single link
- L3 solution
  - Fat-tree design with Equal-Cost Multi-Path (ECMP) routing
  - No links are blocked with this approach
  - This approach can scale up to 1K/2K ports
  - VM migration is limited to servers within the same VLAN subnet
- TRILL (Transparent Interconnection of Lots of Links)
  - Runs between Routing Bridges (RBridges)
  - Runs a special protocol to find shortest paths between switches on a hop-by-hop basis
  - Uses distribution trees for broadcasts and new multicasts
  - Forwarding includes recalculation of TRILL headers and MAC swapping at each hop
  - It is extensible
- Shortest Path Bridging (SPB)
  - Provides a service identifier (I-SID) that is completely separate from the backbone MACs and VLAN IDs
  - The I-SID abstracts the service from the network
  - VLANs can be mapped to an I-SID
  - End hosts are aware of the entire path
- Software Defined Networking (SDN)
  - A software overlay and a network controller that can be programmed by an attached server
  - SDN can simplify network control and management, automate network virtualization services, and serve as a platform to build agile network services
  - SDN uses network virtualization overlays and the OpenFlow standard
  - A building block for Networking as a Service (NaaS)
- OpenFlow specification
  - Developed by the Open Networking Foundation (ONF); the initial members were the users, not the manufacturers
  - OpenFlow takes advantage of the fact that most modern switches already maintain flow tables internally
  - An OpenFlow switch consists of flow tables in the switch, a remote controller on a server, and a secure communication channel between them
  - OpenFlow standardizes a common set of functions that operate on these flows
- Open Datacenter Interoperable Network (ODIN)
  - Describes best practices for designing a data centre network
  - Describes the evolution from traditional enterprise data networks into a multi-vendor environment optimized for cloud computing
  - It deals with TRILL, SDN, OpenFlow, and similar standards
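As referenced in the summary above, the match-plus-actions structure that OpenFlow standardizes can be shown in a few lines. The following Python sketch is a toy model, not the OpenFlow wire protocol: the rule fields, action names, and the drop-on-miss fallback are simplified assumptions for illustration.

    # Toy model of an OpenFlow-style flow table: each entry matches on packet
    # header fields and yields a list of actions; priorities break ties.
    WILDCARD = None  # field value that matches anything

    flow_table = [
        # (priority, match fields, actions)
        (200, {"ip_dst": "10.0.1.5", "tcp_dst": 80},       ["output:port2"]),
        (100, {"ip_dst": "10.0.1.5", "tcp_dst": WILDCARD}, ["output:port3"]),
        (0,   {"ip_dst": WILDCARD,  "tcp_dst": WILDCARD},  ["send_to_controller"]),
    ]

    def lookup(packet: dict) -> list:
        """Return the actions of the highest-priority matching entry."""
        for _, match, actions in sorted(flow_table, key=lambda e: e[0], reverse=True):
            if all(v is WILDCARD or packet.get(k) == v for k, v in match.items()):
                return actions
        return ["drop"]  # table-miss behaviour (assumption in this sketch)

    print(lookup({"ip_dst": "10.0.1.5", "tcp_dst": 80}))  # ['output:port2']
    print(lookup({"ip_dst": "10.0.9.9", "tcp_dst": 22}))  # ['send_to_controller']

The lowest-priority catch-all entry models the common pattern of punting unknown flows to the remote controller, which then installs a more specific entry.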

Yu, Xinyuan
1. In my opinion, the main problem is that many of these protocols do not interoperate with each other. The next generation of protocols will emerge either when these protocols come to support each other or when a general protocol appears. However, I wonder whether it is possible for the protocols to support each other, and how much it would cost.
2. The paper mentions certain problems with data centre overlay technologies: (1) they decrease the visibility of the fabric as a whole, because some network constructs exist only in the overlay network, and (2) they increase troubleshooting complexity. Network-based overlays enable workload placement anywhere, simplified and automated workload provisioning, and multi-tenancy at scale. Host-based overlays are beneficial under the assumption that the underlying network is extremely reliable and trustworthy, and I wonder whether that assumption is practical. No experimental results are presented in the paper, so I do not know whether the benefits outweigh the trade-offs.
3. The paper does not mention anything about security. I wonder whether the changed technology will increase the security risk.
4. The paper's structure first presents the current problems and then introduces overlay technologies that may solve them. However, it does not discuss whether there are alternative solutions to these problems, so I do not know if alternatives exist and whether the overlay approach is better than the others.

Van Gelden, Jasper

Oppose
- Current networks with STP have scalability problems. This is caused by not making optimal use of the network: with STP, certain links are disabled in order to prevent loops, which means the installed infrastructure is not used optimally.
- The papers are right when it comes to the inflexibility of conventional network technologies. Take, for example, your laptop or mobile phone: you walk around ETH and expect the same connectivity everywhere. This puts a major strain on conventional networks, because serving the same network over all access points means one huge broadcast domain spanning all of them. It gets even worse when you want the same connectivity on the other side of the globe. A private connection can solve this for a single person, but what if you have 500 people there? Send all traffic over a single VPN, or load-balance over multiple VPNs? That does not sound efficient. The same goes for live migration of servers: you do not want to change all the routes in your network just because you migrate one server; you just want to tell the network on which node the server can now be found (a minimal sketch of this idea follows below).
- Network administration in conventional networks is tedious work. Right now the routes are static and inflexible. With SDN it becomes possible to define paths over nodes in a distributed manner, defining your network once and rolling it out in one go.

Criticize
- From these papers we can conclude that every manufacturer is again developing its own protocols, and there is no protocol that is fully supported by all manufacturers. So right now is not the time to move to SDN, because I do not think the benefits will outweigh the pain caused in planning, designing, and implementing a new infrastructure. If you want to use TRILL, for example, you are bound to the few manufacturers that implement it, which directly limits your implementation choices; conflicts may also arise when you have devices from different vendors in your network.
- The papers do not describe the management complexity that these protocols add. With current technologies we can easily see which traffic goes where, but when we start stacking layers the complexity increases rapidly, and things like security flaws might creep in.
- So many protocols are described in these papers that, to benefit, you must make a protocol choice. I do not think it is wise to start combining all of these new technologies in your infrastructure. Yes, they all have different trade-offs and their own purpose, but when you start combining them you should not forget that someone has to manage all of this.
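The live-migration point above is essentially the mapping that overlay schemes such as VXLAN maintain at their tunnel endpoints: a table from a VM's identity to the physical node currently hosting it. The sketch below is a deliberately simplified, hypothetical model (the MAC and IP addresses are invented) of how a migration only updates that mapping instead of re-routing the underlay.

    # Sketch: an overlay's location mapping, in the spirit of a VXLAN VTEP
    # table. Migrating a VM changes one mapping entry; the physical
    # (underlay) routes stay untouched. Addresses below are invented.

    vm_location = {
        # VM MAC address     -> IP of the physical node (tunnel endpoint)
        "02:00:00:00:01:aa": "192.0.2.11",
        "02:00:00:00:01:bb": "192.0.2.12",
    }

    def deliver(vm_mac: str, frame: bytes) -> str:
        """Encapsulate the frame towards whichever node currently hosts the VM."""
        node = vm_location[vm_mac]
        return f"tunnel {len(frame)}-byte frame to {node}"

    print(deliver("02:00:00:00:01:aa", b"hello"))   # goes to 192.0.2.11

    # Live migration: only the mapping is updated, not the network's routes.
    vm_location["02:00:00:00:01:aa"] = "192.0.2.13"
    print(deliver("02:00:00:00:01:aa", b"hello"))   # now goes to 192.0.2.13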

Defense

Lee, Tae-Ho
1. The "Data Center Overlay Technologies" whitepaper serves as a nice introduction to the various overlay solutions used in data centres, especially for people who are not familiar with such technologies. In particular, the visual representation of the header structures provided for each architecture makes it much easier to understand how the overlay solutions work.
2. The "Communication within Clouds" paper serves as a reference guide for getting exposure to the new technologies being considered for data centre networks. Although the paper does not provide sufficient detail on the technologies, it offers a high-level perspective, so that an interested reader can dig deeper into the technologies s/he finds interesting.
3. Finding a good order in which to present the technologies can be difficult, and I think the paper did a good job: starting from the problems of STP, it covers the mature L2 technologies that allow redundant links. It then describes traditional L3 ECMP routing, which makes use of redundant links, and points out the limitations of L3 solutions to motivate the discussion of L2 ECMP solutions such as TRILL and SPB. The paper also discusses other approaches, such as SDN. However, I think the presentation could be improved; for example, the transition from the previous section to the section "Layer 3 v. Layer 2 Design for Cloud Computing" seems very abrupt.

Chang, Michael
1. Full utilization of resources: As stated many times in the paper, STP only utilizes a fraction of the link bandwidth in a network. Since STP selects only one path to a particular endpoint, in today's complex networks there will very likely be alternative paths that are not used at all. Furthermore, upon the loss of a particular network resource, STP needs to recalculate the spanning tree; this reconfiguration time can exceed a minute. The protocols described by the paper (Cisco FabricPath, TRILL, SPB) circumvent this problem, as they all use ECMP/multipath schemes in the data plane, enabling more of the available bandwidth to be used (see the hashing sketch at the end of this section).
2. Leveraging established technologies: None of the protocols described in these papers is extremely radical. Primarily, they rely on IS-IS, a proven routing protocol that has long been prevalent in the industry. Likewise, ECMP has an established basis, as a version of it is already included in IEEE 802.1Qbp. Similarly, TRILL is being developed by the creator of the spanning tree algorithm (Radia Perlman), and the primary change one would have to make to a data centre is to add Routing Bridges to the network. As far as industry adoption goes, it is clear that CIOs are more comfortable with incremental advancements (as opposed to reinventing the wheel, like SDN).
3. Efficient management of the network: As discussed in the Cisco paper, network overlays enable a clean separation of management between the network operators and the tenants. This would greatly simplify the work of the network operator and reduce the possibility of administrator errors. In the "Communication within Clouds" paper, SDN is alluded to as an effective solution towards the goal of efficient network management within the data centre (which, coincidentally, is the topic of this course...).

Birkner, Rüdiger
1. MSTP, Rapid STP, and SPB are gradual developments and therefore enhancements of existing, tested protocols. They allow data centre operators to build upon their existing infrastructure.
2. Overlay networks (TRILL, VXLAN, NVGRE) allow the creation of virtual networks: administration of the underlying network and of the virtual tenant networks can be separated.
3. SDN potentially addresses many of the mentioned issues arising from the plethora of VMs, network virtualisation, and VM mobility. It seems promising, since many big players (Facebook, Google, Cisco, Juniper, etc.) support its development. (However, the paper doesn't mention any actual ideas on how to address the issues.)
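Chang's multipath point rests on how ECMP spreads traffic: a switch hashes each packet's flow identifier and uses the hash to pick one of several equal-cost next hops, so flows fan out across redundant links while the packets of any one flow stay in order on one path. The sketch below is an illustrative toy; the hash choice and port names are assumptions, not taken from any of the protocols discussed.

    # Toy ECMP next-hop selection: hash the 5-tuple, pick one of the
    # equal-cost paths. Real switches use hardware hash functions; this
    # only shows the principle that one flow always maps to the same link.
    import hashlib

    next_hops = ["spine1", "spine2", "spine3", "spine4"]  # equal-cost paths

    def pick_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
        """Deterministically map a flow's 5-tuple onto one of the next hops."""
        key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
        return next_hops[digest % len(next_hops)]

    # Two different flows may take different links; one flow is stable.
    print(pick_next_hop("10.0.0.1", "10.0.1.5", "tcp", 33000, 80))
    print(pick_next_hop("10.0.0.2", "10.0.1.5", "tcp", 45000, 80))
    print(pick_next_hop("10.0.0.1", "10.0.1.5", "tcp", 33000, 80))  # same as first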