Revolution in Data Centers: Cloud Impact on Data Center Network


WHITEPAPER

If you are running an in-house data center, you have probably heard about many new technologies that promise to make it more software-based and cloud-like. Hyper-converged infrastructure, micro-segmentation, containers, network virtualization, SDN (Software Defined Networking), NFV (Network Function Virtualization), Clos fabrics and so on are terms we hear every day. But what is the story behind them?

The giant Web 2.0 cloud companies such as Amazon, Microsoft, Facebook and Google pioneered the concept of a cloud orchestration platform: a platform with a web interface where you can log in, create and destroy virtual servers, spin them up, create a virtual private network between them, insert a virtual firewall, and so on. This kind of infrastructure is a dream for every enterprise that runs a data center: a highly integrated, self-service platform with no dependency on specific hardware, which can scale out.

Servers and storage have already evolved; now it is the network's turn

The server industry changed a long time ago. There was a time when vendors sold server hardware bundled with their own proprietary operating system. Today a server is a fully integrated system combining motherboard, CPUs, RAM, disks and interfaces. There are thousands of vendors and models out there, but they all run YOUR choice of operating system: Windows, some flavor of Unix/Linux, or a hypervisor. Your application runs on top of the operating system and is unaware of what happens in the hardware.

In the storage world, the revolution happened only recently. The use of massive central Fibre Channel storage systems is declining, and storage is moving closer to the servers. With the introduction of storage virtualization, every server with a bunch of disks becomes a storage node and participates in a virtual storage area network. Storage virtualization disrupted the storage market.

In fact, storage virtualization was another example of separating hardware from software. It does not matter what kind of storage hardware you have, a JBOD (a server with just a bunch of disks) or a Fibre Channel SAN array with many disk enclosures; all you need in order to manage it is access to your virtual storage console, which handles your disks, RAID groups, LUNs, volumes, replication, backups and so on.

New demands are loading the network: the need for more speed and bandwidth

The explosion of mobile devices and rich media content, server virtualization, containers, Big Data and IP storage are among the trends driving the networking industry to re-examine traditional network architectures. Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture is ill-suited to the dynamic computing and storage needs of today's enterprise data centres. Some of the key trends driving the need for a new network paradigm are as follows:

Changing traffic patterns to east-west: Within the enterprise data centre, traffic patterns have changed significantly. In contrast to client-server applications, where the bulk of the communication occurs between one client and one server, today's applications access many different databases and servers, creating a flurry of east-west machine-to-machine traffic before returning data to the end-user device in the classic north-south pattern.

Rise of servers with high-speed NICs (40G/100G): 10G network adapters are being replaced by 40Gbps and 100Gbps adapters in servers. These cards are not expensive; a 100Gbps Mellanox ConnectX-4, for example, can be purchased for around $1,250 on Amazon.com. Your next compute refresh or additional servers may well come with multiple 40Gbps or 100Gbps adapters, and your network needs to be ready to carry the traffic they generate. And yes, with frameworks such as DPDK a server OS really can fill a 100Gbps link.

Distributed security, the rise of virtual firewalls, IDP and service chaining: Data centers used to rely on high-performance central firewalls, WAFs and IPS devices to secure traffic going to servers and virtual machines. With traffic patterns shifting to east-west, enterprises realized they also need to secure virtual-machine-to-virtual-machine communication. Virtual firewalls therefore started to grow, and thanks to SR-IOV they deliver good performance. Security is the most important factor in cloud deployments. Enterprises require a security platform in which they can create simple security service policies, define which traffic should pass through a firewall before reaching the destination virtual machine, and, more importantly, have the policies stick to the virtual machine regardless of the host it lives on.

Big data is swimming over the IP network: Handling today's big data and mega datasets requires massively parallel processing on thousands of servers, all of which need direct connections to each other. The rise of mega datasets is fuelling a constant demand for additional network capacity in the data centre. Operators of hyperscale data centre networks face the daunting task of scaling the network to previously unimaginable size while maintaining any-to-any connectivity without going broke. Hadoop clusters not only use HDFS over the IP network; their work distribution and parallel processing also run over your Ethernet network.

High-definition content and video are part of business: The rise of full HD, 2K and 4K recording and playback devices has a direct impact on the network. Business units, marketing and users all want to publish and stream large, high-quality video files. Hosting video collaboration and conferencing services is another heavy load, since the network has to satisfy tight SLAs, quick delivery and QoS for the video conferencing sessions running on the conference servers.

The rise of cloud services: Enterprises have enthusiastically embraced both public and private cloud services, resulting in unprecedented growth of these services. Enterprise business units now want the agility to access applications, infrastructure and other IT resources on demand, with an experience similar to that offered by giant cloud companies such as Amazon, Microsoft and Google.

The consumerization of IT: Users increasingly employ personal mobile devices such as smartphones, watches, tablets and notebooks to access the corporate network. IT is under pressure to accommodate these personal devices in a fine-grained manner while protecting corporate data and intellectual property and meeting compliance mandates.

The need for isolated virtual networks for VMs and containers: Just as in public clouds, your users need to create isolated networks between their virtual machines and containers. Creating more and more VLANs on all of your switches is a big task, and it is still not enough: you will run short of available VLAN IDs, and your network configuration will become complex and messy. You need newer technologies, such as overlay networks, to separate the traffic instead of relying on VLANs alone (see the sketch after this list).
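As an illustration of the overlay idea, the minimal sketch below (assuming a Linux host with the iproute2 tools and Python 3; the interface names, VNI and multicast group are hypothetical) creates a VXLAN segment on top of an existing IP underlay. VXLAN carries a 24-bit VNI, allowing roughly 16 million isolated segments compared with the 4094 usable VLAN IDs of 802.1Q.

```python
#!/usr/bin/env python3
"""Minimal sketch: create a VXLAN overlay segment on a Linux host.

Assumptions (not taken from the whitepaper): the host has the iproute2
tools installed, the physical uplink is eth0, VNI 10100 and multicast
group 239.1.1.1 are free to use, and the script runs as root.
"""
import subprocess

VNI = 10100                 # 24-bit VXLAN Network Identifier (up to ~16M segments)
UPLINK = "eth0"             # physical interface carrying the IP underlay
GROUP = "239.1.1.1"         # multicast group used for BUM traffic in this sketch
VTEP_PORT = "4789"          # IANA-assigned VXLAN UDP port


def run(cmd):
    """Run an iproute2 command and fail loudly if it does not succeed."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Create the VXLAN tunnel interface on top of the routed underlay.
    run(["ip", "link", "add", f"vxlan{VNI}", "type", "vxlan",
         "id", str(VNI), "group", GROUP, "dev", UPLINK, "dstport", VTEP_PORT])
    # Attach it to a bridge so local VMs/containers can join the segment.
    run(["ip", "link", "add", f"br{VNI}", "type", "bridge"])
    run(["ip", "link", "set", f"vxlan{VNI}", "master", f"br{VNI}"])
    for dev in (f"vxlan{VNI}", f"br{VNI}"):
        run(["ip", "link", "set", dev, "up"])
```

In a real deployment the VTEP would normally be created by the hypervisor, the NOS or an SDN controller rather than by hand; the point is that tenant isolation comes from the VNI in the packet header, not from switch-by-switch VLAN configuration.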

Sorry, network, we have to shake you up

After all of these changes, people started asking: why should we not be able to control and configure the network from the cloud orchestration platform? Why can we not have a network virtualization platform that is independent of the data center leaf (edge) and spine (core) switches? Why are we stuck in the CLIs of individual switches?

At the same time, the ODM (Original Design Manufacturer) companies that build network switch hardware for the giant networking vendors started offering the same switch hardware without any switch software; it ships with an OS installer called ONIE, similar to PXE on servers. That was the birth of bare-metal, or white-box, switches.

[Figure: Generic bare-metal switch components]

Why use bare-metal switches?

There are multiple reasons for using bare-metal switches in a data center. Some of the important ones are:

1- Economics and low TCO: Bare-metal switches are extremely economical in both CAPEX and OPEX compared with black-box switches from the big vendors. The main reason is that the big vendors charge a very high premium for networking equipment, which the end user is forced to pay; premiums range from 50% to 200%. Big vendors also sell vendor-locked SFP optics at very high prices, so the end user ends up paying a huge premium for optics as well (a $70 10G SFP+ being sold for $400 after discounts).

2- High speed: Bare-metal switches come in 10G, 40G and 100G configurations and use the latest merchant silicon from vendors such as Broadcom.

3- Choice of Network Operating System (NOS): End users can load their choice of NOS onto a bare-metal switch, much like loading an OS on a server. The operating system can be changed without replacing the switch hardware; for example, if you are running an L2/L3 NOS and want to deploy an SDN controller, you can simply replace the NOS on the switches with one that supports SDN.

4- SDN compatibility: NOS for bare-metal switches support the standard OpenFlow, NETCONF and OVSDB protocols, which makes the switch SDN-compatible. If you are using an SDN controller, the controller can communicate with and manage the bare-metal switch.

5- Support for cloud orchestration (VMware and OpenStack): Most NOS support integration with cloud orchestration platforms such as VMware (vCenter and NSX) and OpenStack. This allows the administrator to configure the network through the orchestration tools. For example, after a VLAN is created in VMware vCenter, the same VLAN is created on your data center switches and the dot1q trunk interfaces are configured automatically.

6- Programmability and automation: Most NOS support network programmability through popular DevOps tools such as Puppet, Chef and Ansible. You can create scripts to perform mass changes on a large number of switches in a single shot, or integrate the network with your cloud or even your ITSM ticketing system (see the sketch after this list).

7- Reliability: These NOS are built on the rock-solid foundation of Linux. Processes are well managed, and a misbehaving process does not impact the performance and stability of the underlying ASIC and Linux kernel.

8- Multiple hardware vendors: You can mix and match switch hardware from different vendors in your network, because they all run your chosen network operating system (similar to having HP and Dell servers in your data center that both run VMware ESXi).
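To make point 6 concrete, here is a minimal sketch of the kind of mass change a Linux-based NOS allows. It assumes switches reachable over SSH with key-based login and passwordless sudo, an inventory file of hostnames, and a NOS that reloads ifupdown2-style interface configuration with "ifreload -a" (as Cumulus Linux does). The hostnames, file paths and commands are illustrative; in practice you would more likely use the Ansible or Puppet modules supplied by the NOS vendor.

```python
#!/usr/bin/env python3
"""Minimal sketch: push the same change to a rack of bare-metal switches over SSH.

Assumptions (illustrative, not from the whitepaper): the switches run a
Linux-based NOS, are listed one hostname per line in switches.txt, accept
key-based SSH as user 'admin' with passwordless sudo, and reload interface
configuration with 'ifreload -a'.
"""
import paramiko

COMMANDS = [
    # Example mass change: raise the MTU on fabric ports for a VXLAN underlay.
    "sudo sed -i 's/mtu 1500/mtu 9216/' /etc/network/interfaces",
    "sudo ifreload -a",
]


def run_on_switch(host: str) -> None:
    """Open an SSH session to one switch and apply the change set."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="admin")          # key-based auth assumed
    try:
        for cmd in COMMANDS:
            _, stdout, stderr = client.exec_command(cmd)
            if stdout.channel.recv_exit_status() != 0:
                print(f"{host}: '{cmd}' failed: {stderr.read().decode().strip()}")
                return
        print(f"{host}: done")
    finally:
        client.close()


if __name__ == "__main__":
    with open("switches.txt") as inventory:
        for line in inventory:
            if line.strip():
                run_on_switch(line.strip())
```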

Who are the bare-metal hardware vendors?

OCP (http://www.opencompute.org) organizes and maintains the standard designs for compute, storage and network hardware, and provides OCP certification to products that follow the OCP design principles. The hardware manufacturers design the switch motherboard for integration with the main switch ASIC, together with the peripherals such as fans, power supplies and, most importantly, the CPU card.

Agema: A US-based company and leading provider of network switches. Portfolio: 1G RJ45, 10G RJ45, 1/10/40/100G.

Edge-Core (a brand of Accton): One of the largest ODM providers in Taiwan, supplying ODM services to many big networking vendors; its switches are popular across all network operating systems. Portfolio: 1G PoE RJ45, 10G RJ45, 1/10/40/100G.

Dell: One of the first established vendors to support bare-metal and ship a branded bare-metal switch. Portfolio: 10/40G.

HP: One of the first established vendors to support bare-metal and ship a branded bare-metal switch. Portfolio: 10/40G.

Interface Masters: A US-based company and leading provider of high-speed networking products. Portfolio: 1/10/40G SFP.

Inventec: An ODM company established in Taiwan. Portfolio: 10/40/100G.

Mellanox: A well-known brand in networking and HPC; its recent Spectrum range of switches is bare-metal. Portfolio: 10/40/100G.

Netberg: A Taiwan-based manufacturer producing network, server and storage hardware. Portfolio: 1/10/40/100G SFP.

Penguin Computing: A US company producing enterprise HPC systems and network switches. Portfolio: 1G RJ45, 10G RJ45, 1/10/40/100G SFP.

Quanta (QCT): Established in 1988, Quanta Computer produces computing hardware for many big brands. Portfolio: 1G RJ45, 10G RJ45, 1/10/40G SFP.

SuperMicro: A well-known brand producing enterprise data center products. Portfolio: 10/40/100G.

(The original table also ranked each vendor's popularity in the market, measured by the number of deployments and how often the unit appears in discussions, mailing lists and NOS vendors' forums.)

What are the available NOS (Network Operating Systems) for bare-metal switches?

There are multiple NOS that can be loaded onto bare-metal switches. Each NOS has its own HCL (Hardware Compatibility List), which states which switch hardware is supported. The available NOS for bare-metal switches are:

Cumulus Linux (commercial, licensed per switch platform: 1G, 10G, 40G or 100G): From one of the first companies to create a NOS, based in California. Built on Debian Linux. L2/L3 features; flexible, with a Linux bash shell, many modules for Puppet and Ansible, and native support for Python and Perl. Cumulus VX can be downloaded as a virtual appliance for testing.

Pica8 (commercial, licensed by feature set: L2/L3, SDN, or a bundle): From the creators of XORP (Linux routing software), based in California. Supports both OpenFlow and L2/L3, with a standard network CLI similar to JunOS. A trial version can be downloaded, limited to 4 switch ports.

OcNOS (commercial): From the creators of Zebra (Linux routing software). L2/L3 features; supports MPLS.

Switch Light OS (commercial, part of Big Switch Big Cloud Fabric, licensed by number of switches): From Big Switch Networks, one of the early companies in this market; it is installed automatically by Big Switch Big Cloud Fabric. A complete SDN platform with a standard CLI, API and modern web interface; integrates with VMware vCenter and OpenStack.

OpenSwitch (open source, free): From HPE. An active project with multiple developers working on it. L2/L3 features.

Open Network Linux (open source, free): Supported by Big Switch Networks; supports Facebook FBOSS. A base operating system on which the user configures fabric agents such as OF-DPA, OpenNSL and SAI; also supports Quagga, BIRD and Azure SONiC.

Dell OS10 (the base module is a free Linux OS for the switch; the OS10 apps are commercial): Under development at Dell; it comes as an OS10 base module plus OS10 apps that provide mainly L2, L3 and SDN features. Dell expects the OS10 base module to begin shipping in March, with Dell-developed application modules entering beta testing for release later in the year.
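Several of the NOS above advertise integration with OpenStack Neutron and VMware vCenter/NSX. The sketch below, a minimal illustration using the openstacksdk Python library (the cloud name, network name and subnet range are hypothetical), shows the operator-side workflow: the tenant network is created once in the orchestrator, and an integrated NOS or SDN controller is expected to provision the corresponding VLAN or VXLAN segment on the physical switches.

```python
#!/usr/bin/env python3
"""Minimal sketch: create a tenant network in OpenStack Neutron.

Assumptions (illustrative): an OpenStack cloud named 'mycloud' is defined in
clouds.yaml, and the fabric NOS / SDN controller is integrated with Neutron so
that it provisions the matching segment on the bare-metal switches.
"""
import openstack

# Connect using credentials from clouds.yaml / environment variables.
conn = openstack.connect(cloud="mycloud")

# One API call in the orchestrator; the integrated fabric is expected to
# configure the physical underlay (VLANs / VXLAN VNIs, trunk ports) to match.
network = conn.network.create_network(name="app-tier-net")
subnet = conn.network.create_subnet(
    name="app-tier-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="10.20.30.0/24",
)

print(f"Created network {network.id} with subnet {subnet.cidr}")
```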

How can you use this opportunity to deploy a bare-metal network?

Depending on the current and planned state of your data center, you can start planning to adopt bare-metal switches and your choice of NOS. Most existing enterprise data centers fall into one of the configurations below; in each case the foundation is a bare-metal 10/40/100G fabric in a Clos design for the data center or a POD:

Data center running or planning OpenStack: Big Cloud Fabric P+V (Physical + Virtual) SDN solution, integrated with OpenStack.

Data center running or planning VMware SDDC, vRealize and NSX: Cumulus Linux underlay with VXLAN and VTEP support, integrated with VMware NSX and OpenStack Neutron.

Traditional server virtualization with VMware vSphere and vCenter: Pica8 underlay with L2/L3 features, on its own or with an SDN controller.

Traditional server virtualization with other hypervisors and physical servers: Big Cloud Fabric Physical SDN solution, integrated with VMware vSphere and OpenStack.

© 2016 ArpaWare Ltd. All rights reserved. The information contained herein is subject to change without notice. All brand names, vendor names and product names, whether acknowledged or not, are understood to be registered trademarks of their respective owners; they are used in this document only to reference the technology or products of these companies and not to make any other claims. ArpaWare produces no hardware or software product and has only performed an analysis of technologies from vendors available in the market. Words such as "partner" and "vendor" are used generically, as in the IT industry, to denote the promotion and use of products and technologies by manufacturers. ArpaWare presents the technical details in good faith as per standard industry practice and cannot guarantee the accuracy of any technical information provided. ARPAWARE MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE EQUIPMENT AND SPECIFICATIONS CONTAINED IN THIS DOCUMENT. ArpaWare shall not be liable for technical or editorial errors or omissions contained herein. Errors and omissions, if any, are unintentional and regretted.