Optical Networks with ODIN in Smarter Data Centers: 2015 and Beyond




Optical Networks with ODIN in Smarter Data Centers: 2015 and Beyond
Dr. Casimer DeCusatis, Distinguished Engineer & CTO, System Networking Strategic Alliances, IBM Corporation, Poughkeepsie, NY

Overview
Motivation & Framework for Enterprise Optical Data Center Networking
Integrated, Automated, Optimized: Solving Problems in Traditional Data Center Networks
Optical/Copper Link Applications: Cost and Power Tradeoffs
Virtualization & Software-Defined Networking
High Performance Computing Applications: The Road to Exascale Computing
Research Topics: Silicon Photonics, Optical PCBs, Advanced Packaging Technologies
Conclusions

The Need for Progress is Clear
30 percent: Energy costs alone represent about 30% of an office building's total operating costs
18%: Anticipated annual increase in energy costs
42 percent: Worldwide, buildings consume 42% of all electricity, up to 50% of which is wasted
20x: Growth in density of technology during this decade; energy costs are higher than capital outlay
85%: In distributed computing, 85% of computing capacity sits idle
50%+: More than half of our clients have plans in place to build new data center/network facilities because they are out of power, cooling, and/or space

Landscape of Interconnect Technology
From A. Benner, "Optical interconnect opportunities in supercomputers and high end computing," OFC 2012

Data Center Interconnect

Transformations in the Modern Data Center
(Charts) Forecasted evolution of Ethernet (IEEE): FC and FCoE SAN switch and adapter port shipments by speed (4 Gbps, 8 Gbps, >16 Gbps, FCoE), 3Q09 through 2Q12, in millions of ports. Forecasted evolution of Fibre Channel (Infonetics): total FCoE port shipments (millions of ports, bars) and revenue ($ billions, line), 2008-2014. Source: Dell'Oro Ethernet Switch Report, Five Year Forecast 2011-2015.

According to the Cisco Cloud Index (January 2012), more than 50 percent of computing workloads in data centers will be cloud-based by 2014, and global cloud traffic will grow more than 12 times by 2015, to 1.6 Zettabytes per year, the equivalent of over four days of business-class video for every person on Earth.

The Open Datacenter Interoperable Network (ODIN)
Standards and best practices for data center networking
Announced May 8 as part of InterOp 2012: five technical briefs (8-10 pages each), a 2-page white paper, and a Q&A
http://www-03.ibm.com/systems/networking/solutions/odin.html
A standards-based approach to data center network design, including descriptions of the standards that IBM and our partners agree upon
IBM System Networking will publish additional marketing assets describing how our products support the ODIN recommendations, along with technical white papers and conference presentations describing how IBM products can be used in these reference architectures
See IBM's Data Center Networking blog: https://www-304.ibm.com/connections/blogs/DCN/entry/odin_sets_the_standard_for_open_networking21?lang=en_us
And Twitter feed: https://twitter.com/#!/ibmcasimer

Traditional Closed, Mostly Proprietary Data Center Network
(Diagram) A WAN feeds an oversubscribed core layer, with optional additional networking tiers and dedicated connectivity for server clustering. Dedicated firewall, security, load balancing, and L4-7 appliances hang off an oversubscribed aggregation layer at the traditional Layer 2/3 boundary. An oversubscribed access layer connects various types of application servers and blade chassis over 1 Gbps Ethernet links, with a separate SAN attached over 2, 4, or 8 Gbps Fibre Channel links and 1 Gbps iSCSI/NAS links.

Traditional Data Center Networks: B.O. (Before ODIN)
Historically, Ethernet was used to interconnect stations (dumb terminals), first through repeaters and hubs and eventually through switched topologies. Not knowing better, we designed our data centers the same way. The Ethernet campus network evolved into a structured network characterized by access, aggregation, services, and core layers, which could have 3, 4, or more tiers. These networks are characterized by:
Mostly north-south traffic patterns
Oversubscription at all tiers
Low virtualization and a static network state
Use of spanning tree protocol (STP) to prevent loops
Layer 2 and 3 functions separated at the access layer
Services (firewalls, load balancers, etc.) dedicated to each application in a silo structure
Network management centered in the switch operating system
Complex, often proprietary features and functions

Problems with Traditional Networks
Too many tiers: each tier adds latency (10-20 us or more), and the cumulative effect degrades performance; oversubscription (in an effort to reduce tiers) can result in lost packets.
Does not scale in a cost-effective or performance-effective manner: scaling requires adding more tiers, more physical switches, and more physical service appliances; management functions do not scale well; STP restricts topologies and prevents full utilization of available bandwidth; the physical network must be rewired to handle changes in application workload; manually configured SLAs and security are prone to errors; potential shortages of IP addresses.
Not optimized for new functions: most modern data center traffic is east-west; an oversubscribed, lossy network requires a separate storage infrastructure; increasing use of virtualization means significantly more servers, which can be dynamically created, modified, or destroyed; the desire to migrate VMs for high availability and better utilization; multi-tenancy for cloud computing and other applications.
High operating and capital expense: too many protocol-specific network types; too many network, service, and storage managers; too many discrete, poorly integrated components lowers reliability; too much energy consumption and high cooling costs; sprawl of lightly utilized servers and storage; redundant networks required to ensure disjoint multi-pathing for high availability; moving VMs to increase utilization is limited by Layer 2 domain boundaries, low-bandwidth links, and manual management issues; significant expense just to maintain the current network, without deploying new resources.
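To make the tier and oversubscription concerns concrete, here is a minimal illustrative sketch (not from the original slides; the port counts, link speeds, and per-tier latency midpoint are assumptions chosen for illustration) that computes an access switch's oversubscription ratio and the cumulative latency added by a multi-tier design.

# Illustrative sketch: oversubscription and cumulative tier latency.
# All numbers below are assumptions for illustration, not figures from the talk.

def oversubscription_ratio(downlink_count, downlink_gbps, uplink_count, uplink_gbps):
    """Ratio of aggregate downlink bandwidth to aggregate uplink bandwidth."""
    return (downlink_count * downlink_gbps) / (uplink_count * uplink_gbps)

def added_latency_us(tiers, per_tier_us=15.0):
    """One-way latency added by traversing 'tiers' switching tiers,
    using the 10-20 us per tier range cited on the slide (midpoint assumed)."""
    return tiers * per_tier_us

# Example: a 48-port 10G access switch with 4 x 40G uplinks (assumed configuration)
ratio = oversubscription_ratio(48, 10, 4, 40)   # 480G down vs. 160G up -> 3.0
print(f"Access-layer oversubscription: {ratio:.1f}:1")

# A 4-tier path (access -> aggregation -> core -> aggregation/access on the far side)
print(f"Latency added by 4 tiers: {added_latency_us(4):.0f} us one-way")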

Open Datacenter with an Interoperable Network (ODIN)
(Diagram) An MPLS/VPLS-enabled WAN connects over 40-100 Gbps links to a flattened core layer with pooled, virtual appliances, link aggregation, and secure VLANs, under an OpenFlow (ONF) controller. 10 Gbps links feed a Layer 2 TOR/access layer with TRILL, stacked switches, and lossless Ethernet, plus embedded blade switches and blade server clusters with embedded virtual switches. FCoE storage and an FCoE gateway connect to the existing SAN over 8 Gbps or higher Fibre Channel links and 10 Gbps iSCSI/NAS links.

Modern Data Center Networks: A.O. (After ODIN)
Modern data centers are characterized by:
2-tier designs (with embedded blade switches and virtual switches within the servers), giving lower latency and better performance
Cost-effective scale-out to 1000s of physical ports and 10,000 VMs (with lower TCO), scaling without massive oversubscription
Fewer moving parts, giving higher availability and lower energy costs
Simplified cabling within and between racks
Enabled as an on-ramp for cloud computing, integrated PoDs, and end-to-end solutions
Optimized for east-west traffic flow with efficient traffic forwarding
Large Layer 2 domains and networks that enable VM mobility across different physical servers
A VM-aware fabric: network state resides in the vswitch, with automated configuration and migration of port profiles, and options to move VMs either through the hypervisor vswitch or an external switch
Wire-once topologies with virtual, software-defined overlay networks
Pools of service appliances shared across multi-tenant environments
Arbitrary topologies (not constrained by STP) with numerous redundant paths, higher bandwidth utilization, switch stacking, and link aggregation
Options to converge the SAN (and other RDMA networks) into a common fabric with gateways to the existing SAN, multi-hop FCoE, disjoint fabric paths, and other features
Management functions that are centralized, moving into the server, and require fewer instances, with less manual intervention and more automation; less opportunity for human error in security and other configurations
Based on open industry standards from the IEEE, IETF, ONF, and other groups, implemented by multiple vendors (lower TCO per a Gartner Group report)
ODIN provides a data center network reference design based on open standards.

Broad Ecosystem of Support for ODIN
"In order to contain both capital and operating expense, this network transformation should be based on open industry standards. ODIN facilitates the deployment of new technologies."
InterOp Webinar: How to Prepare Your Infrastructure for the Cloud Using Open Standards
"ODIN is a great example of how we need to maintain openness and interoperability"; "preferred approach to solving Big Data and network bottleneck issues"; "one of the fundamental change agents in the networking industry associated with encouraging creativity"; "a nearly ideal approach is on its way to becoming industry best practice for transforming data centers"; "the missing piece in the cloud computing puzzle"; "We are proud to work with industry leaders like IBM"
National Science Foundation interop lab & Wall St. client engagement

How is the data center evolving?
(Diagram) A single, scalable fabric with seamless elasticity connects compute racks and storage, each with embedded vswitches, under an integrated platform manager and SDN stack; virtual and physical views are shown side by side.
Integrated: integrated Platform Manager & SDN stack; virtualized storage pool, virtualized network resources (network hypervisor), and virtualized compute pool.
Automated: (6) virtual machine network state automation; (7) multi-tenant aware network hypervisor; (8) self-contained, expandable infrastructure; (9) Platform Manager & Software Defined Networking stack.
Optimized: (1) fabric managed as a single switch; (2) converged fabric; (3) scalable fabric; (4) flexible bandwidth; (5) optimized traffic.

From A. Benner, "Optical interconnect opportunities in supercomputers and high end computing," OFC 2012
The cost curve for copper links is divided into several distinctly shaped regions, while the cost curve for optical links is essentially flat with a higher startup cost. At short distances copper is less expensive; at long distances optics is less expensive.

From A. Benner, "Optical interconnect opportunities in supercomputers and high end computing," OFC 2012
10G LOM tends to lower the entry-point cost of optics (and thus moves the crossover length to shorter distances).

From A. Benner, "Optical interconnect opportunities in supercomputers and high end computing," OFC 2012
Copper and optics costs decline at about the same rate, so the crossover length at a given bit rate remains constant. At higher bit rates, the crossover point moves to shorter distances. At 2.5 Gbit/s, copper works up to 2-3 meters (within a rack, blade chassis, or PoD), which covers about 75% of the traffic.
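As a purely illustrative way to think about the copper/optics crossover described in these charts, the sketch below models copper link cost as growing with reach and bit rate while optical cost is flat apart from a fixed startup premium, then solves for the crossover length. All cost coefficients are invented placeholders, not data from Benner's charts; the point is only that a higher bit rate pushes the crossover to shorter distances.

# Illustrative crossover-length model; coefficients are invented, not from the OFC 2012 data.

def copper_cost(length_m, gbps, base=1.0, per_meter_per_gbps=0.4):
    # Copper link cost rises with both reach and bit rate (thicker gauge, equalization, retimers).
    return base + per_meter_per_gbps * length_m * gbps

def optical_cost(length_m, gbps, startup=8.0, per_meter=0.05):
    # Optical link cost is dominated by the transceivers; fiber adds little per meter.
    return startup + per_meter * length_m

def crossover_length(gbps, max_m=100, step=0.1):
    # First length at which optics becomes cheaper than copper.
    length = 0.0
    while length <= max_m:
        if optical_cost(length, gbps) < copper_cost(length, gbps):
            return length
        length += step
    return None

for rate in (2.5, 10, 25):
    print(f"{rate:5.1f} Gbps: crossover near {crossover_length(rate):.1f} m (with these assumed coefficients)")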

A conventional data center may contain thousands of servers, each capable of hosting tens of virtual machines using current technology; the number of network endpoints can easily reach tens to hundreds of thousands.
(Chart) Data center square footage, 1999-2011: data centers are growing in scale... and in power. In 2001 there were only 1 or 2 data centers in the world that required 10 MW of power; in 2011 there are dozens, and new data centers in the 60-65 MW range are under development.
There are unique problems in a highly virtualized environment: not just massive numbers of network endpoints, but the ability to dynamically create, modify, and destroy VMs at any time, along with multi-tenancy and a large number of isolated, independent sub-networks. High performance networks become very important: high bandwidth links enable flexible placement of workloads and data for higher server utilization, a direct benefit to commercial enterprises.

IBM Raleigh Leadership Data Center
Reduces energy costs by $1.8M USD/year (over 50%). Modern data centers require industrial-scale equipment, and the benefits of server virtualization and consolidation become clear.

Energy for Data Transport is a Major Issue for Exascale Systems
The energy involved in data transport dominates the energy used in compute cycles. The energy needed per floating point operation is about 0.1 pJ/bit; data transport on a card (3-10 inches) is 200x higher, and data transport across a large system is 2000x higher. Optics is a more power-efficient technology.
(Chart) Power in mW/Gbps (0-300 scale) for a 10G copper PHY, active copper, and VCSEL/MMF optics.
Assuming $1M per MWatt per year, the total lifetime cost of ownership for optical solutions is significantly less expensive.
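A rough, hedged back-of-the-envelope sketch of that cost argument follows. The link count and link speed are assumptions chosen only for illustration, and the per-Gbps power figures are approximate readings of the chart scale and the later requirements table; the point is how the $1M-per-MWatt-per-year rule of thumb translates interconnect power into operating cost.

# Back-of-the-envelope interconnect power cost; all inputs are illustrative assumptions.

COST_PER_MW_YEAR = 1_000_000      # $1M per MWatt per year, as cited on the slide
LINKS = 100_000                   # assumed number of links in a large system
GBPS_PER_LINK = 10                # assumed link speed

def yearly_cost(mw_per_gbps):
    """Yearly energy cost of the interconnect for a given efficiency (mW/Gbps)."""
    watts = LINKS * GBPS_PER_LINK * mw_per_gbps / 1000.0   # mW -> W
    megawatts = watts / 1_000_000
    return megawatts * COST_PER_MW_YEAR

copper = yearly_cost(250.0)        # assumed, near the top of the chart's 0-300 mW/Gbps scale
optics_today = yearly_cost(25.0)   # roughly the 2012 optical figure from the requirements table
optics_future = yearly_cost(1.0)   # the 2018-2020 optical target from the requirements table
print(f"Copper-class links:         ${copper:,.0f}/year")
print(f"Current optical links:      ${optics_today:,.0f}/year")
print(f"Future optical links:       ${optics_future:,.0f}/year")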

OpenFlow and Software-Defined Networking
(Diagram) A network operating system running an OpenFlow controller sits between management tools and the switches: it receives events (flow, port, host, link, and packet events) and flow statistics, applies security policy enforcement, QoS traffic management, and multi-tenant virtualization, and installs flow rules through the native device interface. Above it, an abstract network interface exposes abstract network models (a single large switch, an annotated network graph, a tenant VM connectivity model) to control programs and other controllers.
OpenFlow provides an industry-standard API and protocol to program forwarding tables in switches, which is usually done by the co-resident control processor and software; alternatives exist, such as e/c-DFP and other vendor-proprietary control protocols (e.g., Arista EOS).
OpenFlow provides a base-layer mechanism for the network operating system. Much more is needed beyond OpenFlow to complete the NOS: state distribution and dissemination, and other types of configuration beyond ACL state (standard forwarding, VLANs, port profiles, etc.).
The NOS layer supports more functions: it creates the global network view, distributes and manages network state, and configures the physical network, including vswitches.
The ANI maps control and configuration goals expressed on the abstract view onto the global network view; the models provide a simplified view of the network that makes it easy to specify goals via control programs.
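To illustrate the match/action model that OpenFlow exposes, here is a minimal, library-free conceptual sketch (not the OpenFlow wire protocol or any specific controller's API) of a controller installing prioritized flow rules into a switch's forwarding table and the switch looking them up per packet, with table misses punted to the controller.

# Conceptual sketch of an OpenFlow-style match/action flow table.
# This models the idea only; it is not the OpenFlow protocol or a real controller API.

from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict            # e.g. {"in_port": 1, "dst_mac": "00:23:45:67:00:04"}
    actions: list          # e.g. ["output:3"] or ["drop"]
    priority: int = 0

@dataclass
class Switch:
    name: str
    table: list = field(default_factory=list)

    def install_rule(self, rule):
        # The controller pushes rules; the highest-priority match wins on lookup.
        self.table.append(rule)
        self.table.sort(key=lambda r: r.priority, reverse=True)

    def forward(self, packet):
        for rule in self.table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]   # table miss: punt to the controller

sw = Switch("tor-1")
sw.install_rule(FlowRule({"dst_mac": "00:23:45:67:00:04"}, ["output:3"], priority=10))
sw.install_rule(FlowRule({"in_port": 9}, ["drop"], priority=100))

print(sw.forward({"in_port": 1, "dst_mac": "00:23:45:67:00:04"}))  # ['output:3']
print(sw.forward({"in_port": 9, "dst_mac": "00:23:45:67:00:04"}))  # ['drop'] (higher priority)
print(sw.forward({"in_port": 2, "dst_mac": "ff:ff:ff:ff:ff:ff"}))  # ['send_to_controller']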

Multi-Tenant with Overlapping Address Spaces
(Diagram) Two tenant overlay networks (a "Pepsi" overlay and a "Coke" overlay) run over the same DOVE network across sites; each tenant has its own HTTP, application, and database servers plus virtual appliances, and the two tenants reuse overlapping IP addresses (e.g., 10.0.3.1, 10.0.5.7) for their virtual machines. Note that vswitches and vappliances are not fully shown.
Multi-tenant, cloud environments require multiple IP address spaces within the same server, within a data center, and across data centers. Layer-3 Distributed Overlay Virtual Ethernet (DOVE) switches enable multi-tenancy all the way into the server/hypervisor, with overlapping IP address spaces for the virtual machines.
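The sketch below is a simplified, hypothetical illustration of why an overlay keyed by a tenant identifier permits overlapping addresses; it is not the DOVE implementation, and the tenant names, addresses, and map contents are invented for the example.

# Simplified overlay lookup keyed by (tenant, IP); not the actual DOVE data path.

overlay_map = {
    # (tenant_id, overlay_ip) -> (physical_host, vm_mac)
    ("pepsi", "10.0.3.1"): ("host-A", "00:23:45:67:00:01"),
    ("coke",  "10.0.3.1"): ("host-B", "00:23:45:67:00:25"),   # same IP, different tenant
}

def encapsulate(tenant_id, src_ip, dst_ip, payload):
    """Wrap a tenant packet with an overlay header so overlapping IPs never collide
    in the underlay: the tenant ID disambiguates the destination lookup."""
    phys_host, dst_mac = overlay_map[(tenant_id, dst_ip)]
    return {
        "underlay_dst": phys_host,       # routed over the ordinary Layer 3 underlay
        "tenant_id": tenant_id,          # e.g. a virtual network ID in the encapsulation header
        "inner": {"src": src_ip, "dst": dst_ip, "dst_mac": dst_mac, "payload": payload},
    }

print(encapsulate("pepsi", "10.0.5.7", "10.0.3.1", "GET /"))
print(encapsulate("coke",  "10.0.5.4", "10.0.3.1", "SELECT 1"))
# Both tenants address 10.0.3.1, but the frames are delivered to different hosts/VMs.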

Data Center Interconnect HPC

Historical trend (top500.org and green500.org; see also A. Gana & J. Kash, OFC 2011):
1 PetaFlop: 72 BG/P racks, 2.5 MW, $150M
10 PetaFlop: 100 BG/P racks, 5 MW, $225M
ExaFlop system (1 PetaFlop = 1/3 of a rack): 20 MW, $500M
Roughly 10x the performance, 4 years later, costs 1.5x more dollars and consumes 2x more power.
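As a quick check on that 10x/1.5x/2x trend and what it implies for an exaflop machine, here is a small illustrative calculation. The per-step factors come from the slide; applying them repeatedly from a 1 PFlop baseline, and the assumed baseline year, are illustrative assumptions rather than a projection from the talk.

# Illustrative extrapolation of the 10x-performance / 1.5x-cost / 2x-power trend.
# The per-step factors come from the slide; repeating them and the baseline year are assumptions.

pflops, cost_m, power_mw, year = 1, 150, 2.5, 2008   # 1 PFlop BG/P-class baseline (year assumed)

steps = 3  # 1 PF -> 10 PF -> 100 PF -> 1000 PF (1 ExaFlop)
for _ in range(steps):
    pflops *= 10
    cost_m *= 1.5
    power_mw *= 2
    year += 4

print(f"~{pflops} PFlop around {year}: ~${cost_m:.0f}M, ~{power_mw:.0f} MW")
# With these factors an exaflop-class system lands near $500M and 20 MW,
# consistent with the ExaFlop figures on the slide.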

Future System Interface Requirements

Metric                                  2012          2018-2020
Peak performance (PFlop)                10            1000 (1 ExaFlop)
Memory bandwidth (TB/s)                 0.05          3-15 (local 3D packaging)
Bidirectional I/O bandwidth (TB/s)      0.02          5 (potential optical solution)
Number of I/O pins (at 8 Gbps)          80            20,000 (potential optical solution)
Number of 10G optical channels          2 x 10^6      16 x 10^8
Optical power consumption               25 mW/Gbps    1 mW/Gbps
Optical cost per Tbps                   $1000         $25 (a higher bit rate per channel may reduce this somewhat)

Over time, increasing compute performance and bandwidth requirements will cause the optical boundary to move closer to the processor. IBM Research has active programs in a variety of optical interconnect technologies: silicon photonics, optical PCBs, and advanced packaging, transceivers, and optical vias.
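One plausible way to relate the I/O bandwidth and pin-count rows of this table is sketched below. It is an inference, not something stated on the slide: it assumes the 5 TB/s figure is per direction, each lane runs at 8 Gbps, and signaling is differential (2 pins per lane).

# Hedged arithmetic relating the 5 TB/s I/O row to the ~20,000-pin row.
# Assumptions (not from the slide): 5 TB/s per direction, 8 Gbps lanes, differential signaling.

TBps_per_direction = 5
lane_gbps = 8
pins_per_lane = 2                               # differential pair assumed

bits_per_s = TBps_per_direction * 8e12          # TB/s -> bits/s, per direction
lanes_per_direction = bits_per_s / (lane_gbps * 1e9)
total_pins = lanes_per_direction * 2 * pins_per_lane   # both directions, differential

print(f"Lanes per direction: {lanes_per_direction:,.0f}")
print(f"Total pins: {total_pins:,.0f}")   # ~20,000 under these assumptions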

Data Center Interconnect HPC Research

Y. Vlasov et al, multiple papers can be downloaded at: http://www.ibm.research.com/photonics

I/O Bandwidth for various levels of packaging is critical

Optical Printed Circuit Boards (in collaboration with Varioprint)
Electrical SMA connector interfaces, optical connector interfaces, and embedded waveguides. Waveguide processing is done on large panels (305 mm x 460 mm), yielding a finished optical board with optical and mechanical interfaces. The board stack consists of a top FR4 stack (with electrical lines), a polymer waveguide layer with 12 waveguides, a connector interface with alignment markers, and a bottom FR4 stack.
References: F. E. Doany et al., ECTC 2010, pp. 58-65; D. Jubin et al., Proc. SPIE, Vol. 7607, 76070K (2010); F. E. Doany et al., IEEE Trans. Adv. Packag., Vol. 32, No. 2, pp. 345-359, May 2009.

Conclusions
Change is accelerating in enterprise data center networks: rising energy costs, under-utilized servers, limited scalability, and dynamic workload management create the need to automate, integrate, and optimize data center networks.
Over time, the boundary for optical interconnect is moving closer to the server. Higher data rates accelerate this trend, and virtualization, integrated blade switches, and SDN increase the need for flexible, high-bandwidth links.
HPC performance is growing at an exponential rate, driving toward exascale computing in the 2018-2020 timeframe. Increasing aggregate system performance will demand more optical links in the future; bandwidth is steadily increasing through higher channel rates and parallel channels.
State-of-the-art electronic packages are engineering marvels, but they can be complex, difficult to test, and expensive. Integration of high-bandwidth technologies into first- and second-level packaging will be key, and density requirements for connectors become increasingly important as the number of links per system grows. Single-card systems may have integrated on-board optics within 5 years.
There are many enabling technologies on the horizon, including silicon photonics, high-speed VCSELs and transceivers, optical vias and PCBs, and polymer waveguides (to name just a few).

Thank You!