ANI Network Testbed Update




ANI Network Testbed Update
Brian Tierney, ESnet
Joint Techs, Columbus OH, July 2010

ANI: Advanced Network Initiative
- Project start date: September 2009
- Funded by ARRA for 3 years
- Designed, built, and operated by ESnet staff
- 1 of 3 ARRA Advanced Network Initiative (ANI) projects in the DOE:
  - ANI 100G prototype network
  - ANI network testbeds
  - 4 ANI research projects

DOE's Advanced Networking Initiative
- ANI project scope ($66.8M):
  - Build an end-to-end 100 Gbps prototype network between DOE supercomputers and MANLAN
  - Build a network testbed facility for researchers and industry
  - Includes $5M in network research that will use the testbed facility
- Magellan: a separate DOE-funded ($32.8M) nationwide scientific mid-range distributed computing and data analysis testbed to explore whether cloud computing can help meet the overwhelming demand for scientific computing
  - NERSC/LBNL and ALCF/ANL configured with multiple tens of teraflops and multiple petabytes of storage, as well as appropriate cloud software

ANI Project Goals
- Prototype network: accelerate the deployment of 100 Gbps technologies
  - Build a persistent infrastructure that will transition to the production network ~2012
  - A key step toward DOE's vision of a 1-terabit network linking DOE supercomputing centers and experimental facilities
  - Not for production traffic
- Testbed: build an experimental network research environment at sufficient scale to usefully test experimental approaches to next-generation networks
  - Funded for 3 years, then rolls into the ESnet program
  - Breakable, reservable, configurable, resettable
  - Enables R&D at speeds up to 100 Gbps

Testbed Overview
- Progression:
  - Start out as a tabletop testbed
  - Move to the Long Island MAN when dark fiber is available
  - Extend to the WAN when 100 Gbps is available
- Capabilities:
  - Support for end-to-end networking, middleware, and application experiments, including interoperability testing of multi-vendor 100 Gbps network components
  - Researchers get root access to all devices
  - Virtual machine technology supports custom environments
  - Detailed monitoring, so researchers will have access to all possible monitoring data

The Testbed Provides
- A rapidly reconfigurable high-performance network research environment that will enable researchers to accelerate the development and deployment of 100G networking through prototyping, testing, and validation of advanced networking concepts.
- An experimental network environment for vendors, ISPs, and carriers to carry out the interoperability tests necessary to implement end-to-end heterogeneous networking components (currently at layer 2/3 only).
- Support for prototyping middleware and software stacks to enable the development and testing of 100G science applications.
- A network test environment where reproducible tests can be run.
- An experimental network environment that eliminates the need for network researchers to obtain funding to build their own network.

Sample Projects
Examples of the types of projects the current testbed will support include:
- Path computation algorithms that incorporate information about hybrid layer 1, 2, and 3 paths, and support 'cut-through' routing
- New transport protocols for high-speed networks
- Protection and recovery algorithms
- Automatic classification of large bulk data flows
- New routing protocols
- New network management techniques
- Novel packet processing algorithms
- High-throughput middleware and applications research
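One of the listed project types, automatic classification of large bulk data flows, amounts to separating "elephant" flows from everything else. As an illustrative sketch only (not part of the ANI materials): the flow-record format and the 1 GB threshold below are assumptions.

```python
# Illustrative sketch (not from the ANI slides): classify bulk "elephant"
# flows by total bytes transferred. Records and threshold are assumed.

ELEPHANT_BYTES = 1 * 10**9  # assumed threshold: flows over 1 GB are "bulk"

def classify_flows(flow_records):
    """Split flows into elephants (bulk data) and mice (everything else).

    flow_records: iterable of (src, dst, bytes_transferred) tuples.
    """
    elephants, mice = [], []
    for src, dst, nbytes in flow_records:
        target = elephants if nbytes >= ELEPHANT_BYTES else mice
        target.append((src, dst, nbytes))
    return elephants, mice

flows = [
    ("host-a", "host-b", 50 * 10**9),  # a bulk science transfer
    ("host-c", "host-d", 12_000),      # interactive traffic
]
elephants, mice = classify_flows(flows)
```

A real classifier would work from sampled flow statistics (e.g. NetFlow-style records) rather than complete byte counts, but the threshold decision is the same.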

Network Testbed Components
The tabletop network testbed consists of:
- 6 DWDM devices (layer 0-1)
- 4 layer 2 switches supporting OpenFlow
- 2 layer 3 routers
- Test and measurement hosts:
  - Virtual-machine-based test environment
  - 4x10G test hosts initially; eventually 40G and 100G from the Acadia 100G NIC project
This configuration will evolve over time.
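The layer 2 switches speak OpenFlow, whose wire protocol (v1.0, current at the time of these slides) begins every message with a fixed 8-byte big-endian header: version, message type, length, and transaction id. A minimal sketch of packing the HELLO message a controller and switch exchange on connection setup:

```python
import struct

# OpenFlow 1.0 fixed header: version (1 byte), type (1 byte),
# length (2 bytes), transaction id (4 bytes), all big-endian.
OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0
OFP_HEADER_LEN = 8

def ofp_hello(xid=1):
    """Pack an OpenFlow 1.0 HELLO message (header only, no body)."""
    return struct.pack("!BBHI", OFP_VERSION_1_0, OFPT_HELLO, OFP_HEADER_LEN, xid)

msg = ofp_hello(xid=42)
# msg is exactly 8 bytes; in a real exchange it would be written to the
# TCP connection between controller and switch.
```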

Tabletop Testbed Configuration
[Diagram: tabletop topology connecting NEC OpenFlow switches (openflow-1 through openflow-4), DWDM devices (north-wdm1/2, south-wdm1/2, east-wdm1/2), routers (north-rt1, south-rt1), a management switch, and a gateway router, over WDM, 10GE (including 2x and 3x 10GE), and 1GE links, with a gateway to the Internet]
Host inventory:
- 2 application hosts
- 2 monitoring hosts
- 4 IO test hosts
- 2 file server/WebDAV hosts
- 1 ssh gateway host
Comments:
- App Host: can be used for any researcher application, control plane software, etc.; can support up to 8 simultaneous VMs
- File Server: disk space for storing VMs, monitoring data, etc.; NFS mounted everywhere, WebDAV to the Internet
- IO Tester: capable of 15G disk-to-disk or 35G memory-to-memory

Sample Configuration: Multi-Domain Multi-Layer Protection Testing
[Diagram: three testbed domains (North, East, South), each with WDM devices (north-wdm1/2, east-wdm1/2, south-wdm1/2), an OpenFlow switch, and IO testers, used to test inter-domain optical protection schemes and inter-domain higher-layer (> 1) protection schemes]

Phase 2: move to Long Island MAN

LIMAN ANI Testbed Configuration (40G aggregate)
[Diagram: testbed and production connectivity across the Long Island MAN sites AofA, NEWY, and BNL over the 100G prototype network; sites host MX80 routers plus monitoring, app, file server, and IO tester hosts, with an ssh gateway to the Internet; links are 2x10G Infinera WDM, 10GE, and 1GE]

Testbed will connect to the Nationwide 100G Prototype Network
[Map: 100G prototype network linking Sunnyvale, Chicago, NYC, and Nashville, reaching Magellan at NERSC, Magellan at ALCF/ANL, and OLCF/ORNL]

Testbed Status
- Tabletop testbed available for researchers to log in as of late June
- Experiments are just starting: researchers are logging in, configuring VMs, etc.
- Basic documentation mostly written
- Testbed components can be reserved using Google Calendar
- A few remaining tasks (e.g., router configuration save/restore)
- For Phase 2: the RFP for the Long Island dark fiber ring has been signed and construction has started

Current Projects Using the Testbed

Archstone (PI: Tom Lehman, ISI)
- Summary: dynamically create slices of resources across multiple network layers in a vertically integrated manner, so as to generate virtual network topologies. This requires a highly advanced path computation element which extends the concept of simple path computation to multi-layer, multi-dimensional topologies.
- Expected results: the project will utilize the ANI Testbed to determine design requirements, test alternatives, and evaluate performance of the developed technologies.

FlowBench (PI: Prasad Calyam, Ohio Supercomputer Center)
- Summary: set up different physical topologies in the testbed using resources such as the NEC OpenFlow switches, app hosts, and monitoring hosts. On these topologies, the project will experiment with OpenFlow and benchmark the performance of GridFTP file transfers with enhanced TCP/UDP variants.
- Expected results: the testbed will be used to confirm that the developed technologies operate as desired with production network equipment, topologies, and configurations.

HNTES (PI: Malathi Veeraraghavan, University of Virginia)
- Summary: Hybrid Network Traffic Engineering Software (HNTES) leverages both an IP datagram network and a high-speed optical dynamic circuit network to best serve users' data communication needs.
- Expected results: experiments on the testbed will determine whether flows can be redirected on-the-fly to newly established optical circuits without impacting TCP behavior or user-perceived performance.
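The HNTES idea, pinning identified bulk flows to a dynamic circuit while all other traffic stays on the default IP path, can be sketched as a small forwarding decision table. This is an illustrative sketch under assumed names, not the actual HNTES implementation:

```python
# Illustrative sketch of the HNTES concept (not the real implementation):
# flows identified as large bulk transfers are redirected on-the-fly to an
# optical circuit; everything else stays on the IP datagram path.

class HybridForwarder:
    def __init__(self):
        # (src, dst) pairs currently mapped onto a dynamic circuit
        self.circuit_flows = set()

    def redirect_to_circuit(self, src, dst):
        """Called once a flow is identified as a large bulk transfer."""
        self.circuit_flows.add((src, dst))

    def path_for(self, src, dst):
        """Return which network carries this flow's packets."""
        if (src, dst) in self.circuit_flows:
            return "optical-circuit"
        return "ip-datagram"

fwd = HybridForwarder()
before = fwd.path_for("dtn1", "dtn2")   # flow starts on the IP path
fwd.redirect_to_circuit("dtn1", "dtn2")  # on-the-fly redirect
after = fwd.path_for("dtn1", "dtn2")
```

The hard part HNTES studies on the testbed is exactly what this sketch omits: doing the redirect mid-flow without disturbing TCP.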

Testbed Access
- The proposal process to gain access is described at: https://sites.google.com/a/lbl.gov/ani-testbed/
- Currently there are 3 DOE-funded projects that have access to the testbed; 4-5 more are waiting for 40G capability
- The testbed will be available to anyone: DOE researchers, other government agencies, industry
- Must submit a short proposal to the testbed review committee
  - The committee will be made up of members from the R&E community and industry
- Our initial goal is to accept at least five proposals every review cycle

Acceptance Criteria
The criteria for selecting proposals:
- Quality of the proposed research, which includes:
  - A clear, focused research topic
  - A creative and original concept
  - A test plan
  - Qualifications of the team
- Potential impact of the research on the field of networking and the DOE SC mission
- Readiness: is the project ready to run experiments right away?
- Value of ANI testbed resources to the research
- Level of support required from ESnet staff

Testbed Proposal Timeline
- October 1, 2010: 1st round of proposals due
- January 10, 2011: Testbed awards announced
- February 1, 2011: Testbed available for use
- April 1, 2011: 2nd round of proposals due
- July 1, 2011: 2nd round awards announced
- October 1, 2011: 3rd round of proposals due
- January 10, 2012: 3rd round awards announced
- April 1, 2012: 4th round of proposals due
- July 1, 2012: 4th round awards announced

New ESnet Performance Testing Service (not related to the Testbed!)

New ESnet Diagnostic Tool: 10 Gbps IO Tester
- 16-disk RAID array: capable of > 10 Gbps host-to-host, disk-to-disk
- Runs anonymous read-only GridFTP
- Accessible to anyone on any R&E network worldwide
- 1 deployed now (west coast, USA); 2 more (midwest and east coast) by end of summer
- Already used to debug many problems
- Will soon be registered in the perfSONAR gLS
- See: http://fasterdata.es.net/disk_pt.html
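The "16-disk RAID capable of > 10 Gbps disk-to-disk" claim is easy to sanity-check with back-of-envelope arithmetic: spreading 10 Gbps across 16 spindles needs only about 78 MB/s per disk, well within the sequential rate of 2010-era SATA drives. The numbers below are that check, not figures from the slides:

```python
# Back-of-envelope check (not from the slides): per-disk rate needed for a
# 16-disk array to sustain an aggregate 10 Gbps disk-to-disk stream.

AGGREGATE_GBPS = 10   # target aggregate rate in gigabits per second
NUM_DISKS = 16
BITS_PER_BYTE = 8

aggregate_MBps = AGGREGATE_GBPS * 1000 / BITS_PER_BYTE  # 1250 MB/s total
per_disk_MBps = aggregate_MBps / NUM_DISKS              # ~78 MB/s per disk
```

In practice RAID and filesystem overheads push the per-disk requirement somewhat higher, which is why the array is sized with headroom above the 10 Gbps target.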

More Information
- http://sites.google.com/a/lbl.gov/ani-testbed/
- http://100gbs.lbl.gov/
- Email: ani-testbed-proposal@es.net, BLTierney@es.net
Let us know what we could add or change to make the testbed more useful to you!

Extra Slides