Report on International Data Exchange Requirements


Authors: Jill Gemmill, Geoffrey Fox, Stephen Goff, Sara Graves, Mike L. Norman, Beth Plale, Brian Tierney

Report Advisory Committee:
Jim Bottum, Clemson University
Bill Clebsch, Stanford University
Cliff Jacobs, Cliff Jacobs LLC
Geoffrey Fox, Indiana University
Stephen Goff, University of Arizona
Sara Graves, University of Alabama-Huntsville
Steve Huter, University of Oregon
Miron Livny, University of Wisconsin
Marla Meehl, UCAR
Ed Moynihan, Internet2
Mike Norman, San Diego Supercomputer Center
Beth Plale, Indiana University
Brian Tierney, ESnet

NSF Support: ACI. December

1. EXECUTIVE SUMMARY

The National Science Foundation (NSF) contributes to the support of U.S. international science policy and provides core cyberinfrastructure through the International Research Network Connections (IRNC) program. The annual investment in 2014 is estimated at $7M/year and provides international network services supporting U.S. research, scholarship, and collaboration. Given the rapid growth of data-driven science, new instrumentation, and multi-disciplinary collaboration, NSF funded a study to estimate the flow of data over the international networks in the year 2020. This estimate will inform the likely scale of network requirements and the level of investment needed to support international data exchange over the next five years.

FIGURE 1. INTERACTIVE WEB SITE

Methods used to construct a likely scenario of future needs included in-person interviews with IRNC providers, participation in science domain conferences, review of available network utilization measurements, comparison to a commodity Internet prediction study,

a survey of NSF-funded principal investigators, input from science domain experts, on-line search, and advice from the study advisory committee. In addition to this written report, the study also produced the Catalog of International Big Data Science, an interactive and updatable website (Figure 1).

1.1. KEY OBSERVATIONS

1) The IRNC networks are demonstrably critical for scientific research, education, and collaboration.
- NSF's $7M annual investment is highly leveraged, by a factor of 10 to 15, due to contributions of other nations to the global R&E network infrastructure. (Section 3)
- The IRNC networks are distinguished from the commodity Internet in having an extremely large data-driven traffic flow; the commodity Internet is 79-90% video driven. (Section 5.3.2)
- 56% of NSF-funded scientists (across all disciplines funded by NSF) participate in international collaborations. (Appendix C, Question 1)
- IRNC network capacity (bandwidth available) has been keeping pace with national R&E backbone speeds and scientific requirements. (Sections and 5.3.2)
- IRNC has demonstrated throughput (sustained gigabits per second) and quality of service for certain applications that far exceed what is possible on the commodity Internet. (Sections 3.2 and 3.3)

2) IRNC traffic in 2014 will triple by 2018.
- From 2009 to 2013, IRNC traffic is estimated to have grown by a factor of 5.7. This growth is similar to, or slightly higher than, the growth of the global Internet over the same period. (Figure 11 and Section 5.3.2)
  o The growth rate beyond 2018 may be even greater as the Internet of Things (e.g., sensor networks) develops. The number of devices connected to IP networks will be twice as high as the global population by 2018, accompanied by a growing number of machine-to-machine (M2M) applications. (Section 5.3.3)
  o A review of past Internet traffic data, as well as technology development in general, indicates that the growth trend has been and will remain exponential.
(Section 5.3.3)
- Known scientific data drivers for the IRNC in 2020 will include: (Sections and 5.3.1)
  o Single-site instruments serving:
    a) the globe's astronomers and astrophysicists, such as the Large Synoptic Survey Telescope (LSST) [1], Atacama Large Millimeter/submillimeter

Array (ALMA) [2], Square Kilometer Array (SKA) [3], and James Webb Space Telescope (JWST) [4];
    b) the globe's physicists, such as the Large Hadron Collider (LHC) [5], International Thermonuclear Experimental Reactor (ITER) [6], and Belle detector collaborations [7].
  o Thousands of highly distributed genomics and bioinformatics sites, including: (Section 5.4.2)
    a) high-throughput sequencing;
    b) medical images;
    c) integrated Light Detection and Ranging (LIDAR) [8].
    Each site will produce as much data as a single-site telescope or detector. As these communities gain expertise in creating, analyzing, and sharing such data, the number of extremely large data sets transiting networks will increase by three orders of magnitude (10^3).
  o Climate data, aggregated data sources (including sensor networks), and bottom-up collaborations will drive increased data for the global geoscience community. (Sections and 5.4.3)
  o CISE researchers will be working with petabyte-sized research data sets to explore improved search/recommendation algorithms, deep learning and social media, machine-to-machine (M2M) applications, and military and health applications. (Section 5.4.4)

3) The IRNC program benefits from a collaborative set of network providers, but could use better organization to maximize these benefits. (Section 5.1.5)
- A strength of this approach is the ability to try multiple approaches to a problem at the same time and to develop solutions that cross boundaries.
- Limitations of this approach are added complexity in the absence of a central network operations center (NOC), inconsistent reporting on activities and network measurements, and the absence of global reports.

4) There is limited network monitoring and measurement available among IRNC providers, which makes it very difficult to assess link utilization beyond total bandwidth used.
(Section 5.2.2)
- There is high interest at NSF and among science domain researchers, among others, in use of the network by discipline, by country of origin or destination, or by type of application. However, such data is in general not readily available.
- A host of perceived legal, political, and cultural issues makes it difficult to address the lack of monitoring. To date that discussion has been held mostly among network providers.
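Observation 4 notes that total bandwidth used is often the only measure recorded. Even that figure is a derived quantity: utilization is computed by differencing cumulative interface byte counters (e.g., SNMP octet counters) over a sampling interval. A minimal sketch, using made-up counter values rather than real IRNC measurements:

```python
def link_utilization(bytes_t0, bytes_t1, interval_s, capacity_bps):
    """Percent utilization from two cumulative octet-counter samples.

    bytes_t0, bytes_t1: interface byte counters (e.g. SNMP ifHCInOctets)
    interval_s: seconds between the two samples
    capacity_bps: link capacity in bits per second
    """
    bits_transferred = (bytes_t1 - bytes_t0) * 8
    avg_bps = bits_transferred / interval_s
    return 100.0 * avg_bps / capacity_bps

# Hypothetical example: 1.5 TB moved in 5 minutes on a 100 Gbps link
util = link_utilization(0, 1.5e12, 300, 100e9)
print(f"{util:.1f}% average utilization")  # 40.0% average utilization
```

Note that such averages say nothing about which disciplines, countries, or applications generated the traffic, which is exactly the gap this observation describes.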

5) Most end-to-end performance issues, for the IRNC and high-performance R&E networks (ESnet, Internet2 (I2), Regional Optical Networks (RONs), etc.), are due to problems in the campus network, building, or end-station equipment. (Sections and 5.3.1)

6) Many EPSCoR jurisdictions have fallen behind in their participation in international scientific data exchange. In 2009, EPSCoR jurisdictions had traffic on international R&E links that was comparable to many other regions within the U.S. Current utilization by EPSCoR jurisdictions is noticeably lower, reflecting uneven continued investment in regional and campus infrastructures. (Section and Figure 14)

7) The impact on IRNC activities of a trend toward cloud services and data centers as large content providers is unknown. In the global Internet, traffic patterns have shifted from a hierarchical pattern, in which big providers connect national and regional networks, to a pattern of direct connections between large content providers, data centers, and consumer networks. The impact of this transition on R&E networks is unknown. (Section 5.3.2)

1.2. RECOMMENDATIONS

The purpose of this report is to assist NSF in predicting the amount of scientific data that will be moving across international R&E networks by 2020, and also to discover special characteristics of that data such as time sensitivity or location. In addition, the study was to develop a method for conducting this analysis that could be repeated in the future. The key findings, listed in Section 1.1, show that there will be a continued exponential increase in international scientific data over the next five years. The recommendations below are low-hanging fruit that, if followed, will best capture the opportunities and mitigate the current and future challenges of operating international R&E networks supporting data-driven science.

1) Establish a consulting service or clearing house for information on the IRNC.
- The key service would be to facilitate discussions between scientists and network engineers regarding the characteristics and requirements of their data. The Department of Energy (DoE) and NSF Polar programs do this for their larger science programs. This approach could build a bridge to increase scientific productivity.
- This service could be a function of the new IRNC Network Operations Center (NOC) called for in the NSF RFP. Alternatively, this service could be supported by making experts available on retainer to those who need assistance.
- For large NSF programs, once past pre-proposal selection, NSF could assign this assistance to help at least large- and medium-scale science projects understand and plan for their international network capacity and its impact on their requirements.

- Domain-specific workshops that include scientists, campus network staff, and backbone provider staff could be held to dig into the details of application requirements and to learn from success stories; some of these are expected to result from the NSF CC*NIE and CI*Engineer program awards.

2) Establish a single Network Operations Center for U.S. international network providers so that users and regional operators have a single place to contact.
- This service is likely to be a function of the new IRNC Network Operations Center (NOC) called for in the NSF RFP.
- This service would be a central point of contact for campus, regional, and national R&E network operators and staff to reach international R&E networks, both in the U.S. and elsewhere, regarding troubleshooting, special requirements, and other matters relevant to optimal end-to-end connections.
- This service would report on the status of all international R&E links, such as up/down state, current load, and service announcements.
- This service could provide uniform and comprehensive reporting on network traffic.

3) Establish global and uniform network measurement and reporting among IRNC providers, including more detailed utilization information and historical reporting.
- Move the dialog on this topic out of a network operators-only context.
- Establish or adopt a standard metadata description for network traffic (e.g., the schema developed for the GLORIAD Insight [9] monitoring system) to enable IRNC-wide reporting and achieve common reporting to the extent that policy allows.
- Implement the measurement recommendations made at two or more IRNC network monitoring meetings, i.e., begin with passive Domain Name Service (DNS) [10;11] record reporting. Accessible packet loss reports are also of high interest.

4) Continue to support collaborative coordination among network providers, within the U.S. and with external network partners. Foster organizations that build the community working together across international boundaries.
- Successful examples include the Global Lambda Integrated Facility (GLIF) [12], which works to develop an international optical network infrastructure for science by identifying equipment, connection requirements, and necessary engineering functions and services. Another example is the R&E Open Exchange Points that support bi-lateral peering at all layers of the network.

5) Increase outreach and training for campus network staff in topics such as Border Gateway Protocol (BGP) [13;14], Software Defined Networks (SDN) [15], wide-area networking,

how to debug "last 100 feet" issues, and how to talk with faculty about their application requirements.

6) Address the uneven development of cyberinfrastructure; it is a barrier to collaboration.
- ACI and scientists in EPSCoR jurisdictions should work with the EPSCoR program to address the growing network inequality gap.
- Continue the Network Startup Resource Center, which focuses on training for network operators in the countries whose IP traffic will grow most rapidly from now to 2020: the Middle East and Africa.

7) Focus on the following in engineering IRNC networks:
- Continue to facilitate the transfer of extremely large data sets/streams; an international drop-box may be useful.
- Continue to push the envelope in supporting bi-directional audio/video at the highest resolutions.
- Prepare for the Internet of Things: extreme quantities of relatively small data transmissions (e.g., social media, sensors) that may have delivery-delay requirements.
- Address busy-hour traffic patterns, where average usage increases by 10-15%.
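The growth figures cited in the observations (a 5.7x increase over 2009-2013, and a projected tripling of 2014 traffic over a four-year horizon) imply annual growth rates that are easy to verify. A short worked sketch:

```python
def cagr(growth_factor, years):
    """Compound annual growth rate implied by a total growth factor."""
    return growth_factor ** (1.0 / years) - 1.0

# 5.7x growth over the four years from 2009 to 2013
print(f"{cagr(5.7, 4):.1%} per year")  # 54.5% per year

# Tripling over a four-year span
print(f"{cagr(3.0, 4):.1%} per year")  # 31.6% per year
```

The sustained double-digit annual rates are what make the report's "continued exponential increase" framing concrete: even the more conservative tripling scenario implies roughly 30% compound growth per year.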

2. TABLE OF CONTENTS

1. Executive Summary
   Key Observations
   Recommendations
2. Table of Contents
3. Introduction to the IRNC Network Program
   The IRNC Production Networks
      ACE (America Connects to Europe)
      AmLight (Americas Lightpaths)
      GLORIAD (Global Ring Network for Advanced Applications Development)
      TransLight/Pacific Wave (TL/PW)
      TransPAC
   The IRNC Experimental Network
   Current IRNC Capacity and Infrastructure
   IRNC Exchange Points
   Emerging Networks
   Emerging Network Technologies
   Additional Networks for International Science
4. Methods
   Survey of Network Providers
   Available Network Measurements
   IRNC Utilization Compared to ESnet and Global Internet Traffic
   Data Trends by Science Domain
   Catalog of International Big Data Science Programs
Findings
   Findings: Survey of Network Providers
      Current IRNC Infrastructure and Capacity
      Current Top Application Drivers
      Expected 2020 Application Drivers
      Interaction of Network Operators with Researchers
      Current Challenges
      What are Future Needs of International Networks?
      Exchange Point Program
   Findings: Network Measurement
      GLORIAD's Insight Monitoring System
      The Angst over Measurement and Data Sharing
   IRNC Network Traffic Compared to ESnet and the Global Internet
      Synopsis of Data Trends for the ESnet International Networks
      Global Internet Growth Rate
      Industry Internet Growth Forecast
   Findings: Data Trends
      Synopsis of Data Trends in Astronomy and Astrophysics
      Data Trends in Bioinformatics and Genomics
      Data Trends in Earth, Ocean, and Space Sciences
      Data Trends in Computer Science
   Findings: Online Catalog of International Big Data Science Programs
      Large Hadron Collider: 15 PB/year (CERN)
      The Daniel K. Inouye Solar Telescope (DKIST): 15 PB/year
      Dark Energy Survey (DES): 1 GB image, 400 images per night, instrument steering
      Square Kilometer Array (Australia & South Africa): 100 PB per day
   Findings: Survey of NSF-funded PIs
References
Appendices
   Appendix A: Interview with Network and Exchange Point Operators
   Appendix B: On-Line Survey for NSF-funded PIs
   Appendix C: Summary of Responses to NSF PI Survey
   Appendix D: List of Persons Interviewed
   Appendix E: Reports Used for This Study
   Appendix F: Scientific and Network Community Meetings Attended for Report Input

3. INTRODUCTION TO THE IRNC NETWORK PROGRAM

The International Research Network Connections (IRNC) network providers implement Research and Education (R&E) networks whose policies and operational procedures are driven by the needs of international research and education programs. IRNC leverages existing commercial telecommunications providers' investments in undersea communication cables, as well as university and Regional Optical Network (RON) expertise in operating regional and national R&E networks. The U.S., through the NSF, invests approximately $7M/year in the IRNC program; this modest investment is highly leveraged, by a factor of 10 to 15, via international partner investments supporting international R&E network links. IRNC networks are open to and used by the entire U.S. research and education community and operate invisibly to the vast majority of users.

The IRNC networks support unique scientific and education application requirements that are not met by services across commercial backbones. In addition, the IRNC network providers are closely connected to researchers' needs and requirements and treat meeting those needs as a primary motivator. Examples of such requirements include hybrid network services, low-latency and real-time services, and end-to-end performance management. In this regard, the IRNC extends the fabric of campus, regional, and national R&E networks across oceans and continents: a web of connections built in collaboration with international partner R&E networks that serve the growing number of international scientific collaborations.

The IRNC program was most recently funded for a multi-year period; NSF is currently reviewing responses to a new solicitation. The new awards will continue to provide production network connections and services to link U.S. research networks with peer networks in other parts of the world and leverage existing international network connectivity; support U.S.
infrastructure and innovation of open network exchange points; provide a centralized facility for R&E Network Operations Center (NOC) operations and innovation that will drive state-of-the-art capabilities; stimulate the development, application, and use of advanced network measurement capabilities and services across international network paths; and support global R&E network engineering community engagement and coordination.

3.1. THE IRNC PRODUCTION NETWORKS

IRNC network providers acquire, manage, and operate network transport facilities across international boundaries for shared scientific use. Network providers make arrangements with owners of optical fiber, including undersea fiber cables, to use some portion of this installed infrastructure, using equipment and management practices dedicated to R&E traffic. All shared R&E international networks are funded by the NSF in cooperation with the governments of other countries. The independently managed networks exchange traffic at Exchange Points; there, operators can implement bi-lateral policies to receive traffic from and send traffic to other networks; this includes passing traffic through network B so that network A can reach network C.
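The transit arrangement just described, in which network A reaches network C by passing traffic through network B, amounts to reachability over a graph of bilateral agreements. A minimal sketch (the networks and policies here are hypothetical, not actual IRNC peerings):

```python
from collections import deque

# Hypothetical bilateral agreements at exchange points; an edge means
# the first network will carry traffic onward toward the second.
transit = {
    "A": ["B"],   # A hands traffic to B
    "B": ["C"],   # B agrees to carry A's traffic on to C
    "C": [],
}

def reachable(src, dst, policy):
    """Breadth-first search over the transit-agreement graph."""
    seen, queue = {src}, deque([src])
    while queue:
        net = queue.popleft()
        if net == dst:
            return True
        for nxt in policy.get(net, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("A", "C", transit))  # True: A reaches C via B
print(reachable("C", "A", transit))  # False: no agreement in that direction
```

The asymmetry in the second query illustrates why exchange-point policy, not just physical connectivity, determines who can exchange data with whom.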

Policies can derive from human policy, current traffic conditions, current costs, and so forth. Exchange points are the focus for policy and technical coherence. A map of the IRNC networks is shown in Figure 2 (map from the Center for Applied Internet Data Analysis (CAIDA) [16]). A current limitation of this overview map, and of some of the following regional maps, is that they rely on manual updates to static files, so the maps are not likely to be current. Networks shown include five production networks (ACE, AmLight, GLORIAD, TransPAC3, and Pacific Wave) and one experimental network (TransLight).

FIGURE 2. MAP OF THE IRNC NETWORKS 2014

ACE (America Connects to Europe)

FIGURE 3. AMERICA CONNECTS TO EUROPE NETWORK MAP

ACE (NSF Award # ) [17] is led by Jennifer Schopf of Indiana University, in partnership with Delivery of Advanced Network Technology to Europe (DANTE) [18], the Trans-European Research and Education Networking Association (TERENA) [19], the New York State Education and Research Network (NYSERNet) [20], and Internet2 (I2) [21]. This project connects a community of more than 34 national R&E networks in Europe.

AmLight (Americas Lightpaths)

AmLight (NSF Award # ) [22] is led by Julio Ibarra of Florida International University. This program ties together the major research networks of Canada, Brazil, Chile, Mexico, and the United States. In addition, this work enables interconnects between the United States and the Latin American Cooperation of Advanced Networks (RedCLARA) [23], which connects eighteen Latin

FIGURE 4. AMERICAS LIGHTPATH NETWORK MAP

American national R&E networks. The Atlantic Wave and Pacific Wave Exchange Points provide peering for the North American backbone networks I2, the U.S. Department of Energy's Energy Sciences Network (ESnet) [24], and Canada's Advanced Research and Innovation Network (CANARIE) [25].

GLORIAD (Global Ring Network for Advanced Applications Development)

GLORIAD (NSF Award # ) [26] is led by Greg Cole at the University of Tennessee, Knoxville. It includes cooperative partnerships with partners in Russia (Kurchatov Institute) [27], Korea (Korea Institute of Science and Technology Information, KISTI) [28], China (Chinese Academy of Sciences) [29], the Netherlands (SURFnet) [30], the Nordic countries (NORDUnet [31] and IceLink [32]), Canada (CANARIE), and the Hong Kong Open Exchange Portal (HKOEP) [33]. In addition, new partnerships are being developed with Egypt (ENSTINet [34] and Telecom Egypt [35]), India (Tata Communications [36] and the National Knowledge Network [37]), Singapore (SingAREN [38]), and Vietnam (VinAREN [39]).

FIGURE 5. GLORIAD NETWORK MAP

TransLight/Pacific Wave (TL/PW)

FIGURE 6. TRANSLIGHT/PACIFIC WAVE NETWORK MAP

TL/PW (NSF Award # ) [40] is led by David Lassner, University of Hawaii. TL/PW presents a unified connectivity face toward the West for all U.S. R&E networks, including I2 and Federal agency networks, enabling general and specific peerings with more than 15 international R&E links. This project not only provides a connection for Australia's R&E networking community but also provides connectivity for the world's premier setting for astronomical observatories, the summit of Mauna Kea on the Big Island of Hawaii. The Mauna Kea observatories comprise over $1 billion of international investment by 13 countries in some of the most important cyberinfrastructure resources in the world.

TransPAC3

TransPAC3 (NSF Award # ) [41] is led by Jen Schopf at Indiana University. The R&E networks included in the TransPAC3 collaboration cover all of Asia, excluding only North Korea, Brunei, Myanmar, and Mongolia. TransPAC3 collaborates with the Asia Pacific Advanced Network (APAN) [42], DANTE, Internet2, and other R&E networks.

FIGURE 7. TRANSPAC3 NETWORK MAP

3.2. THE IRNC EXPERIMENTAL NETWORK

TransLight/StarLight (NSF Award # ) [43] is led by Tom DeFanti at the University of California, San Diego. The award provides two connections between the U.S. and Europe for production science: a routed connection that connects the pan-European GEANT2 to the U.S. I2 and ESnet networks, and a switched connection that is part of the LambdaGrid fabric being created by participants of the GLIF. StarLight is a multi-100 Gb/s exchange facility, peering with 130 separate R&E networks. This network is unique among IRNC networks in that it is entirely dedicated to research traffic and carries no educational commodity-type traffic (e-mail, web pages, etc.). TransLight uses optical networking, a means of communication that encodes signals onto light, that can operate at distances from local to transoceanic and is capable of extremely high bandwidth.
Optical networking can be used to partition optical fiber segments so that traffic is entirely segregated, with different policies or techniques applied to each segment. In collaboration with GLIF partners, TransLight has been able to provide high-bandwidth, low-latency performance for interactive high-definition visualization and other types of demanding applications. A limitation is that scheduling is required.

3.3. CURRENT IRNC CAPACITY AND INFRASTRUCTURE

To provide context for the description of IRNC capacity, some background information on network engineering is helpful. Traditional Internet networking is based on the TCP/IP protocol suite [44]. TCP/IP is designed to be a best-effort delivery service; there may be significant variation in the amount of time it takes a data packet to be delivered and in the amount of delay between packets. Greater congestion in the network results in greater performance variation, including the possibility of delivery failure, especially in the case of very large files. As a general practice, Internet providers achieve the desired network performance by arranging for an abundance of bandwidth; a network that operates at 50% capacity is considered well engineered, since it has headroom to accommodate sudden bursts of traffic. TCP/IP has built-in congestion algorithms that provide equitable use of the bandwidth based on current traffic conditions; this means end users can use the Internet at their convenience, without scheduling or busy signals. A consequence of this approach is that measures of throughput, latency, and jitter for identical data traveling the same geographic path can vary significantly depending on the other traffic on the network when the measurement takes place.

The IRNC production networks have been engineered to provide an abundance of bandwidth. The IRNC experimental network, in contrast, is engineered to deliberately use 100% of bandwidth, continuously; this design is possible because the network allows only pre-authorized traffic and makes direct use of the underlying optical network. Via optical networking, the StarWave experimental network can provide single 100 Gbps sustained transfers over long periods of time, as well as multiple sustained 1 Gbps/10 Gbps flows. End-to-end network configuration and scheduling is now accomplished in an automated manner.
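The variability just described (throughput, latency, and jitter differing for identical data on the same path) is routinely quantified from round-trip-time samples; one common convention, in the spirit of RFC 3550, reports jitter as the mean absolute difference between consecutive delay samples. A minimal sketch with made-up RTT values, not real IRNC measurements:

```python
def latency_stats(rtts_ms):
    """Mean latency and jitter (mean absolute inter-sample difference)."""
    mean = sum(rtts_ms) / len(rtts_ms)
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = sum(diffs) / len(diffs)
    return mean, jitter

# Hypothetical trans-oceanic RTT samples (ms) taken under varying congestion
samples = [152.1, 151.8, 160.4, 153.0, 171.9, 152.3]
mean, jitter = latency_stats(samples)
print(f"mean RTT {mean:.1f} ms, jitter {jitter:.1f} ms")
```

On a best-effort path both numbers move with competing traffic, which is why a single measurement says little; the scheduled, pre-authorized experimental network avoids this variability by construction.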
In 2014, all networks except TransLight have at least one 100 Gbps network path; the exception is due to the high cost of fiber crossing the Pacific Ocean (a factor of 5 higher than the Atlantic). TransLight provides 40 Gbps total bandwidth to Australia and elsewhere in Asia. In 2014, a new 40 Gbps direct route to New Zealand was established. The Pacific routes are expected to be upgraded to 100 Gbps. All networks have redundant paths across oceans, except for the trans-Pacific connection, again due to cost.

These performance numbers compare well to regional and national backbone speeds, and to the I2 Innovation Platform. Campuses/facilities that have joined the I2 Science DMZ ("Demilitarized Zone") [45] have access to an SDN-enabled, firewall-free, 100 Gbps network path that allows them to experience the highest level of end-to-end performance. Thus, network performance across oceans and/or continents should ideally be limited mostly by the distance involved, and not by network bottlenecks in the path.

Footnotes:
1. Conversation with David Lassner, September.
2. Developed by ESnet engineers, the Science DMZ model addresses common network performance problems encountered at research institutions by creating an environment tailored to the needs of high-performance science applications, including high-volume bulk data transfer, remote experiment control, and data visualization.
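A standard back-of-the-envelope check behind the "distance, not bottlenecks" goal is the bandwidth-delay product: a single TCP stream needs at least that many bytes in flight (and a comparably sized window) to fill a long path. Sketched here for an assumed 100 Gbps path with an assumed 150 ms trans-Pacific round-trip time, not measured IRNC figures:

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

# Assumed figures: 100 Gbps path, 150 ms round-trip time
bdp = bdp_bytes(100e9, 0.150)
print(f"{bdp / 1e9:.3f} GB window required")  # 1.875 GB window required
```

Windows of this size are far beyond operating-system defaults, which is one reason end-station tuning (as in the Science DMZ model) matters as much as the wide-area links themselves.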

3.4. IRNC EXCHANGE POINTS

Network exchange points for research and education flows have served a pivotal role over the last 20 years in extending network connectivity internationally, providing regional R&E networking leadership, and supporting experimental networking. Through years of operational experience, combined with international peering relationships, engineering activities, and international networking forums, a set of guiding principles has emerged for successful approaches to an open exchange point. Exchange points support the homing of multiple international links and provide high-capacity connectivity to I2 and ESnet. They also provide maximum flexibility in connectivity and peering, for example by offering services at multiple layers of the network.

3.5. EMERGING NETWORKS

The Network Startup Resource Center (NSRC) [46] develops and enhances network infrastructure for collaborative research, education, and international partnerships, while promoting teaching and training via the transfer of technology. This IRNC project focuses NSRC activities on cultivating cyberinfrastructure via technical exchange, engineering assistance, training, conveyance of networking equipment and technical reference materials, and related activities to promote network technology adoption and enhanced connectivity at R&E sites around the world. The end goal is to enhance and enable international collaboration via the Internet between U.S. scientists and collaborators in developing countries.
Active progress has occurred in National Research and Education Networks (NRENs) and Research Education Networks (RENs) in Southeast Asia, Africa, and the Caribbean; this work will continue through NSF award # in the amount of $3.7M to Steve Huter, Dale Smith, and Bill Allen for "IRNC: ENgage: Building Network Expertise and Capacity for International Science Collaboration", starting October 1.

3.6. EMERGING NETWORK TECHNOLOGIES

The StarLight experimental IRNC has extensive experience with SDN and has supported multiple international demonstrations and research projects in this area. IRNC-funded networks have also participated in the GLIF community, which has developed the Network Service Interface (NSI) standard [47] through the Open Grid Forum (OGF) [48]. NSI describes a standardized interface for use at optical network exchange points, providing a foundation for automated scheduling and deployment of optical circuits across provider and technology boundaries. The production IRNCs have not yet deployed SDN, with the exception of some work begun on AmLight in summer 2014, leveraging accomplishments of the Global Environment for Network Innovations (GENI) program [49], I2's Advanced Layer 2 Services (ALS2) [50] configuration tool, and some GENI-funded work in Brazil.
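NSI itself is a web-services protocol, and the sketch below does not reproduce its actual message format. It only illustrates, with hypothetical names and identifiers, the shape of the information an NSI-style connection service carries for a point-to-point circuit reservation: two endpoints, a capacity, and a schedule.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    """Hypothetical point-to-point reservation, loosely modeled on the
    information an NSI-style connection service would carry."""
    src_port: str          # ingress at one exchange point (made-up identifier)
    dst_port: str          # egress at another (made-up identifier)
    capacity_mbps: int
    start: datetime
    end: datetime

    def duration(self) -> timedelta:
        return self.end - self.start

req = CircuitRequest(
    src_port="starlight:ethernet-1",
    dst_port="netherlight:ethernet-7",
    capacity_mbps=10000,
    start=datetime(2015, 3, 1, 2, 0),
    end=datetime(2015, 3, 1, 6, 0),
)
print(req.duration())  # 4:00:00
```

Expressing circuits as schedulable requests like this is what lets software, rather than per-circuit engineering negotiation, stitch paths across provider and technology boundaries.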

3.7. ADDITIONAL NETWORKS FOR INTERNATIONAL SCIENCE

In the past, research conducted at the North and South Poles, on board ships, in space, or using distributed sensors relied on workflows where data was stored on site at or in the instrument and then manually transported on some schedule to an analysis center. Due to the rapidly growing satellite and other non-terrestrial telecommunications infrastructure, workflows are shifting from periodic and manual to near real-time, using the Internet. Whether moved manually or over the Internet, the data at some point becomes connected to the national and international research networks where data sharing and collaboration occur. In addition to the NSF-funded IRNC networks, international science relies on shipboard, satellite, and space networks to capture and forward data. Examples of these networks include:

- The Global Telecommunications System (GTS) [51], a global network for the transmission of meteorological data from weather stations, satellites, and numerical weather prediction centers.
- HiSeasNet [52], a satellite communications network designed specifically to provide continuous Internet connectivity for oceanographic research ships and platforms. HiSeasNet plans to provide real-time transmission of data to shore-side collaborators; basic communications, including videoconferencing; and tools for real-time classroom and other outreach activities.
- The NASA Space Network [53], which consists of the on-orbit Tracking and Data Relay Satellite (TDRS) telecommunications satellites, placed in geosynchronous orbit, and the associated TDRS ground stations, located in White Sands, New Mexico, and Guam. The TDRS constellation is capable of providing nearly continuous high-bandwidth (S, Ku, and Ka band) telecommunications services for space research, including the Hubble Space Telescope [54], the Earth Observing Fleet [55], and the International Space Station [56].
Certain applications, such as the LHC and the NSF Division of Polar Programs, fund their own dedicated network circuits. The LHC leverages the ESnet network. Polar traffic is limited by geographic location. Several research programs use both ESnet and IRNC networks.

4. METHODS

The primary purpose of this study was to project the amount of data being exchanged via IRNC networks in the year 2020. The initial study plan was to survey the IRNC network providers, review their annual reports, examine measured network traffic over the IRNC links, and conduct interviews with representative international science programs. This approach proved challenging because of wide variation among IRNC providers in their degree of participation in and knowledge of projects using their networks. Another challenge was that most of the IRNC networks were measuring and recording only total bandwidth utilization, an

18 approach providing limited information to analyze. providing more detailed network history information. The GLORIAD network was unique in A survey with detailed questions regarding file size, time to transfer requirements, type of file systems the files are stored on and so forth was prepared but after conducting several in-person interviews, it became apparent that few international science projects have detailed information about their current and future plans to produce, transport, store, analyze and share data at a level of detail that is useful for network capacity and service planning. ESnet and the NSF Polar Programs have addressed this challenge by organizing special meetings at which program scientists sat down with network engineers and spent a couple of days working through these details. This report s advisory committee recommended that as an alternative, domain experts be asked to provide a description of data trends in their fields, taking into consideration the following factors that would be likely sources for increased IRNC traffic: New instruments with higher data/transport requirements Scaling up of current activities (e.g. more people, more data or use of data) New areas of the world increasing their traffic through local/regional improvements (Africa, Pacific Islands) New technology that that reduces, by orders of magnitude, the cost of collecting/retaining data. Programs currently funding their own communications network (e.g. the NSF Polar programs) who may eliminate move to the R&E networks In addition, a survey for NSF PIs was carried out to further explore these same questions. 
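The report's executive summary estimates that IRNC traffic grew by a factor of 5.7 from 2009 to 2013. A back-of-the-envelope way to turn such a multiplier into a forward projection is constant compound growth. The sketch below (Python) is an illustration under that assumption only, not the study's actual estimation model:

```python
# Back-of-the-envelope growth projection.
# ASSUMPTIONS: constant compound growth and the report's 5.7x figure
# for 2009-2013; illustrative only, not the study's model.

def annual_growth_rate(multiplier: float, years: int) -> float:
    """Compound annual growth rate implied by a total multiplier."""
    return multiplier ** (1.0 / years) - 1.0

def project(volume: float, rate: float, years: int) -> float:
    """Volume after `years` of compound growth at `rate`."""
    return volume * (1.0 + rate) ** years

rate = annual_growth_rate(5.7, 4)  # growth observed 2009 -> 2013
print(f"implied annual growth: {rate:.1%}")  # roughly 55% per year
print(f"2014 volume multiplied by {project(1.0, rate, 4):.1f}x by 2018")
```

Note that a naive constant-rate extrapolation of the 2009-2013 multiplier predicts another 5.7x by 2018, whereas the report's headline projection (a tripling of 2014 traffic by 2018) is more conservative; this is why the study combines several methods rather than relying on trend-fitting alone.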
All surveys used are included in the appendix section, along with a list of persons interviewed and reports referenced.

4.1. SURVEY OF NETWORK PROVIDERS

The PI or PI-designated representative for each of the IRNC production and research networks and exchange point operators was interviewed over the period September 2013 - January 2014. The survey used can be found in Appendix A. Questions were designed to collect data on current capabilities, data volume, user community needs, user support approach, upgrade strategies, and data projections. The questionnaire was also used with one regional network provider who connects to the IRNC. A summary of survey responses is available in section 5.1.

4.2. AVAILABLE NETWORK MEASUREMENTS

The IRNC network providers were asked to provide measures of network performance for their networks. Most IRNC networks could provide some measure of bandwidth utilization over time. GLORIAD was the one network provider that had been maintaining records of IP flows (one flow typically corresponds to one application) over its ten years of operation. The absence of more detailed information about network traffic on IRNC networks was explained by providers as resulting from (a) strict concerns within the European research community regarding privacy, (b) challenges in developing the multiple bilateral policies that would allow such measurement, and (c) lower priority/lack of funding. A summary of available measurements is described in section 5.2.

4.3. IRNC UTILIZATION COMPARED TO ESnet AND GLOBAL INTERNET TRAFFIC

Data representing IRNC traffic that was available for the period 2009-2013 was compared to an analysis of traffic on the global Internet for the same period of time. The analysis is in section 5.3.

4.4. DATA TRENDS BY SCIENCE DOMAIN

Science domain experts provided written descriptions of data trends in their fields. These contributions are available in section 5.4.

4.5. CATALOG OF INTERNATIONAL BIG DATA SCIENCE PROGRAMS

A questionnaire was developed for interviewing science communities requiring international data exchange. The target science communities were identified by asking IRNC providers to name the science disciplines producing the highest data volume, both now and potentially in the future. In addition, scientists from large programs in those disciplines were asked to describe their current data collection and storage volume and needs, the resources utilized to transmit data, technical community interaction, and data projection strategies. Due to high variation in the quality and depth of responses, the study moved toward a broader exploration of international big science programs via Internet searches, attendance at a variety of science domain community meetings, and responses to an on-line survey of NSF Principal Investigators. This survey (Appendix B) was designed to capture existing and planned international science collaborations, knowledge of new instrumentation, and the extent of international collaboration within NSF-funded programs.
Using publicly available information from the NSF awards site, 30,897 PIs receiving NSF funding from FY2009 through July 2014 were invited to respond to the survey. A total of 4,050 persons responded, a 13% response rate. At approximately the same time (summer 2013), Jim Williams at Indiana University and Internet2 began collecting The International Big Science List [57]. These efforts have been much expanded and placed within a framework for describing scientific data. Called the Catalog of International Big Data Science, this interactive and updatable website is available online.

5. FINDINGS

5.1. FINDINGS: SURVEY OF NETWORK PROVIDERS

Current IRNC Infrastructure and Capacity

The IRNC production networks operate using industry-standard TCP/IP networking, and IRNC providers described their services as state-of-the-art. The experimental StarLight network supports the use of specialized high-performance protocols such as UDT [58], and uses optical networking technology and protocols developed by GLIF. StarLight has more than ten years of experience with programmable networking in support of many international Grid projects. Approximately seven years ago, StarLight began investigating SDN/OpenFlow [59;60] technologies. Subsequently, the StarLight community worked with GENI to design and implement a nationwide distributed environment for network researchers based on SDN/OpenFlow, using a national mesoscale L2 network as a production facility. For over three years, with IRNC and GENI support and many international partners, StarLight has participated in the design, implementation and operation of the world's most extensive international SDN/OpenFlow testbeds, with over 40 sites in North America, South America, Europe and Asia. With support from GENI and the international network testbed community, a prototype Software Defined Networking Exchange (SDX) was designed and implemented at the StarLight facility in November 2013 and used to demonstrate its potential to support international science projects. For over five years, the StarLight consortium has worked with the GLIF community and OGF to develop and implement an NSI Connection Service. More recently, StarLight has been supporting a project that is integrating NSI Connection Service 2.0 and SDN/OpenFlow.

All IRNC networks except TransLight/Pacific Wave have at least one 100Gbps network path; this matches the transition of campus, regional and national R&E network providers to 100Gbps external network speeds. The high cost of crossing the Pacific Ocean (a factor of 5 higher than the Atlantic) presents a challenge. TransLight/Pacific Wave provides a 100Gbps path from Los Angeles to Hawaii, and 40Gbps of total bandwidth to Australia, New Zealand and elsewhere in Asia.

Current Top Application Drivers

Applications named by IRNC providers as currently having top bandwidth or other demanding network requirements included:

- The Large Hadron Collider (Tier 1 transfer from CERN to Europe, the US and Australia)
- Computational genomics
- Radio telescopes
- Computational astrophysics
- Climatology
- Nanotechnology
- Fusion energy data
- Light sources (synchrotrons)
- Astronomy

Expected 2020 Application Drivers

Looking forward to 2020, IRNC network providers expected the application drivers to remain the same as in section 4.1.2, with the addition of:

- More visualization
- Astronomy moving away from shipping tapes/drives, toward near-real-time reaction to events in order to verify an event and focus observation instruments
- Video, live and uncompressed
- Climate science and geology: collecting more LIDAR data as needed, combined with other data (e.g., earthquake monitoring and response)
- Larger sensor networks, especially in portions of the globe where no weather data is currently being collected
- The Square Kilometer Array [61] being built in Australia and South Africa
- The catalog of life on this planet, which is growing larger and larger and will be stored in many locations; what is complicated is the coordination needed to make it a single data set, and some type of federated model is needed

Data is currently concentrated in the US but will become more global in nature; consider where new telescopes are located and where population density is (China, India). Data will become global in terms of where it needs to go and where it will rest.

Interaction of Network Operators with Researchers

Network operators interacted most frequently with other people supporting R&E networks, and typically had infrequent interaction with scientists:

"We are often surprised and discouraged by the overall lack of interaction between researchers/scientists and their network operators, sometimes within the same institution." (NSRC interview)

The AmLight and StarLight programs were exceptions to this pattern, and each reported the most detailed knowledge of end-user applications.

Current Challenges

When asked to describe the current challenges they face, the IRNC network providers identified the following:

1) Lack of wide area network knowledge on campuses. IRNC providers' experience is that most reported network problems were caused by the end system; for example, an underpowered machine with inadequate memory, a slow hard disk, or a slow network card. A misconfiguration in the campus local area network was another example. IRNC providers view campus network staff as sitting at the edge of the web of regional, national, and international network connections, responsible for connecting their campus to this web. In this role, campus network staff are an essential component of end-to-end support but, unfortunately, are frequently not knowledgeable about wide area technologies and therefore cannot assist in end-to-end problem solving without help from the IRNC (or regional or national network) providers. Network path troubleshooting is still a very people-intensive process. The person who can do this needs a well-rounded skill set: he/she must understand end-user requirements, storage, computation, and the application as well as networking. There is room for automation here. IRNC providers would like to see more training for campus network staff in BGP, SDN, and wide area networking.

2) Inadequate and uneven campus infrastructure. There are challenges getting the local network infrastructure (wiring/switches) ready to support an application. Campuses have multiple and perhaps conflicting demands for investment in campus wiring and network electronics. Getting wiring and electronics upgraded all the way to a specific end user's location in the heart of a campus may not be a high priority for the campus.

3) Poor coordination among network providers. The web of regional, national, and international network connections is not well coordinated. As a result, it can sometimes be difficult to identify the right person to contact during end-to-end troubleshooting, and there may be inefficiencies in the investment. The network path crosses multiple organizations; the hard part is figuring out which network segment has the issue, then working with individual researchers to fix the local system or network.

4) Interoperability challenges. IRNC providers face interoperability challenges in connecting 10Gbps and 100Gbps circuits, and in connecting optical networks with software-defined networks; the differing implementations of SDN pose additional challenges.

5) Adoption of new network technologies. IRNC providers expect rapid growth in optical networking and SDN in the next 3-4 years. They are concerned that science communities don't appear to know anything about this yet.
They are also concerned about whether these technologies will be easy enough for end-users to use.

What are the Future Needs of International Networks?

Most IRNC providers identified science communities' requirements as the best drivers for future network directions. In general, scientists are always pushing the frontiers of advanced networking and thus encounter new problems. The network needs to be thought of as a global resource. It is important to work collaboratively to coordinate and provide solutions that cross boundaries, and organizations that foster working together across international boundaries are needed. The NSF Exchange Point program and GLIF were mentioned as two examples that are working well.

West Coast providers were particularly concerned about funding: the US government funds only circuits into the US, not the other way around; other countries are now paying much more than the funding provided by NSF; and the instruments of interest are shifting away from the US (e.g., the LHC and the Square Kilometer Array). In their view the budget should shift by a factor of 10, particularly on the West Coast, where 10Gbps across the Pacific costs a factor of 5 more than across the Atlantic.

Exchange Point Program

The International Exchange Point program is seen by IRNC network operators as a success:

"It is terrifically important and significant and successful."

"I think it's been very important and will be more so, later. More small countries are coming in with their own NRENs. Geology, climate and genomics are requiring information coming in from all over."

"US exchange points are very important; without them, the US would not be as much in the center of this as it is."

"I worry about the long-term impacts of the global R&E community's reaction to the allegations about NSA. There are some communities that are spending their own money to get to US facilities, but are talking about going elsewhere as a result of this. From a US perspective, we must support these exchange points; they are really, really important."

5.2. FINDINGS: NETWORK MEASUREMENT

Measuring the characteristics of network traffic is the foundation for understanding the types of applications on the network (e.g., streaming video, large file transfer), the frequency of these applications, the number and location of users, network performance, and so on. Network traffic monitoring, and the level of detail of any monitoring, vary significantly across IRNC network providers. Recent efforts to standardize measurement have focused on universal installation of the perfSONAR platform [62], which can be very useful in understanding actual network performance on each link of a network path when debugging end-to-end application issues. However, with the exception of GLORIAD's network monitoring, long-term performance measurement reporting is limited to bandwidth utilization.
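The contrast between a bare utilization counter and per-flow records can be made concrete with a small sketch (Python). The record layout and field names below are invented for illustration and are not GLORIAD's actual schema:

```python
# Hypothetical flow records: each large IP flow carries attributes that
# a plain bandwidth counter discards (application, endpoints, volume).
from collections import defaultdict

flows = [
    {"app": "gridftp", "src": "US", "dst": "KR", "bytes": 8_000_000_000},
    {"app": "gridftp", "src": "US", "dst": "CN", "bytes": 2_500_000_000},
    {"app": "ssh",     "src": "KR", "dst": "US", "bytes": 40_000_000},
]

def volume_by(records, key):
    """Total bytes grouped by one flow attribute (app, src, dst, ...)."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec[key]] += rec["bytes"]
    return dict(totals)

print(volume_by(flows, "app"))  # which applications dominate the link
print(volume_by(flows, "dst"))  # where the data is going
```

A utilization counter would report only the sum of the byte counts; flow records allow the same total to be broken down by application, discipline, or country, which is exactly the information capacity planners said they lacked.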
A typical bandwidth utilization report is shown in Figure 8, representing average use of the 40Gbps links into and out of the PacificWave traffic router during Q4 2013. Blue represents incoming traffic, with an average utilization of 14.45Gbps; green represents outgoing traffic, with an average utilization of 15.32Gbps. Each line in the graph is itself an average over the portion of the week represented. The bursty nature of network traffic is reflected in the shape of the graph; the spikes or peaks show that traffic can on occasion be double the average, hence the headroom requirement. It should be noted that the graph represents only the public portion of the PacificWave exchange and does not capture all of the traffic over the facility: there are private connections, as well as CAVEwave [63] and CineGrid [64] traffic (part of the StarWave experimental network), that are not included in these numbers.

FIGURE 8. QUARTERLY PACIFICWAVE TRAFFIC FOR OCT, NOV, AND DEC 2013

Figure 9 demonstrates the rapid rate at which scientists discover and utilize available tools. The graph was derived from 62 inbound PacificWave quarterly graphs covering the time period January 2006 - December 2013. The dark blue horizontal lines indicate link capacity; bandwidth was increased from 1Gbps to 10Gbps (2010) and then to 40Gbps (2012). The light blue vertical bars represent each quarter's average throughput, and the black T shape above each quarter's average indicates the peak throughput recorded for that period.

FIGURE 9. EXAMPLE OF IRNC NETWORK UTILIZATION OVER 8 YEARS
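Figures 8 and 9 both reduce raw utilization samples to an average and a peak. A minimal sketch of that reduction (Python, with invented sample values) shows why peaks, rather than averages, drive the headroom requirement:

```python
# Reduce a series of link-utilization samples (e.g., 5-minute averages
# in Gbps) to the average and peak shown in the quarterly graphs.
# The sample values below are made up for illustration.

def utilization_stats(samples_gbps):
    avg = sum(samples_gbps) / len(samples_gbps)
    peak = max(samples_gbps)
    return avg, peak

samples = [10.2, 12.8, 14.5, 29.7, 13.1, 15.9, 11.4]
avg, peak = utilization_stats(samples)
print(f"average {avg:.1f} Gbps, peak {peak:.1f} Gbps ({peak / avg:.1f}x average)")
```

A link provisioned to the roughly 15 Gbps average here would be saturated by the near-30 Gbps burst, which is why capacity is planned well above average utilization.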

GLORIAD's Insight Monitoring System

The GLORIAD Insight system [65] provides flexible, interactive exploration and analysis of all GLORIAD backbone traffic since GLORIAD's founding as MIRnet. Insight is open source software developed by GLORIAD in collaboration with the China Science and Technology Network (CSTnet) [66] and Korea's KISTI. Large IP flows are the units measured, and searchable information includes traffic volume, source/destination country, packet loss, source/destination by U.S. state, traffic volume by application type or scientific discipline, network VLAN or AS number, and network protocol. Both live and historical data are available. Figure 10 summarizes all GLORIAD traffic by world region; Figure 11 provides an example snapshot of a packet loss incident. Total data stored to date comprises almost 2 billion records, with a million new records added each day.

FIGURE 10. GLORIAD LARGE FLOW TOTAL TRAFFIC BY WORLD REGION

Packet loss is a measure indicating significant network congestion and/or interruptions in network service. Insight allows network operators to drill down into live traffic during a packet loss event in order to problem-solve. The drill-down path can follow any of the flow's recorded attributes, such as protocol, application, or institution.
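An Insight-style drill-down during a loss event can be sketched as a filter over flow records (Python). The record layout and attribute names here are hypothetical and do not reflect Insight's actual data model:

```python
# Filter flow records down to those active during an incident window
# and matching any chosen attributes (protocol, institution, ...).
# Timestamps are plain integers here for simplicity.

def drill_down(records, start, end, **match):
    """Flows overlapping [start, end] whose attributes match `match`."""
    hits = []
    for rec in records:
        if rec["end"] < start or rec["start"] > end:
            continue  # no overlap with the incident window
        if all(rec.get(k) == v for k, v in match.items()):
            hits.append(rec)
    return hits

flows = [
    {"start": 100, "end": 300, "protocol": "tcp", "institution": "KISTI"},
    {"start": 250, "end": 400, "protocol": "udp", "institution": "CSTnet"},
    {"start": 500, "end": 600, "protocol": "tcp", "institution": "KISTI"},
]

print(drill_down(flows, 200, 350, protocol="tcp"))  # narrows to one flow
```

Each additional keyword argument narrows the result set, mirroring the interactive drill-down path an operator follows from "loss spike" to the specific protocol or institution involved.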


More information

Solving the Big Dilemma of Big Data

Solving the Big Dilemma of Big Data shaping tomorrow with you Our thirst for information and communication is, it seems, insatiable. This is particularly apparent in the online world, where data sets are getting steadily larger. These massive

More information

Rely on a Trusted Resource

Rely on a Trusted Resource Colocation Rely on a Trusted Resource» Highly secure environment to deploy your computing, network, storage and IT infrastructure» Helps reduce capital and operational expenses required to run mission-critical

More information

A Forrester Consulting Thought Leadership Paper Commissioned By Zebra Technologies. November 2014

A Forrester Consulting Thought Leadership Paper Commissioned By Zebra Technologies. November 2014 A Forrester Consulting Thought Leadership Paper Commissioned By Zebra Technologies November 2014 Internet-Of-Things Solution Deployment Gains Momentum Among Firms Globally Improved Customer Experience

More information

perfsonar MDM release 3.0 - Product Brief

perfsonar MDM release 3.0 - Product Brief perfsonar MDM release 3.0 - Product Brief In order to provide the fast, reliable and uninterrupted network communication that users of the GÉANT 2 research networks rely on, network administrators must

More information

Perspec'ves on SDN. Roadmap to SDN Workshop, LBL

Perspec'ves on SDN. Roadmap to SDN Workshop, LBL Perspec'ves on SDN Roadmap to SDN Workshop, LBL Philip Papadopoulos San Diego Supercomputer Center California Ins8tute for Telecommunica8ons and Informa8on Technology University of California, San Diego

More information

What is this Course All About

What is this Course All About Fundamentals of Computer Networks ECE 478/578 Lecture #1 Instructor: Loukas Lazos Dept of Electrical and Computer Engineering University of Arizona What is this Course All About Fundamental principles

More information

Connecting Australia s NBN Future to the Globe

Connecting Australia s NBN Future to the Globe Connecting Australia s NBN Future to the Globe Ross Pfeffer Whitepaper First published: January 2011 Abstract Is there sufficient capacity, market competition and network resilience to support Australia

More information

perfsonar Overview Jason Zurawski, ESnet zurawski@es.net Southern Partnerships for Advanced Networking November 3 rd 2015

perfsonar Overview Jason Zurawski, ESnet zurawski@es.net Southern Partnerships for Advanced Networking November 3 rd 2015 perfsonar Overview Jason Zurawski, ESnet zurawski@es.net Southern Partnerships for Advanced Networking November 3 rd 2015 This document is a result of work by the perfsonar Project (http://www.perfsonar.net)

More information

Analysis of the Global Active Monitoring Network Performance Market QoS and QoE Drive the Demand for Active Monitoring

Analysis of the Global Active Monitoring Network Performance Market QoS and QoE Drive the Demand for Active Monitoring Analysis of the Global Active Monitoring Network Performance Market QoS and QoE Drive the Demand for Active Monitoring July 2014 Contents Section Slide Numbers Executive Summary 4 Market Overview 7 Active

More information

SDN and NFV in the WAN

SDN and NFV in the WAN WHITE PAPER Hybrid Networking SDN and NFV in the WAN HOW THESE POWERFUL TECHNOLOGIES ARE DRIVING ENTERPRISE INNOVATION rev. 110615 Table of Contents Introduction 3 Software Defined Networking 3 Network

More information

Making the Case for Satellite: Ensuring Business Continuity and Beyond. July 2008

Making the Case for Satellite: Ensuring Business Continuity and Beyond. July 2008 Making the Case for Satellite: Ensuring Business Continuity and Beyond July 2008 Ensuring Business Continuity and Beyond Ensuring business continuity is a major concern of any company in today s technology

More information

Network Simulation Traffic, Paths and Impairment

Network Simulation Traffic, Paths and Impairment Network Simulation Traffic, Paths and Impairment Summary Network simulation software and hardware appliances can emulate networks and network hardware. Wide Area Network (WAN) emulation, by simulating

More information

How Network Operators Do Prepare for the Rise of the Machines

How Network Operators Do Prepare for the Rise of the Machines Internet of Things and the Impact on Transport Networks How Network Operators Do Prepare for the Rise of the Machines Telecommunication networks today were never designed having Inter of Things use cases

More information

LOLA (Low Latency) Project

LOLA (Low Latency) Project Enabling remote real time musical performances over advanced networks Description LOLA project aims to enable real time musical performances where musicians are physically located in remote sites, connected

More information

Benefits brought by the use of OpenFlow/SDN on the AmLight intercontinental research and education network

Benefits brought by the use of OpenFlow/SDN on the AmLight intercontinental research and education network Benefits brought by the use of OpenFlow/SDN on the AmLight intercontinental research and education network Julio Ibarra, Jeronimo Bezerra, Heidi Morgan, Luis Fernandez Lopez Florida International University

More information

Charting the Evolution of Campus Cyberinfrastructure: Where Do We Go From Here? 2015 National Science Foundation NSF CC*NIE/IIE/DNI Principal

Charting the Evolution of Campus Cyberinfrastructure: Where Do We Go From Here? 2015 National Science Foundation NSF CC*NIE/IIE/DNI Principal Jim Bottum Charting the Evolution of Campus Cyberinfrastructure: Where Do We Go From Here? 2015 National Science Foundation NSF CC*NIE/IIE/DNI Principal Investigators Meeting The CC* Mission Campuses today

More information

State of Texas. TEX-AN Next Generation. NNI Plan

State of Texas. TEX-AN Next Generation. NNI Plan State of Texas TEX-AN Next Generation NNI Plan Table of Contents 1. INTRODUCTION... 1 1.1. Purpose... 1 2. NNI APPROACH... 2 2.1. Proposed Interconnection Capacity... 2 2.2. Collocation Equipment Requirements...

More information

GÉANT Open Service Description. High Performance Interconnectivity to Support Advanced Research

GÉANT Open Service Description. High Performance Interconnectivity to Support Advanced Research GÉANT Open Service Description High Performance Interconnectivity to Support Advanced Research Issue Date: 20 July 2015 GÉANT Open Exchange Overview Facilitating collaboration has always been the cornerstone

More information

Microsoft s Cloud Networks

Microsoft s Cloud Networks Microsoft s Cloud Networks Page 1 Microsoft s Cloud Networks Microsoft s customers depend on fast and reliable connectivity to our cloud services. To ensure superior connectivity, Microsoft combines globally

More information

How To Write A Privacy Policy For Annet Network And Exchange Point (Nnet) Network (Netnet)

How To Write A Privacy Policy For Annet Network And Exchange Point (Nnet) Network (Netnet) Document name: Data and Privacy Policy Implications and Privacy Principles Author(s): James Williams and Dale Finkleson Contributor(s): GNA Technical Group Date: 26 October 2015 Version: 0.9P Data and

More information

DISASTER RECOVERY AND NETWORK REDUNDANCY WHITE PAPER

DISASTER RECOVERY AND NETWORK REDUNDANCY WHITE PAPER DISASTER RECOVERY AND NETWORK REDUNDANCY WHITE PAPER Disasters or accidents would cause great harm on network infrastructure. It is unavoidable and the operation of network would be hampered for a long

More information

Multi-protocol Label Switching

Multi-protocol Label Switching An INS White Paper Multi-protocol Label Switching An economic way to deliver integrated voice, video and data traffic March 2013 Run your business on one network Multi-protocol Label Switching (MPLS) is

More information

Leveraging SDN and NFV in the WAN

Leveraging SDN and NFV in the WAN Leveraging SDN and NFV in the WAN Introduction Software Defined Networking (SDN) and Network Functions Virtualization (NFV) are two of the key components of the overall movement towards software defined

More information

Software Defined Networking for big-data science

Software Defined Networking for big-data science Software Defined Networking for big-data science Eric Pouyoul Chin Guok Inder Monga (presenting) TERENA Network Architects meeting, Copenhagen November 21 st, 2012 ESnet: World s Leading Science Network

More information

SummitStack in the Data Center

SummitStack in the Data Center SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution that Extreme Networks offers a highly virtualized, centrally manageable

More information

Research at LARC-USP E-Science, Cloud & Big Data Projects. Fernando Redigolo

Research at LARC-USP E-Science, Cloud & Big Data Projects. Fernando Redigolo Research at LARC-USP E-Science, Cloud & Big Data Projects Fernando Redigolo LARC USP Laboratory of Computer Architecture and Networks Department of Computer and Digital System Engineering USP University

More information

Communication Networks. MAP-TELE 2011/12 José Ruela

Communication Networks. MAP-TELE 2011/12 José Ruela Communication Networks MAP-TELE 2011/12 José Ruela Network basic mechanisms Introduction to Communications Networks Communications networks Communications networks are used to transport information (data)

More information

How To Provide Qos Based Routing In The Internet

How To Provide Qos Based Routing In The Internet CHAPTER 2 QoS ROUTING AND ITS ROLE IN QOS PARADIGM 22 QoS ROUTING AND ITS ROLE IN QOS PARADIGM 2.1 INTRODUCTION As the main emphasis of the present research work is on achieving QoS in routing, hence this

More information

Campus Network Best Practices: Core and Edge Networks

Campus Network Best Practices: Core and Edge Networks Campus Network Best Practices: Core and Edge Networks Dale Smith University of Oregon/NSRC dsmith@uoregon.edu This document is a result of work by the Network Startup Resource Center (NSRC at http://www.nsrc.org).

More information

FACT SHEET INTERNATIONAL DATA SERVICES GLOBAL IP VPN

FACT SHEET INTERNATIONAL DATA SERVICES GLOBAL IP VPN PUT OUR BACKBONE IN YOUR GLOBAL NETWORKS Telstra Wholesale s Global IP VPN solution allows our customers to offer their end users global networks for voice, video and data. With access to most major Asian

More information

CCNA R&S: Introduction to Networks. Chapter 5: Ethernet

CCNA R&S: Introduction to Networks. Chapter 5: Ethernet CCNA R&S: Introduction to Networks Chapter 5: Ethernet 5.0.1.1 Introduction The OSI physical layer provides the means to transport the bits that make up a data link layer frame across the network media.

More information

Network Management and Monitoring Software

Network Management and Monitoring Software Page 1 of 7 Network Management and Monitoring Software Many products on the market today provide analytical information to those who are responsible for the management of networked systems or what the

More information

Campus Network Best Practices: Core and Edge Networks

Campus Network Best Practices: Core and Edge Networks Campus Network Best Practices: Core and Edge Networks Dale Smith Network Startup Resource Center dsmith@nsrc.org This document is a result of work by the Network Startup Resource Center (NSRC at http://www.nsrc.org).

More information

Four Ways High-Speed Data Transfer Can Transform Oil and Gas WHITE PAPER

Four Ways High-Speed Data Transfer Can Transform Oil and Gas WHITE PAPER Transform Oil and Gas WHITE PAPER TABLE OF CONTENTS Overview Four Ways to Accelerate the Acquisition of Remote Sensing Data Maximize HPC Utilization Simplify and Optimize Data Distribution Improve Business

More information

Campus Cyber Infrastructure (CI) Plan. 1. Coherent campus-wide strategy and approach to CI

Campus Cyber Infrastructure (CI) Plan. 1. Coherent campus-wide strategy and approach to CI Campus Cyber Infrastructure (CI) Plan. Coherent campus-wide strategy and approach to CI Tulane University's Technology Services will provide appropriate IT infrastructure and services to different researchers

More information

Integration of Network Performance Monitoring Data at FTS3

Integration of Network Performance Monitoring Data at FTS3 Integration of Network Performance Monitoring Data at FTS3 July-August 2013 Author: Rocío Rama Ballesteros Supervisor(s): Michail Salichos Alejandro Álvarez CERN openlab Summer Student Report 2013 Project

More information

The Quality of Internet Service: AT&T s Global IP Network Performance Measurements

The Quality of Internet Service: AT&T s Global IP Network Performance Measurements The Quality of Internet Service: AT&T s Global IP Network Performance Measurements In today's economy, corporations need to make the most of opportunities made possible by the Internet, while managing

More information

Network Considerations for IP Video

Network Considerations for IP Video Network Considerations for IP Video H.323 is an ITU standard for transmitting voice and video using Internet Protocol (IP). It differs from many other typical IP based applications in that it is a real-time

More information

Internet2 Network Services Community, Service and Business Overview

Internet2 Network Services Community, Service and Business Overview Internet2 Network Services Community, Service and Business Overview Executive Summary: For universities and colleges to succeed in the broader transformation of higher education, successful collaboration

More information

Enabling Modern Telecommunications Services via Internet Protocol and Satellite Technology Presented to PTC'04, Honolulu, Hawaii, USA

Enabling Modern Telecommunications Services via Internet Protocol and Satellite Technology Presented to PTC'04, Honolulu, Hawaii, USA CASE STUDY Enabling Modern Telecommunications Services via Internet Protocol and Satellite Technology Presented to PTC'04, Honolulu, Hawaii, USA Stephen Yablonski and Steven Spreizer Globecomm Systems,

More information

Agilent Technologies Performing Pre-VoIP Network Assessments. Application Note 1402

Agilent Technologies Performing Pre-VoIP Network Assessments. Application Note 1402 Agilent Technologies Performing Pre-VoIP Network Assessments Application Note 1402 Issues with VoIP Network Performance Voice is more than just an IP network application. It is a fundamental business and

More information

VIRTUALIZING THE EDGE

VIRTUALIZING THE EDGE VIRTUALIZING THE EDGE NFV adoption to transform telecommunications infrastructure Karthik Kailasam Director, Integrated Modular Solutions September 2015 Key Messages The transformation of telecom networks

More information

The Requirement for a New Type of Cloud Based CDN

The Requirement for a New Type of Cloud Based CDN The Requirement for a New Type of Cloud Based CDN Executive Summary The growing use of SaaS-based applications has highlighted some of the fundamental weaknesses of the Internet that significantly impact

More information

Three Key Design Considerations of IP Video Surveillance Systems

Three Key Design Considerations of IP Video Surveillance Systems Three Key Design Considerations of IP Video Surveillance Systems 2012 Moxa Inc. All rights reserved. Three Key Design Considerations of IP Video Surveillance Systems Copyright Notice 2012 Moxa Inc. All

More information

OBJECTIVE. National Knowledge Network (NKN) project is aimed at

OBJECTIVE. National Knowledge Network (NKN) project is aimed at OBJECTIVE NKN AIMS TO BRING TOGETHER ALL THE STAKEHOLDERS FROM SCIENCE, TECHNOLOGY, HIGHER EDUCATION, HEALTHCARE, AGRICULTURE AND GOVERNANCE TO A COMMON PLATFORM. NKN is a revolutionary step towards creating

More information

Submarine Networks in Asia, 2004-2013. Presentation at. February 2014

Submarine Networks in Asia, 2004-2013. Presentation at. February 2014 Submarine Networks in Asia, 2004-2013 Presentation at February 2014 Introductory Housekeeping TeleGeography focuses on international networks. Internet refers to public IP traffic. Bandwidth refers to

More information

The Importance of High Customer Experience

The Importance of High Customer Experience SoftLayer Investments Drive Growth and Improved Customer Experience A Neovise Vendor Perspective Report 2010 Neovise, LLC. All Rights Reserved. Executive Summary Hosting and datacenter services provider

More information

How To Create A Converged Network For Public Safety

How To Create A Converged Network For Public Safety IP/MPLS Whitepaper Benefits of Converged Networking in a Public Safety Environment Table of Contents Introduction... 2 Current Public Safety Network Landscape... 2 The Migration to All-IP... 3 MPLS Functionality

More information

Visibility in the Modern Data Center // Solution Overview

Visibility in the Modern Data Center // Solution Overview Introduction The past two decades have seen dramatic shifts in data center design. As application complexity grew, server sprawl pushed out the walls of the data center, expanding both the physical square

More information

Pacnet MPLS-Based IP VPN Keeping pace with your growth

Pacnet MPLS-Based IP VPN Keeping pace with your growth Products and Services PRIVATE NETWORKS Pacnet MPLS-Based IP VPN Keeping pace with your growth SCALABLE, FLEXIBLE, EXPANDING WITH YOUR BUSINESS Pacnet s IP VPN offers a unique proposition. With our own

More information

Glossary of Telco Terms

Glossary of Telco Terms Glossary of Telco Terms Access Generally refers to the connection between your business and the public phone network, or between your business and another dedicated location. A large portion of your business

More information

Performance Management for Next- Generation Networks

Performance Management for Next- Generation Networks Performance Management for Next- Generation Networks Definition Performance management for next-generation networks consists of two components. The first is a set of functions that evaluates and reports

More information

www.careercert.info Please purchase PDF Split-Merge on www.verypdf.com to remove this watermark.

www.careercert.info Please purchase PDF Split-Merge on www.verypdf.com to remove this watermark. 2007 Cisco Systems, Inc. All rights reserved. DESGN v2.0 3-11 Enterprise Campus and Data Center Design Review Analyze organizational requirements: Type of applications, traffic volume, and traffic pattern

More information

AKAMAI WHITE PAPER. Delivering Dynamic Web Content in Cloud Computing Applications: HTTP resource download performance modelling

AKAMAI WHITE PAPER. Delivering Dynamic Web Content in Cloud Computing Applications: HTTP resource download performance modelling AKAMAI WHITE PAPER Delivering Dynamic Web Content in Cloud Computing Applications: HTTP resource download performance modelling Delivering Dynamic Web Content in Cloud Computing Applications 1 Overview

More information