Next-Generation Networking for Science
1 Next-Generation Networking for Science. ASCAC Presentation, March 23, 2011. Program Managers: Richard Carlson, Thomas Ndousse.
2 Presentation Outline: Program Mission, Program Elements, Science Drivers, Budget, Previous Accomplishments, Current Portfolio, Program Highlights, Future Directions, Conclusions.
3 Next-Generation Networks for Science. Mission: the goal of the program is to research, develop, test, and deploy advanced network technologies critical to the networking capabilities unique to DOE's science mission. The program's portfolio consists of two main elements: High-Performance Networks and High-Performance Middleware. [Image: DOE ESnet]
4 Program Elements. High-Performance Networks: research and development of advanced technologies, including rapid provisioning of hybrid packet/circuit-switched networks, ultra-high-speed transport protocols, high-speed data distribution tools and services, secure and scalable technologies for bandwidth and circuit reservation and scheduling, and secure and scalable tools and services for monitoring and managing federated networks. High-Performance Middleware: research and development to support distributed high-end science applications and related distributed scientific research activities, including advanced middleware to enable large-scale scientific collaborations; secure and scalable software stacks to manage and distribute massive science data; software and services to seamlessly integrate science workflows with experiments and network infrastructures; and cyber security systems and services to enable large-scale national and international scientific collaborations.
5 DOE's ESnet Capabilities Projection
6 DOE Distributed Science Complex
7 Data-Intensive Science: HPC-to-Networking Performance Gap. Computing performance (flops) doubling time = 1.5 years; end-to-end network throughput (bits/sec) doubling time = 5 years, leaving a widening performance gap. [Chart: log-scale performance over time, with network milestones from DS-0, DS-1, and DS-3 through OC-12, OC-48, 1 GigE, OC-192, 10 GigE, and 40 GigE.]
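To make the doubling-time gap concrete, a short illustrative calculation (not from the slide) shows how far the two curves diverge over a decade, using only the stated doubling times:

```python
# Illustrative only: compound growth implied by the doubling times on the slide.
# Computing performance doubles every 1.5 years; end-to-end network
# throughput doubles every 5 years.

def growth_factor(years: float, doubling_time: float) -> float:
    """Multiplicative growth after `years` given a fixed doubling time."""
    return 2 ** (years / doubling_time)

horizon = 10  # years
compute = growth_factor(horizon, 1.5)   # ~100x
network = growth_factor(horizon, 5.0)   # ~4x

print(f"Compute grows {compute:.0f}x, network grows {network:.0f}x "
      f"over {horizon} years -> gap widens by ~{compute / network:.0f}x")
```

Over ten years the gap between computing capability and end-to-end network throughput widens by roughly a factor of 25, which is the motivation behind the slide.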
8 Next-Gen Funding Profile. FY11, $165M: SciDAC 33%, Math 29%, CS 29%, Next-Gen 9%. FY11 Next-Gen, $14.3M: Overhead 27%, Reserves 26%, Multi/Single Investigator 26%, Multi-Institution 21%.
9 Historical Perspective of Networking R&D at DOE. 1989: Van Jacobson's congestion control algorithm at LBL helps avert an imminent Internet congestion collapse. The search for efficient scientific collaboration environment services at ANL leads to grid computing (Globus & GridFTP). ORNL prototypes a working model of on-demand and dynamic circuit services over the Ultra-Science Network testbed. ESnet extends on-demand bandwidth services to provide production-grade services to scientists. DOE/ESnet initiates the deployment of a 100 Gbps end-to-end demonstration network prototype.
10 Next-Generation Research Contributions to Science and the Internet. GridFTP: the GridFTP data transfer protocol developed by ASCR network researchers delivers 100x the throughput of traditional Internet-based FTP. GridFTP is now the de facto data transfer protocol used in scientific and commercial grid computing worldwide, and it is the primary protocol used to distribute the massive data generated by the LHC experiment and the global climate modeling communities. [Image caption: Using GridFTP to move mountains of data.] TCP/IP Congestion Control: Van Jacobson, working at Lawrence Berkeley National Laboratory, developed the algorithm that solved the congestion problem of the TCP protocol used in over 90% of Internet hosts today. This algorithm is credited with enabling the Internet to expand in size and support increasing speed demands, and it helped the Internet survive a major traffic surge without collapsing.
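The core idea Jacobson introduced is slow start plus additive-increase/multiplicative-decrease (AIMD) of a congestion window. The sketch below is a heavily simplified illustration in Python, not kernel or ESnet code; the function and variable names are invented for the example.

```python
# Illustrative sketch of AIMD congestion control (Jacobson-style), heavily
# simplified: the sender grows its congestion window cautiously and backs
# off sharply when the network signals congestion (a lost packet).

def update_cwnd(cwnd: float, ssthresh: float, loss: bool, mss: float = 1.0):
    """Return (new_cwnd, new_ssthresh) after one round trip."""
    if loss:
        # Multiplicative decrease: congestion detected, halve the threshold
        # and restart from a small window.
        ssthresh = max(cwnd / 2, 2 * mss)
        cwnd = mss
    elif cwnd < ssthresh:
        # Slow start: exponential growth until the threshold is reached.
        cwnd = cwnd * 2
    else:
        # Congestion avoidance: additive increase of one segment per RTT.
        cwnd = cwnd + mss
    return cwnd, ssthresh

# Toy trace: grow for 8 RTTs, then a single loss event.
cwnd, ssthresh = 1.0, 16.0
for rtt in range(10):
    cwnd, ssthresh = update_cwnd(cwnd, ssthresh, loss=(rtt == 8))
    print(f"RTT {rtt}: cwnd={cwnd:.1f}, ssthresh={ssthresh:.1f}")
```

The key property is that senders probe for capacity slowly and retreat quickly, which is what allowed many competing flows to share links without driving the network into collapse.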
11 Next-Generation Networking for Sustained Large-Scale, Global Science Leadership. PEGrid: ASCR-funded Globus and OSG middleware provides the fundamental building blocks for many grid infrastructure projects. PEGrid, a $45M collaboration of Texas universities, software companies, and oil companies, leveraged these services to create an environment where students can be trained to use multi-dimensional supercomputer simulation models.
12 Current Portfolio. Affiliation of our researchers: National Labs 39%, Universities 42%, Industry 19%. All awards made through open solicitations. Portfolio distribution: long-term R&D 20%, short-term 60%, testbed activities 20%. Project size: 3 large multi-institution, 15 single investigator.
13 Portfolio Breakdown. Data movement: Globus Online. Advanced network provisioning: OSCARS, TeraPaths, Lambda Station. perfSONAR-based network performance monitoring: IP-based network infrastructure, dynamic circuit infrastructure. Large-scale scientific collaboration: Earth System Grid, Open Science Grid. Advanced network concepts: hybrid optical networking, network virtualization.
14 CEDPS - SaaS Data Management. Transfer rates of 4.1 Gbps and 1.6 Gbps; moving 322 terabytes of data from ANL to each remote site: 7.34 days to NERSC, days to ORNL.
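As a sanity check on these figures (an illustrative calculation, not from the slide), transfer time is simply volume divided by sustained throughput; the 4.1 Gbps rate is the one consistent with the 7.34-day NERSC figure.

```python
# Illustrative check of the transfer times implied by the slide's figures.
# Assumes 1 TB = 10**12 bytes and a perfectly sustained transfer rate.

def transfer_days(terabytes: float, gbps: float) -> float:
    """Days needed to move `terabytes` of data at `gbps` sustained throughput."""
    bits = terabytes * 1e12 * 8          # total bits to move
    seconds = bits / (gbps * 1e9)        # seconds at the given rate
    return seconds / 86400               # convert to days

print(transfer_days(322, 4.1))  # ~7.3 days, matching the NERSC figure
print(transfer_days(322, 1.6))  # ~18.6 days at the slower 1.6 Gbps rate
```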
15 Open Science Grid The Open Science Grid (OSG) promotes scientific discovery and collaboration in data-intensive research by providing a set of services that supports computation at multiple scales. Strong growth by several international physics communities highlights how OSG resources are used to convert raw instrument data into meaningful information that leads to scientific discoveries.
16 Earth System Grid ESG serves climate data to the world: The past successes of ESG enabled the CMIP3 to be wildly successful and a large contributor towards the success of the IPCC 4 th Assessment Report. It is hard for me to imagine an effort being more successful given where the field was at that time. The Earth System Grid is a crucial component towards the success of the data serving for CMIP5 and AR5. Ron Stouffer, NOAA Scientist & IPCC Author A major advance of this 4 th assessment of climate change projections compared with the 3 rd Assessment Report is the large number of simulations available from a broader range of models. Taken together with additional information from observations, these provided a quantitative basis for estimating likelihoods fro many aspects of future climate change. p.12 IPCC 4 th Assessment Report "For leadership. which led to a new era in climate system analysis and understanding." Award to PCMDI by American Meteorological Society, Jan The Earth System Grid is an essential component to tapping into the wealth of information about climate and climate models embedded within the CMIP archives. Gavin Schmidt, NASA Scientist 16
17 E2E On-Demand Bandwidth and Circuits
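The slide title refers to the on-demand bandwidth and circuit services (OSCARS and related tools) named elsewhere in the deck. Purely as a hypothetical illustration of what an end-to-end reservation request carries (endpoints, guaranteed bandwidth, and a time window), and not the actual OSCARS API:

```python
# Hypothetical illustration of the information an on-demand circuit
# reservation carries; this is NOT the OSCARS interface, just a sketch
# of the end-to-end reservation concept described in the presentation.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitReservation:
    src_endpoint: str      # source site/port identifier (made up here)
    dst_endpoint: str      # destination site/port identifier (made up here)
    bandwidth_mbps: int    # guaranteed bandwidth for the circuit
    start: datetime        # when the circuit should come up
    end: datetime          # when it can be torn down

# Example: reserve 10 Gbps between two invented site identifiers
# for a six-hour data transfer window starting an hour from now.
start = datetime.utcnow() + timedelta(hours=1)
request = CircuitReservation(
    src_endpoint="anl-edge-1",
    dst_endpoint="nersc-edge-1",
    bandwidth_mbps=10_000,
    start=start,
    end=start + timedelta(hours=6),
)
print(request)
```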
18 Advanced Diagnostics and Analysis. Data collection is based on the perfSONAR measurement infrastructure; the research is focused on the data analysis task. [Diagram: multi-link topology with an example inference annotation: since k = 2 and path P(1,2) already has 2 blue links, that explains why P(1,2) is red, so link (2,3) is colored green.]
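The diagram annotation hints at the analysis style: end-to-end path measurements (e.g., from perfSONAR) are combined to localize which link is the likely cause of poor performance. A minimal sketch of that inference idea follows; the topology, data, and counting heuristic are invented for illustration and are not the funded project's actual algorithm.

```python
# Minimal sketch of localizing a bad link from path-level measurements.
# The data and the heuristic are invented for illustration; the funded
# research uses perfSONAR measurements and more sophisticated analysis.

# Each measured path is a set of links plus a verdict from end-to-end tests.
paths = {
    "S1-S2": {"links": {"S1-R1", "R1-R2", "R2-S2"}, "bad": True},
    "S1-S3": {"links": {"S1-R1", "R1-R3", "R3-S3"}, "bad": False},
    "S2-S3": {"links": {"R2-S2", "R2-R3", "R3-S3"}, "bad": False},
}

# Links on any healthy path are presumed good; the suspects are the links
# that appear only on bad paths.
good_links = set()
for p in paths.values():
    if not p["bad"]:
        good_links |= p["links"]

suspects = set()
for p in paths.values():
    if p["bad"]:
        suspects |= p["links"] - good_links

print("Suspect links:", suspects)   # {'R1-R2'} for this toy topology
```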
19 ARRA Activities. Advanced Network Initiative: nation-wide 100 Gbps network demonstration prototype; 100G FTP; 100 Gbps network interface; 100 Gbps experimental network facility; scaling ESG and OSG to 100 Gbps. Magellan cloud computing: exploring the capabilities of cloud computing for scientific applications (NERSC, ANL).
20 DOE ANI Testbed. Notes: App Hosts can be used for researcher applications, control-plane software, etc., and can support up to 8 simultaneous VMs. I/O testers are capable of 15 Gbps disk-to-disk or 35 Gbps memory-to-memory. Other infrastructure not shown: VPN server, file server (NFS, WebDAV, SVN, etc.).
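For context on how memory-to-memory figures like these are measured, throughput tests typically push memory-resident buffers over TCP between two hosts; production testbeds use tools such as iperf or nuttcp with parallel streams and tuned windows. Below is a minimal, self-contained Python sketch of the idea; the port, buffer size, and usage are illustrative assumptions, not the ANI testbed's actual tooling.

```python
# Illustrative memory-to-memory throughput test between two hosts using
# plain TCP sockets. Real testbed measurements use dedicated tools with
# parallel streams and tuned buffers; this is only a sketch.
import socket, sys, time

PORT = 5001
CHUNK = b"\0" * (4 * 1024 * 1024)   # 4 MiB buffer sent repeatedly from memory

def server() -> None:
    """Receive data for as long as the client sends, then report throughput."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while data := conn.recv(1 << 20):
            total += len(data)
        elapsed = time.time() - start
        print(f"{total * 8 / elapsed / 1e9:.2f} Gbps received")

def client(host: str, seconds: int = 10) -> None:
    """Send memory-resident data to the server for `seconds` seconds."""
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.time() + seconds
        while time.time() < deadline:
            conn.sendall(CHUNK)

if __name__ == "__main__":
    # Usage (hypothetical file name): run `python memtest.py server` on one
    # host, then `python memtest.py client <server-host>` on the other.
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```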
21 ANI Funded Magellan Project
22 Themes and Portfolio Directions. Challenges: effectively identifying performance bottlenecks; routine movement of terabyte datasets. Network/Middleware Core Research: creating hybrid networks; managing large collaboration space; multi-layer hybrid network control systems; understanding complex network infrastructures; extreme collaborations with 100K+ participants; middleware libraries and APIs for large systems; massively parallel data streams; massive numbers of independent collaborations; multi-domain monitoring and measurement systems; risk-informed decision-making through modeling and simulation; grid infrastructures and data management services. [Chart: ESnet traffic projected to grow from 20 PB through 100 PB, 1 EB, and 10 EB to 50 EB as ESnet backbone capacity grows from 100 Gbps through 400 Gbps, 1 Tbps, and 4 Tbps to 10 Tbps.] Research Activities: fast data movement service; 100 Gbps NICs; application performance analysis service; science-driven on-demand circuits; 100 GE LAN/MAN/WAN integration; comprehensive data management service; computational science for ASCR; radical new network architectures/protocols; comprehensive scientific workflow services; scalable network middleware architectures, protocols, and services; federated scientific collaborations; emergent area research.
23 Challenges for Next-Gen Program. Develop a fundamental understanding of how DOE scientists use networks and how those networks behave. Provide scientists with advanced technologies that simplify access to experimental facilities, supercomputers, and scientific data. Provide dynamic and hybrid networking capabilities to support diverse types of high-end science applications at scale.
24 Future Planning Activities. A series of targeted workshops to identify significant research challenges in networks and collaborations (Terabit Networks Workshop, Feb 16-17, 2011). Solicitations will be released as funding becomes available, tentatively: FY11 focus on terabit network issues; FY12 focus on data issues in the DOE complex.
25 Terabit Networks Workshop. Workshop objective: to identify the major research challenges in developing, deploying, and operating federated terabit networks to support extreme-scale science activities. Participants from industry, academia, and national laboratories (50 attendees). Major technical areas of discussion included: scaling network architectures and protocols by several orders of magnitude over today's network performance; terabit LANs, host systems, and storage; advanced traffic engineering tools and services.
26 Terabit Networks Workshop. 3 breakout sessions: advanced user-level network-aware services; terabit backbone networking challenges; terabit end systems (LANs, storage, file, and host systems); plus exascale security considerations (virtual session only). 1.5 days of in-depth discussion by each breakout group; numerous challenges identified by each group; report to be issued shortly.
27 FY10 FOA: High-Capacity Optical Networking and Deeply Integrated Middleware Services. FOA topics: 1. Hybrid packet/circuit-switched networks; 2. Multi-layer multi-domain dynamic provisioning; 3. 100 GE system-level network components and services; 4. Multi-layer multi-domain network measurement and monitoring. FOA areas: a) High-capacity optical networks; b) Deeply integrated middleware.
28 Submissions/Budget. Total submissions: 31 (high-capacity networks: 16; deeply integrated middleware: 15). Budget total request: $31,. Recommended budget: a) $900k/yr (6 labs); b) $690k/yr (2 universities).
29 Summary. Research conducted by this program has enabled ESnet to deliver outstanding performance to DOE scientists (winning 2 industry awards). The on-demand bandwidth (OSCARS) service is used daily by ESnet, Internet2, and other R&E networks. Large, globally distributed science communities (OSG, ESG) directly benefit from ASCR research. ARRA investment will have a direct impact on current and future network infrastructure.
30 Conclusions. Focus is on E2E performance in large distributed scientific computing environments. Integrated approach combining network, middleware, and collaboratory activities. Interact with ESnet operations staff to bridge the "valley of death" issue. Work with the community to identify new challenges and fundamental research topics that will align the program with ASCR's strategic objectives and DOE's science mission.
31 Supplemental slides
32 Portfolio Breakdown. Large multi-institution collaborations: Center for Enabling Distributed Petascale Science; Earth System Grid; Open Science Grid. Multi-investigator projects: Network Weather and Performance Services (eCenter); Virtualized Network Control; Integrating Storage Management with Dynamic Network Provisioning for Automated Data Transfers; End Site Control Plane Subsystem (ESCPS); Collaborative DOE Enterprise Network Monitoring Deployment.
33 Portfolio Breakdown. Single-investigator projects: Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures to Better Serve Network Control and Management; Towards a Scalable and Adaptive Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments; Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments; End Site Control Plane Subsystem (ESCPS); Resource Organization in Hybrid Core Networks with 10 G Systems; Data Network Weather Service Reporting; Dynamic Provisioning for Terascale Science Applications Using Hybrid Circuit/Packet Technologies and 100G Transmission Systems; Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications; Assured Resource Sharing in Ad-Hoc Collaboration; Orchestrating Distributed Resource Ensembles for Petascale Science; Detection, Localization and Diagnosis of Performance Problems Using perfSONAR; COMMON: Coordinated Multi-Layer Multi-Domain Optical Network Framework for Large-Scale Science Applications; VNOD: Virtual Network On-Demand for Distributed Scientific Applications.
34 American Recovery and Reinvestment Act. Magellan: scientific cloud computing research. ANI projects: ESnet upgrade and backbone infrastructure; testbed infrastructure; Climate 100: Scaling the Earth System Grid to 100 Gbps Networks; End-System Network Interface Controller for 100 Gb/s Wide Area Networks; An Advanced Network and Distributed Storage Laboratory for Data Intensive Science; 100G FTP: An Ultra-High Speed Data Transfer Service Over Next Generation 100 Gigabit Per Second Network.