Trends in High Performance Computing and the Grid



Transcription:

Trends in High Performance Computing and the Grid
Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory

Technology Trends: Microprocessor Capacity
Gordon Moore (co-founder of Intel) predicted in 1965 that the transistor density of semiconductor chips would double roughly every 18 months: 2X transistors per chip every 1.5 years, now called Moore's Law. Microprocessors have become smaller, denser, and more powerful, and the trend is not limited to processors: bandwidth, storage, and other technologies follow it as well. Roughly 2X memory capacity and processor speed, and half the size, cost, and power, every 18 months.
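As a quick illustration of the doubling rule quoted above, the sketch below projects relative capacity under a 1.5-year doubling period; the starting value and time span are arbitrary choices for illustration, not data from the talk.

```c
/* Illustration of the doubling rule on the slide: capacity doubles
 * roughly every 18 months (1.5 years). Starting value and time span
 * are arbitrary examples. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double base = 1.0;            /* relative capacity at year 0 */
    double doubling_years = 1.5;  /* Moore's Law period used in the talk */
    for (int year = 0; year <= 15; year += 3) {
        double capacity = base * pow(2.0, year / doubling_years);
        printf("after %2d years: %.0fx\n", year, capacity);
    }
    return 0;
}
```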

[Chart: Moore's Law and the growth of supercomputer performance, from scalar machines (EDSAC 1, UNIVAC 1, IBM 7090, IBM 360/195, CDC 6600, CDC 7600) through vector and super-scalar systems (Cray 1, Cray X-MP, Cray 2, TMC CM-2) to super-scalar/vector/parallel machines (TMC CM-5, Cray T3D, ASCI Red, ASCI White Pacific, the Earth Simulator): roughly 1 Flop/s in 1941, 100 Flop/s in 1945, 1 KFlop/s in 1949, 10 KFlop/s in 1951, 100 KFlop/s in 1961, 1 MFlop/s in 1964, 10 MFlop/s in 1968, 100 MFlop/s in 1975, 1 GFlop/s in 1987, 10 GFlop/s in 1992, 100 GFlop/s in 1993, 1 TFlop/s in 1997, 10 TFlop/s in 2000, and 35 TFlop/s in 2003.]

The TOP500 (H. Meuer, H. Simon, E. Strohmaier, and J. Dongarra):
- A listing of the 500 most powerful computers in the world.
- Yardstick: Rmax from LINPACK MPP (Ax = b, dense problem; TPP performance).
- Updated twice a year: at SC'xy in the States in November and at the meeting in Mannheim, Germany in June.
- All data available from www.top500.org.
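The Rmax yardstick above is the sustained rate measured when LINPACK solves the dense system Ax = b; the conventional operation count for that solve is 2/3 n^3 + 2 n^2. The sketch below shows how a measured run time turns into an Rmax figure. The problem size and run time are made-up values, not measurements from any listed machine.

```c
/* How an Rmax-style number is obtained from a LINPACK run:
 * flops = 2/3*n^3 + 2*n^2 for solving the dense system Ax = b,
 * Rmax  = flops / elapsed_time.
 * The problem size and timing below are hypothetical, not measurements. */
#include <stdio.h>

int main(void) {
    double n = 100000.0;                 /* hypothetical matrix order */
    double seconds = 6.0 * 3600.0;       /* hypothetical run time: 6 hours */
    double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
    double rmax = flops / seconds;       /* sustained flop/s */
    printf("operations: %.3e  Rmax: %.3e flop/s (%.3f Tflop/s)\n",
           flops, rmax, rmax / 1e12);
    return 0;
}
```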

The Earth Simulator: A Tour de Force in Engineering
Homogeneous, centralized, proprietary, expensive! Target applications: CFD (weather, climate, earthquakes).
- 640 NEC SX-6 nodes (modified), 5120 CPUs with vector operations.
- Each CPU delivers 8 Gflop/s peak; 40 TFlop/s peak for the system.
- Roughly $1/2 billion for the machine and the building; a footprint of 4 tennis courts.
- 7 MWatts of power: at 10 cents/kWh that is about $16.8K/day, or roughly $6M/year (a worked check of this figure appears after the list below).
- Expected to stay on top of the TOP500 until a 60-100 TFlop/s ASCI machine arrives.
From the TOP500 (June 2003): the performance of the Earth Simulator roughly equals the sum of the next top 4 computers, and is about 10% of the performance of all the TOP500 machines combined.

June 2003 TOP500, top 10 (Rmax and Rpeak in Gflop/s):
 1. NEC Earth-Simulator, Earth Simulator Center (Yokohama), 2002: 5120 processors, Rmax 35860, Rpeak 40960
 2. Hewlett-Packard ASCI Q - AlphaServer SC ES45/1.25 GHz, Los Alamos National Laboratory (Los Alamos), 2002: 8192 processors, Rmax 13880, Rpeak 20480
 3. Linux NetworX/Quadrics MCR Linux Cluster Xeon 2.4 GHz - Quadrics, Lawrence Livermore National Laboratory (Livermore), 2002: 2304 processors, Rmax 7634, Rpeak 11060
 4. IBM ASCI White, SP Power3 375 MHz, Lawrence Livermore National Laboratory (Livermore), 2000: 8192 processors, Rmax 7304, Rpeak 12288
 5. IBM SP Power3 375 MHz 16-way, NERSC/LBNL (Berkeley), 2002: 6656 processors, Rmax 7304, Rpeak 9984
 6. IBM/Quadrics xSeries Cluster Xeon 2.4 GHz - Quadrics, Lawrence Livermore National Laboratory (Livermore), 2003: 1920 processors, Rmax 6586, Rpeak 9216
 7. Fujitsu PRIMEPOWER HPC2500 (1.3 GHz), National Aerospace Lab (Tokyo), 2002: 2304 processors, Rmax 5406, Rpeak 11980
 8. Hewlett-Packard rx2600 Itanium2 1 GHz Cluster - Quadrics, Pacific Northwest National Laboratory (Richland), 2003: 1540 processors, Rmax 4881, Rpeak 6160
 9. Hewlett-Packard AlphaServer SC ES45/1 GHz, Pittsburgh Supercomputing Center (Pittsburgh), 2001: 3016 processors, Rmax 4463, Rpeak 6032
 10. Hewlett-Packard AlphaServer SC ES45/1 GHz, Commissariat a l'Energie Atomique (CEA) (Bruyeres-le-Chatel), 2001: 2560 processors, Rmax 3980, Rpeak 5120
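A quick check of the Earth Simulator operating-cost figure quoted above (7 MW at 10 cents per kWh giving about $16.8K/day and roughly $6M/year), using only the numbers stated on the slide:

```c
/* Worked check of the power-cost figure on the Earth Simulator slide:
 * 7 MW of power billed at 10 cents per kWh. */
#include <stdio.h>

int main(void) {
    double megawatts = 7.0;
    double usd_per_kwh = 0.10;
    double kwh_per_day = megawatts * 1000.0 * 24.0;   /* 168,000 kWh per day */
    double usd_per_day = kwh_per_day * usd_per_kwh;   /* about $16,800 per day */
    double usd_per_year = usd_per_day * 365.0;        /* roughly $6.1M per year */
    printf("per day: $%.0f   per year: $%.2fM\n", usd_per_day, usd_per_year / 1e6);
    return 0;
}
```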

[Chart: TOP500 performance, June 1993 to June 2003, showing the SUM, N=1, and N=500 curves. In June 1993 the aggregate was 1.17 TF/s, #1 (the Fujitsu 'NWT' at NAL) was 59.7 GF/s, and #500 was 0.4 GF/s; by June 2003 the aggregate had reached 374 TF/s, #1 (the NEC Earth Simulator) 35.8 TF/s, and #500 244 GF/s. Intermediate #1 systems include the Intel ASCI Red at Sandia and the IBM ASCI White at LLNL; 'My Laptop' is plotted near the bottom for comparison.]

Virginia Tech 'Big Mac' G5 Cluster
- Apple G5 cluster: dual 2.0 GHz IBM PowerPC 970s per node.
- 16 Gflop/s per node: 2 CPUs * 2 FMA units/CPU * 2 GHz * 2 (mul-add) per cycle.
- 1100 nodes, i.e. 2200 processors; theoretical peak 17.6 Tflop/s.
- InfiniBand 4X primary fabric; Cisco Gigabit Ethernet secondary fabric.
- Linpack benchmark using 2112 processors: theoretical peak of 16.9 Tflop/s, achieved 9.555 Tflop/s.
- Could be #3 on the 11/03 TOP500.
- Cost is $5.2 million, which includes the system itself, memory, storage, and the communication fabrics.
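The per-node and system peak figures quoted above follow directly from the formula on the slide (2 CPUs x 2 FMA units/CPU x 2 GHz x 2 flops per fused multiply-add). The short check below reproduces them, including the 16.9 Tflop/s theoretical peak of the 2112-processor Linpack run.

```c
/* Reproducing the peak-performance arithmetic on the Virginia Tech slide. */
#include <stdio.h>

int main(void) {
    double cpus_per_node = 2, fma_units = 2, ghz = 2.0, flops_per_fma = 2;
    double gflops_per_node = cpus_per_node * fma_units * ghz * flops_per_fma;     /* 16 Gflop/s */
    double full_system = gflops_per_node * 1100 / 1000.0;                         /* 17.6 Tflop/s */
    double linpack_run = gflops_per_node / cpus_per_node * 2112 / 1000.0;         /* 16.9 Tflop/s */
    printf("per node: %.1f Gflop/s, 1100 nodes: %.1f Tflop/s, "
           "2112-CPU run: %.1f Tflop/s peak\n",
           gflops_per_node, full_system, linpack_run);
    return 0;
}
```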

Detail on the Virginia Tech Machine
- Dual PowerPC 970 at 2 GHz per node, with 4 GB DRAM and 160 GB serial ATA mass storage; 4.4 TB total main memory and 176 TB total mass storage.
- Primary communications backplane based on InfiniBand technology: each node can communicate with the network at 20 Gb/s, full duplex, with "ultra-low" latency; the switch consists of 24 96-port switches in a fat-tree topology.
- Secondary communications network: a Gigabit Ethernet management backplane based on 5 Cisco 4500 switches, each with 240 ports.
- Software: Mac OS X; MPICH-2; C and C++ compilers (IBM xlc and gcc 3.3); Fortran 95/90/77 compilers (IBM xlf and NAGWare).

Top 5 Machines for the Linpack Benchmark (Rmax and Rpeak in Gflop/s):
 1. Earth Simulator Computer (full precision): 5120 processors, Rmax 35860, Rpeak 40960
 2. ASCI Q AlphaServer EV-68 (1.25 GHz with Quadrics): 8160 processors, Rmax 13880, Rpeak 20480
 3. Apple G5, dual IBM PowerPC 970 (2 GHz, with InfiniBand 4X): 2112 processors, Rmax 9555, Rpeak 16896
 4. HP RX2600 Itanium 2 (1.5 GHz with Quadrics): 1936 processors, Rmax 8633, Rpeak 11616
 5. Linux NetworX (2.4 GHz Pentium 4 Xeon with Quadrics): 2304 processors, Rmax 7634, Rpeak 11059

[Chart: Performance Extrapolation, extending the TOP500 SUM, N=1, and N=500 trend lines from June 1993 out to June 2010. The extrapolation indicates when a TFlop/s system will be needed just to enter the list and when the first PFlop/s computer can be expected; ASCI Purple (12,544 processors) and Blue Gene (130,000 processors) are marked on the curves. A second version of the same chart adds 'My Laptop' for comparison.]

[Chart: To Exaflop/s (10^18 and beyond). The TOP500 SUM, N=1, and N=500 trend lines, with 'My Laptop' for scale, extrapolated from 1993 out toward 2020-2023, climbing from Gflop/s through Tflop/s and Pflop/s toward Eflop/s.]

ASCI Purple and IBM Blue Gene/L (announced 11/19/02): two machines for LLNL.
- IBM Blue Gene/L: 360 TFlop/s, 130,000 processors, Linux, FY 2005. A preliminary machine at IBM Research (BlueGene/L, PowerPC 440 at 500 MHz with custom processor/interconnect, 512 nodes / 1024 processors) achieves 1.435 Tflop/s (2.05 Tflop/s peak).
- Plus ASCI Purple: IBM Power5-based, about 12K processors, 100 TFlop/s.

Selected System Characteristics

                                 Earth Simulator (NEC)   Cray X1        ASCI Q (HP ES45)   MCR (dual Xeon)
 Year of introduction            2002                    2003           2003               2002
 Node architecture               vector SMP              vector SMP     Alpha micro SMP    Xeon micro SMP
 System topology                 NEC single-stage        2D torus       Quadrics QsNet     Quadrics QsNet
                                 crossbar                interconnect   fat-tree           fat-tree
 Number of nodes                 640                     32             2048               1152
 Processors per node             8                       4              4                  2
 Processors, system total        5120                    128            8192               2304
 Processor speed                 500 MHz                 800 MHz        1.25 GHz           2.4 GHz
 Peak, per processor             8 Gflop/s               12.8 Gflop/s   2.5 Gflop/s        4.8 Gflop/s
 Peak, per node                  64 Gflop/s              51.2 Gflop/s   10 Gflop/s         9.6 Gflop/s
 Peak, system total              40 Tflop/s              1.6 Tflop/s    30 Tflop/s         10.8 Tflop/s
 Memory, per node                16 GB                   8-64 GB        16 GB              16 GB
 Memory, per processor           2 GB                    2-16 GB        4 GB               2 GB
 Memory, system total            10.24 TB                -              48 TB              4.6 TB
 Memory bandwidth (peak), L1     N/A                     76.8 GB/s      20 GB/s            20 GB/s
 Memory bandwidth (peak), L2     N/A                     -              13 GB/s            1.5 GB/s
 Main memory bandwidth per proc  32 GB/s                 34.1 GB/s      2 GB/s             2 GB/s
 Inter-node MPI latency          8.6 µsec                8.6 µsec       5 µsec             4.75 µsec
 Inter-node MPI bandwidth        11.8 GB/s               11.9 GB/s      300 MB/s           315 MB/s
 Bytes/flop to main memory       4                       3              0.8                0.4
 Bytes/flop, interconnect        1.5                     1              0.12               0.07

HPCS Program Roadmap (FY02-FY10), capabilities and products: Phase I was an industry concept study with 5 companies at $10M each; Phase II is R&D with 3 companies at about $50M each; Phase III is full-scale development, with products commercially ready in the 2007 to 2010 timeframe ($100M?). Along the way are application analysis, performance assessment, industry metrics and benchmarks, requirements and metrics, concept reviews, a system design review, early software tools, academia research platforms, technology assessments, PDR and DDR reviews, early pilot platforms, research prototypes and pilot systems, Phase II and Phase III readiness reviews, industry procurements, and critical program milestones.

SETI@home: Global Distributed Computing
- Running on 500,000 PCs, about 1300 CPU years per day, and 1.3M CPU years so far.
- Sophisticated data and signal processing analysis.
- Distributes datasets from the Arecibo radio telescope.

SETI@home uses thousands of Internet-connected PCs to help in the search for extraterrestrial intelligence. When a participant's computer is idle or being wasted, the software downloads a roughly half-megabyte chunk of data for analysis; each client performs about 3 Tflop of computation on it over some 15 hours. The results of the analysis are sent back to the SETI team and combined with those of thousands of other participants. It is the largest distributed computation project in existence, averaging 55 Tflop/s.

Google query attributes
- 150M queries/day (2000 per second), from 100 countries, against 3B documents in the index.
- Data centers: 15,000 Linux systems in 6 data centers, roughly 15 TFlop/s and 1000 TB of total capability; 40-80 1U/2U servers per cabinet, with 100 Mb Ethernet switches per cabinet and Gigabit Ethernet uplinks.
- Growth from 4,000 systems (June 2000), which handled 18M queries/day then.
- Performance and operation are kept simple: failed commands are simply reissued to new servers, and there is no performance debugging because problems are not reproducible.
Source: Monika Henzinger (Google) and Cleve Moler

How Google Works ("You have to think big")
- The number of inlinks to a web page is a sign of the importance of that page.
- Offline, generate an incidence matrix of the links to and from web pages. There is a row/column for each web page, so the matrix is of order 3x10^9; forward links are referred to in the rows, back links in the columns.
- Form the transition probability matrix of the Markov chain. The matrix is not sparse, but it is a rank-one modification of a sparse matrix.
- Compute the eigenvector corresponding to the largest eigenvalue, which is 1: solve Ax = x, for instance with the power method (x = initial guess; iterate x <- Ax). This is an eigenvalue problem with n = 3x10^9 (see Cleve's Corner at MathWorks).
- Each component of the vector x corresponds to a web page and represents the weight (importance) of that page; this is the basis for PageRank.
- Also create an inverted index of the web: for each word, the web pages that contain that word.
- When a query (a set of words) comes in, go to the inverted index, get the corresponding web pages for the query, rank the resulting pages by their PageRank, and return pointers to those pages in that order.
Source: Monika Henzinger (Google) and Cleve Moler
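The power method described above can be illustrated on a toy link graph. The 4-page transition matrix below is a made-up example (real Google-scale matrices, of order about 3x10^9, are never formed densely; they are kept as a sparse matrix plus a rank-one modification), but the iteration x <- Ax is exactly the one sketched on the slide, converging to the eigenvector with eigenvalue 1 whose entries give the page weights.

```c
/* Toy power iteration of the kind described on the slide: repeatedly apply
 * the column-stochastic transition matrix of a tiny 4-page link graph
 * (a made-up example) until x converges to the dominant eigenvector
 * (eigenvalue 1), whose entries are the page weights. */
#include <stdio.h>

#define N 4

int main(void) {
    /* A[i][j] = probability of following a link from page j to page i */
    double A[N][N] = {
        {0.0,     0.5, 0.0, 0.0},
        {1.0/3.0, 0.0, 0.0, 0.5},
        {1.0/3.0, 0.0, 0.0, 0.5},
        {1.0/3.0, 0.5, 1.0, 0.0},
    };
    double x[N] = {0.25, 0.25, 0.25, 0.25}, y[N];

    for (int iter = 0; iter < 100; iter++) {              /* iterate x <- Ax */
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            y[i] = 0.0;
            for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
            sum += y[i];
        }
        for (int i = 0; i < N; i++) x[i] = y[i] / sum;    /* keep x a probability vector */
    }
    for (int i = 0; i < N; i++)
        printf("page %d rank weight: %.3f\n", i, x[i]);
    return 0;
}
```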

Computational Science and the Grid
Today's computational and information infrastructure must address both science and technology trends:
- Proliferation of resources: everyone has computers, with multiple IP addresses per person.
- Increased demand and immense amounts of data.
- Applications are increasingly multi-scale, multi-disciplinary, and information-driven.
- Coordination/collaboration is a default mode of interaction.
- The Internet: globalization, virtualization, the open source movement, the Global Grid.

Grid computing is about resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations: data acquisition, imaging instruments, computational resources, large-scale databases, and advanced visualization and analysis (Telescience Grid, courtesy of Mark Ellisman).

The Grid [figure]

Benefits of the Grid
1. Higher utilization of distributed resources, e.g., supercomputing center Grids and nationwide virtual supercomputers; no increase in the overall pie.
2. Higher reliability and upgradability of compute resources, with the same objective as the Internet (or the ARPANET).
3. "Collaboratory" science: tight collaboration of virtual organizations over the network, e.g., the EU DataGrid with 3000 high-energy physicists worldwide.
4. Tight integration of data, sensors, and human resources, e.g., the VLBI (astronomy) project.
5. Ultra-scaling of resources: distributed placement of (otherwise oversized) resources.
6. Exploitation of idle cycles and storage on non-dedicated, commodity resources: peer-to-peer (P2P) and voluntary computing.

TeraGrid (2003): a prototype for a national cyberinfrastructure, with sites connected by links of 10 to 40 Gb/s.

Atmospheric Sciences Grid: real-time data and data fusion feed a general circulation model and a regional weather model, which in turn drive a photo-chemical pollution model, a particle dispersion model, and a bushfire model, drawing on topography, emissions inventory, and vegetation datasets.

Standard Implementation (Change Models): in the coupled model chain, each model (general circulation, regional weather, photo-chemical pollution, particle dispersion, bushfire) runs internally with MPI, while the coupling and data movement between components and datasets (real-time data, data fusion, topography, emissions inventory, vegetation) use Grid services such as GASS, GridFTP, and GRC.

Some Grid Requirements, from the user's perspective:
- Single sign-on: authentication to any Grid resource authenticates for all others.
- Single compute space: one scheduler for all Grid resources.
- Single data space: files and data can be addressed from any Grid resource.
- Single development environment: Grid tools and libraries that work on all Grid resources.

NetSolve: a Grid-Enabled Server
NetSolve is an example of a Grid-based hardware/software/data server. It is based on a remote procedure call model, but adds resource discovery, dynamic problem-solving capabilities, load balancing, fault tolerance, asynchronicity, and security. Ease of use is paramount: it is about providing transparent access to resources.

NetSolve: The Big Picture
A client (Matlab, Octave, Scilab, Mathematica, C, Fortran, or Excel) sends its request to the NetSolve agent(s), which return a schedule; the request is then serviced by one of the servers (S1-S4), with data staged through an IBP depot. No knowledge of the grid is required; the interaction is RPC-like.

In the next steps of the same picture, the client ships its input data (A, B) toward the chosen server via the IBP depot and receives a handle back from the agent; as before, no knowledge of the grid is required and the interaction is RPC-like.

In the final step, the agent schedules the request onto a particular server (here S2): the client sends the operation and data (OP, A, B) along with the handle, the server executes Op(C, A, B), and the answer C is returned to the client. No knowledge of the grid is required; the interaction is RPC-like.

With separate hardware and software servers, the flow within the trusted environment is: the user's request goes to the agent, the agent selects a hardware server (HWS) and a software server (SWS), the client calls NetSolve( ), the user's request is passed to the hardware server, the software description is passed to the repository and the software is returned, and the results come back to the user.

Hiding the Parallel Processing
The user may be unaware of the parallel processing: NetSolve takes care of starting the message-passing system, distributing the data, and returning the results (using the LFC software).

NetSolve and SCIRun
The SCIRun torso defibrillator application (Chris Johnson, University of Utah).

Basic Usage Scenarios
- Grid-based numerical library routines: the user does not need the software library (LAPACK, SuperLU, ScaLAPACK, PETSc, AZTEC, ARPACK) on their own machine.
- Task farming applications: pleasantly parallel execution, e.g., parameter studies.
- Remote application execution: complete applications, with the user supplying the input parameters and receiving the output.
This is "blue collar" Grid-based computing: it does not require deep knowledge of network programming, the level of expressiveness is right for many users, and users can set things up themselves with no superuser access required. It is in use today with up to 200 servers in 9 countries, and it can plug into Globus, Condor, NINF, and others.

University of Tennessee Deployment: the Scalable Intracampus Research Grid (SInRG)
The Knoxville campus has two DS-3 commodity Internet connections and one DS-3 Internet2/Abilene connection. An OC-3 ATM link routes IP traffic between the Knoxville campus, the National Transportation Research Center, and Oak Ridge National Laboratory. UT participates in several national networking initiatives, including Internet2 (I2), Abilene, the federal Next Generation Internet (NGI) initiative, the Southern Universities Research Association (SURA) Regional Information Infrastructure (RII), and Southern Crossroads (SoX). The UT campus network consists of a meshed ATM OC-12 being migrated over to switched Gigabit by early 2002. Ownership is federated across CS, Chemical Engineering, the Medical School, Computational Ecology, and Electrical Engineering. SInRG hosts real applications, middleware development, and logistical networking.

New Features in NetSolve 2.0 (new version available)
- A new, easy-to-use interface definition language and a simplified PDF (problem description file).
- Dynamic servers: problems can be added or deleted without restarting servers.
- New bindings for GridRPC, Octave, and Condor-G.
- Separate hardware/software servers.
- Support for Mac OS X and Windows 2000/XP.
- Web-based monitoring.
- Users can specify a server and can abort an execution.

GridRPC: Introduction
GridRPC attempts to provide a simple API upon which higher-level services can be implemented, with a low burden on a programmer transitioning code to the Grid. The goal is a standardized, portable, and simple programming interface for remote procedure call that unifies client access to existing grid computing systems such as NetSolve and Ninf-G. The work is moving toward standardization through a GGF working group: initially standardize the API and deal with the protocol later, standardize only a minimal set of features so that higher-level features can be built on top, provide several reference implementations, and do not attempt to dictate any implementation details.
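A minimal client sketch in the spirit of the GridRPC API being standardized through the GGF working group is shown below. The function names (grpc_initialize, grpc_function_handle_default, grpc_call, grpc_call_async, grpc_wait, grpc_finalize) follow the GridRPC API drafts, but the exact signatures, the configuration file, and the remote service name "ddot" are assumptions made here for illustration, not a definitive rendering of the NetSolve or Ninf-G bindings.

```c
/* A minimal GridRPC-style client sketch. Function names follow the GridRPC
 * API drafts discussed in the GGF working group; exact signatures, error
 * handling, the config file, and the remote service name "ddot" are
 * assumptions for illustration only. */
#include <stdio.h>
#include "grpc.h"   /* assumed GridRPC client header */

int main(void) {
    grpc_function_handle_t h;
    grpc_sessionid_t sid;
    double x[3] = {1, 2, 3}, y[3] = {4, 5, 6}, result = 0.0;

    grpc_initialize("client.conf");              /* contact the agent/registry */
    grpc_function_handle_default(&h, "ddot");    /* let the agent pick a server */

    /* Blocking call: behaves like a local procedure call, executed remotely. */
    grpc_call(&h, 3, x, y, &result);
    printf("ddot = %f\n", result);

    /* Non-blocking call: useful for the task-farming scenarios in the talk. */
    grpc_call_async(&h, &sid, 3, x, y, &result);
    /* ... the client can do other work or issue more calls here ... */
    grpc_wait(sid);                              /* collect the result */

    grpc_function_handle_destruct(&h);
    grpc_finalize();
    return 0;
}
```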

GridRPC: Features
- Medium- to coarse-grained calls (due to communication overhead).
- Asynchronous task-parallel programming.
- Dynamic resource discovery and scheduling.
Contributors: Jack Dongarra and Keith Seymour (University of Tennessee); Hidemoto Nakada (National Institute of Advanced Industrial Science and Technology (AIST) and Tokyo Institute of Technology); Satoshi Matsuoka (Tokyo Institute of Technology and National Institute of Informatics); Craig Lee (The Aerospace Corporation); Henri Casanova (San Diego Supercomputer Center, UCSD).

NetSolve: Things Not Touched On
- Integration with other NMI tools: Globus, Condor, the Network Weather Service.
- Security: using Kerberos V5 for authentication.
- Separate server characteristics: hardware and software servers.
- Monitoring the NetSolve network: tracking and monitoring usage.
- Fault tolerance: local / global.
- Configurations and the dynamic nature of servers.
- Automated adaptive algorithm selection: dynamically determine the best algorithm based on system status and the nature of the user's problem.
- NetSolve evolving into GridRPC, being worked on under the GGF jointly with NINF.

The Computing Continuum
From loosely coupled to tightly coupled: special purpose systems (SETI / Google), Grids, clusters, and highly parallel machines. Each strikes a different balance of computation/communication coupling, with implications for execution efficiency; applications have diverse needs, and computing is only one part of the story!

Grids vs. Capability vs. Cluster Computing
This is not an either/or question: each addresses different needs, and each is part of an integrated solution.
- Grid strengths: coupling necessarily distributed resources (instruments, software, hardware, archives, and people) and eliminating time and space barriers through remote resource access and capacity computing. Grids are not a cheap substitute for capability HPC.
- Capability computing strengths: supporting foundational computations (terascale and petascale, nation-scale problems) and engaging tightly coupled computations and teams.
- Clusters: a low-cost group solution, with potential hidden costs.
The key is easy access to resources in a transparent way.

Collaborators / Support
- TOP500: H. Meuer (Mannheim U.), H. Simon (NERSC), E. Strohmaier (NERSC)
- NetSolve: Sudesh Agrawal (UTK), Henri Casanova (UCSD), Kiran Sagi (UTK), Keith Seymour (UTK), Sathish Vadhiyar (UTK)