The ENEA-EGEE site: Access to non-standard platforms
1 V INFNGrid Workshop, Padova, Italy, December
The ENEA-EGEE site: Access to non-standard platforms
C. Sciò**, G. Bracco, P. D'Angelo, L. Giammarino*, S. Migliori, A. Quintiliani, F. Simoni, S. Podda
ENEA INFO, C.R. ENEA Frascati, Via E. Fermi 45, Frascati (Roma), Italy
(*) CASPUR, Roma, Italy; (**) Esse3Esse, Roma, Italy
2 Outline
- ENEA-GRID
- Implementation of the ENEA-INFO EGEE site: access to AIX resources
- Middleware integration: gssklog and lcmaps
- Status and discussion
- Documentation
- Work in progress...
3 ENEA [Italian National Agency for New Technologies, Energy and Environment]
12 research sites and a Central Computer and Network Service (ENEA-INFO) with 6 computer centres managing multi-platform resources for serial & parallel computation and graphical post-processing.
4 ENEA GRID
Mission [started 1999]:
- provide a unified user environment and a homogeneous access method for all ENEA researchers, irrespective of their location
- optimize the utilization of the available resources
GRID functionalities (unique authentication, authorization, resource access and resource discovery) are provided using mature, multi-platform components:
- Distributed file system: OpenAFS
- Resource manager: LSF Multicluster
- Unified user interface: Java & Citrix technologies
These components constitute the ENEA-GRID middleware.
5 ENEA EGEE site implementation (1)
ENEA has implemented an EGEE site as a funded partner in the project.
Our implementation requirements:
- provide access to the main ENEA-GRID resources
- EGEE users => mapped to some standard ENEA-GRID AFS users
- access to the AIX platform => NO middleware on the Worker Nodes
- OpenAFS: minimize local installation
- LSF resources must be used whenever possible:
  - batch submission: bsub
  - information: lsload, bhosts, lshosts, bqueues, ...
  - prompt job execution: lsrun
- batch queues dedicated to EGEE GRID users
(An illustrative sketch of these LSF commands follows.)
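A minimal sketch of the LSF commands listed above; the queue name egee_queue, the host name, the job script and the output file are hypothetical:

  # batch submission to a hypothetical EGEE-dedicated queue
  bsub -q egee_queue -o job.out ./my_job.sh

  # information commands
  lsload              # current load on the hosts
  bhosts              # batch host status
  lshosts             # static host resources (type, model, cpus, memory)
  bqueues egee_queue  # status of the dedicated queue

  # prompt job execution on a named host
  lsrun -m some_host hostname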
6 ENEA EGEE site implementation (2)
Implementation architecture:
- UI & SE: Linux, standard EGEE configuration
- CE: Linux, modifications to implement the AFS pool-account
- a Transmitter: a Linux element with the WN packages installed
- WN: any platform/OS with support for AFS/LSF
  - user homes shared with the CE and the Transmitter using AFS
  - all grid commands concerning file transfer to/from the GRID are delegated to the Transmitter by means of the LSF lsrun command
- AFS security and quota management guarantee reliability
7 ENEA EGEE site implementation (3)
How:
- Any WN middleware command concerning file transfer is wrapped with an lsrun script. The wrapped WN middleware is located in AFS.
- The relevant PATHs in the environment of the EGEE job on the WN are modified so that the wrapped middleware is used (LSF job-starter script for the EGEE dedicated queues).
  e.g. /afs/enea.it/grd_egee/system/eneaaddon/opt/globus/bin/globus-url-copy
  globus-url-copy -> wrapper
  wrapper: lsrun -m transmitter globus-url-copy $*
- EGEE GRID users require an AFS token: gssklogd has been modified to be compatible with the EGEE middleware, and the EGEE middleware on the CE must properly call gssklog.
(A sketch of the wrapper and of the job-starter follows.)
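A minimal sketch of the two scripts, assuming the AFS path shown above and a Transmitter host named transmitter; the exact ENEA scripts are not reproduced here, so the bodies below are illustrative:

  #!/bin/sh
  # Hypothetical AFS-resident wrapper installed as
  # /afs/enea.it/grd_egee/system/eneaaddon/opt/globus/bin/globus-url-copy:
  # delegate the transfer to the Transmitter via LSF, forwarding all
  # arguments ("$@" preserves quoting; the slide abbreviates this as $*).
  exec lsrun -m transmitter /opt/globus/bin/globus-url-copy "$@"

  #!/bin/sh
  # Hypothetical LSF job-starter for the EGEE-dedicated queues: prepend
  # the wrapped middleware to PATH so the job finds the wrappers first,
  # then start the job itself.
  PATH=/afs/enea.it/grd_egee/system/eneaaddon/opt/globus/bin:$PATH
  export PATH
  exec "$@"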
8 EGEE standard site layout
- Resource Broker
- User Interface
- Storage Element
- Computing Element: accepts jobs sent by the RB and submits them to the local batch system (LSF)
- Worker Nodes: they perform the computation, and the WN middleware sends the job results back to the RB
[Diagram labels: User Interface, LSF, Computing Element, Worker Node (x3), gridftp, WAN, Storage Element, DMZ, LAN, ENEA-GRID]
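As a user-level sketch of this flow, a minimal job description and its submission from the UI; this assumes the EDG/gLite command set of that period, and the file name and VO option are illustrative:

  hello.jdl (a minimal job description):
    Executable    = "/bin/hostname";
    StdOutput     = "std.out";
    StdError      = "std.err";
    OutputSandbox = {"std.out","std.err"};

  Submission from the UI:
    # the RB matches the job to a CE, the CE hands it to LSF, and the
    # WN middleware sends the results back through the RB
    edg-job-submit --vo enea hello.jdl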
9 EGEE ENEA site implementation (4)
The Transmitter Linux WN has two functions:
1) a complete Linux WN where the wrapped commands are executed;
2) a proxy system that sends job results back to the RB.
e.g. /afs/enea.it/.../globus-url-copy -> wrapper
wrapper: lsrun -m transmitter globus-url-copy $*
[Diagram labels: Resource Broker, Computing Element (Linux, LSF), AFS, Transmitter Linux WN, WAN, DMZ, LAN, gridftp inside lsrun wrappers, ENEA-GRID]
10 EGEE ENEA site implementation (5)
GRID users are AFS users, so a modified gssklogd server is required to obtain an AFS token from the X509 certificate.
[Diagram labels: Resource Broker, Computing Element (Linux, LSF), Transmitter Linux WN, gssklogd server, AFS, WAN, DMZ, LAN, ENEA-GRID]
11 The gssklog package has been developed by Doug Engert (ANL) to obtain an AFS token using a GSS [Generic Security Service] implementation. It has been used together with the Globus Toolkit GSSAPI to obtain an AFS token from an X509 certificate. [ftp://achilles.ctd.anl.gov/pub/dee]
The package includes a daemon [gssklogd] and a client [gssklog].
gssklogd has been modified to provide tokens for pool-account users on the basis of the VO information contained in the user certificate. The daemon now also requires the VO certificate.
The afsgrid-mapfile syntax has been extended, e.g. for the ENEA VO: "/VO=enea".enea/020/
The client gssklog always provides the principal name:
gssklog -server server_address -principal enea004
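A sketch of how the extended mapfile and the client fit together; the server address is the placeholder used above, and enea004 stands for one of the pool accounts:

  # afsgrid-mapfile entry for the ENEA VO (extended syntax, as above):
  #   "/VO=enea".enea/020/

  # the client always passes the principal of the mapped pool account
  gssklog -server server_address -principal enea004

  # verify that the AFS token has been obtained
  tokens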
12 lcmaps_afs
LCAS [Local Centre Authorization Service] and LCMAPS [Local Credential Mapping Service] are the EGEE services that make the connection between GRID user certificates and local UNIX userids on the local site.
LCMAPS is a modular service. An lcmaps_afs module exists whose purpose is to obtain an AFS token for the locally mapped user using gssklog, but the standard module is not compatible with VOMS pool accounts.
A modified module has been prepared to achieve compatibility: gssklog is always invoked with the -principal parameter (see the illustration below).
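An illustration of the difference, assuming the locally mapped pool account is enea004; the actual module is C code inside LCMAPS, so these are only the equivalent command-line invocations:

  # standard lcmaps_afs module: gssklog derives the AFS principal from
  # the local identity, which does not work for VOMS pool accounts
  gssklog -server server_address

  # modified module: the principal of the mapped pool account is always
  # passed explicitly
  gssklog -server server_address -principal enea004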
13 EGEE ENEA site implementation (6)
GRID users are AFS users, so a modified gssklogd server is required to obtain an AFS token from the X509 certificate.
AFS can be used to provide the gssklogd daemon with the VO and CA information; this information is automatically updated by the CE.
[Diagram labels: Resource Broker, VO & CA information, gssklogd server, Computing Element (Linux, LSF), AFS, Transmitter Linux WN, WAN, DMZ, LAN, ENEA-GRID]
14 Status and discussion
ENEA Linux site: gLite 3.0 certified
- CE/SE: egce.frascati.enea.it / egse.frascati.enea.it
- WN: 14 Linux P4, 1.8 GHz, 1 GB
ENEA AIX site: gLite 3.0 certification in progress
- CE/SE: egceaix.frascati.enea.it / egseaix.frascati.enea.it
- WN: 96 cpu SP4, 1.9 GHz
BENEFITS of the gateway approach:
- any platform/OS with AFS/LSF
LIMITS of the gateway approach:
- not a completely standard site [but the EGEE certification job runs well]
- GRID APIs are not available [how to flag it?]
- some WN monitoring components are unavailable
- OS flag not yet implemented [VO tag defined, but not sufficient!]
15 Documentation
An EGEE Technical Note (EGEE-TR ) is available:
Title: "AFS Pool Account Users - GSSKLOG and LCMAPS extension to support AFS users as EGEE pool account users"
The patched source of the lcmaps_afs module [lcmaps_afs.c] is available and has been submitted to the lcmaps package developers.
A patch to the standard gssklog-0.11 package is available. The patch has been presented at the AFS Best Practices Workshop 2006 and submitted to Doug Engert.
16 Work in progress
gLite CE: the CE is already installed; porting of the solution adopted in the LCG CE is not yet completed.
MPI job support:
- ENEA is strongly interested in transparent MPI support by the EGEE middleware
- MPI support must take advantage of the benefits of a shared file system for user HOMEs (such as OpenAFS)
Access to other platforms: ALTIX clusters (32 cpu, 32 GB shared memory)
Tests successfully performed also on: MacOSX, Irix