Sun Grid Engine, a new scheduler for EGEE
1 Sun Grid Engine, a new scheduler for EGEE
G. Borges, M. David, J. Gomes, J. Lopez, P. Rey, A. Simon, C. Fernandez, D. Kant, K. M. Sephton
IBERGRID Conference, Santiago de Compostela, Spain, 14-16 May 2007
2 Outline
- EGEE back to basics: the EGEE project; the infrastructure and the gLite middleware
- EGEE Local Resource Management Systems (LRMS): LSF, Torque/Maui, Condor and Sun Grid Engine
- Sun Grid Engine gLite integration (for the lcg-CE): JobManager, Accounting, Information plug-in, YAIM integration
- Conclusions and future work
3 The EGEE Project
- Enabling Grids for E-sciencE: its fundamental goal is the deployment of a Grid infrastructure for all fields of science
- In the EGEE infrastructure, resources are glued together by a set of agreed services provided and supported by the EGEE community
- EGEE proposes gLite as the appropriate middleware to support the necessary grid services for multi-science applications
- EGEE services are divided into two different sets:
  - Core services: installed only at some RCs but used by all users (Resource Broker, Top-BDII, File Catalogues, VOMS servers, ...)
  - Local services: deployed and maintained by each participating site (Computing Element, Storage Element, MonBox, User Interface, ...)
4 The Computing Element
- It is THE service representing the computing resources
- May be used by a generic client: an end-user interacting directly with it, or the Workload Manager (RB), which submits a given job to it after going through the whole matchmaking process
- Handles authentication and authorization
- Has to interact with the Local Resource Management System for job management (job submission, job control, job cancelling, ...)
- Provides information describing itself; this information is published in the Information Service and used by the matchmaking engine, which matches available resources to queued jobs
5 Computing Element and the LRMS
- The CE is the entry point of a Grid job into the LRMS
  - Gatekeeper service for authentication, authorization and Globus job submission
  - GRIS service (InfoService) for publishing local resource usage and characteristics
- gLite must implement proper tools (virtual layers) to:
  - Use LRMS-specific commands for job management (translate RSL requests; feed the L&B service)
  - Query resource usage (feed the CE GRIS service)
  - Process the accounting information generated by the LRMS and feed it to the central accounting registry
[Diagram: Grid Gate node running the gatekeeper and infoservice, in front of the LRMS batch server]
6 LRMS
- The Local Resource Management System (LRMS) is the cluster component which:
  - Manages the execution of users' applications
  - Allows cluster resource usage to be optimized
  - Enables a broad range of usage policies to be fulfilled
  - Eases cluster administration tasks
- Each EGEE cluster admin should be allowed to choose the LRMS they think best fits their needs
  - Most of the time, EGEE clusters are shared with local farms
  - However, only Torque/Maui and LSF are fully supported in EGEE
- gLite should be able to cope with a much wider set of LRMSs
  - It eases the integration of clusters already in operation
  - Better interoperability
  - The wider the gLite offer, the more appealing it becomes
7 Basic LRMS Comparison
- LSF
  - Pros: flexible job scheduling policies; advanced resource management; checkpointing & job migration; load balancing; good graphical interfaces to monitor cluster functionality; integrable with Grids
  - Cons: expensive commercial product; not suitable for small computing clusters
- Torque/Maui
  - Pros: good integration of parallel libraries; able to start parallel jobs using LRMS services; full control of parallel processes; flexible job scheduling policies (fair-share policies, backfilling, resource reservations)
  - Cons: configuration done through the command line; a non-user-friendly GUI; uncertain software development; poor documentation
- Condor
  - Pros: CPU harvesting; the special ClassAds language; dynamic checkpointing and migration; mechanisms for a Globus interface; coherent with the gLite middleware
  - Cons: not optimal for parallel applications; checkpointing only works for batch jobs; complex configuration
8 Sun Grid Engine
- SGE, an open source job management system supported by Sun
  - Queues are located on server nodes and have attributes which characterize the properties of the different servers
  - A user may request certain execution features at submission time (memory, execution speed, available software licences, etc.)
  - Submitted jobs wait in a holding area where their requirements/priorities are determined; a job only runs if there are queues (servers) matching its requests
- Some important features:
  - Supports checkpointing and migration, although some additional programming may be needed
  - Tight integration of parallel libraries, supported through an SGE-specific version of rsh, called qrsh
  - Flexible scheduling policies; implements calendars (fluctuating resources)
  - Intuitive graphical interface, used by users to manage jobs and by admins to configure and monitor their cluster
  - Good documentation
  - Still work in progress: observed flaws may be addressed to dedicated teams, and support is assured by dedicated staff
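As an illustration of requesting execution features at submission time, resource requests become `-l` options on the `qsub` command line. The helper below is a hypothetical sketch (the function name and the resource values are ours; `h_vmem` is a standard SGE memory limit, while `matlab_lic` stands in for a site-defined licence counter):

```python
def build_qsub_command(script, queue=None, resources=None):
    """Assemble an SGE qsub command line from a job script and its requests.

    `resources` maps SGE complex attribute names (memory limits, software
    licences, ...) to requested values; each pair becomes a -l option.
    """
    cmd = ["qsub"]
    if queue:
        cmd += ["-q", queue]
    for name, value in sorted((resources or {}).items()):
        cmd += ["-l", f"{name}={value}"]
    cmd.append(script)
    return cmd

# Example: request 2 GB of memory and one (hypothetical) application licence.
print(build_qsub_command("job.sh", resources={"h_vmem": "2G", "matlab_lic": "1"}))
```

The scheduler then holds the job until a queue whose attributes satisfy every `-l` request becomes available.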
9 LCGSGE JobManager (1)
- The JM is the core service of the Globus GRAM service
  - Submits jobs to SGE based on Globus requests and through a jobwrapper script
  - Acts as intermediary to query the status of jobs and to cancel them
- SGE command client tools (qstat, qsub, qdel) have to be available on the CE, even if the Qmaster is installed on another machine
- Doesn't require shared homes, but home dirs must have the same path on the CE and WNs
- The SGE JM is based on the LCGPBS JM; requires XML::Simple.pm
[Diagram: gatekeeper and lcgsge JobManager on the Grid Gate node, driven by Globus RSL requests, feeding the L&B and talking to the SGE Qmaster via SGE commands (qstat, qsub, ...); configured through lcgsge.conf]
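The jobwrapper idea can be sketched as a small generator: given the parameters extracted from the Globus request, emit a shell script that SGE can run. Everything below (function name, parameter names, script layout) is a hypothetical illustration, not the actual lcgsge code; the `#$` lines are standard SGE embedded directives:

```python
def make_jobwrapper(executable, arguments=(), stdout="job.out", stderr="job.err"):
    """Emit a minimal SGE jobwrapper script for one job request."""
    args = " ".join(arguments)
    return "\n".join([
        "#!/bin/sh",
        "#$ -S /bin/sh",    # interpret the job under /bin/sh
        f"#$ -o {stdout}",  # where SGE places the job's stdout
        f"#$ -e {stderr}",  # where SGE places the job's stderr
        f"exec {executable} {args}".rstrip(),
        "",
    ])

print(make_jobwrapper("/usr/bin/hostname"))
```

In the real JobManager the wrapper also stages files and sets up the grid environment before handing the script to qsub.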
10 LCGSGE JobManager (2)
The SGE JM re-implements the following functions:
- Submit: checks the Globus RSL arguments, returning a Globus error if the arguments are not valid or if there are no resources
- Submit_to_batch_system: submits jobs to SGE, after building the jobwrapper script, getting the necessary information from the RSL variables
- Poll: links the present status of jobs running in SGE with the appropriate Globus message
- Poll_batch_system: determines the status of running jobs by parsing the qstat SGE output
- Cancel_in_batch_system: cancels jobs running in SGE using qdel
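The Poll step amounts to a small translation table from SGE state codes to Globus job states. The mapping below is illustrative, assuming common qstat codes; it is not a reproduction of the lcgsge code:

```python
# Illustrative mapping from SGE qstat state codes to Globus GRAM job states.
SGE_TO_GLOBUS = {
    "qw":  "PENDING",    # queued, waiting
    "hqw": "PENDING",    # queued with a hold
    "t":   "ACTIVE",     # being transferred to an execution host
    "r":   "ACTIVE",     # running
    "s":   "SUSPENDED",
    "Eqw": "FAILED",     # queued in error state
}

def poll_batch_system(qstat_state):
    """Translate one SGE state string; jobs gone from qstat are taken as DONE."""
    if qstat_state is None:
        return "DONE"
    return SGE_TO_GLOBUS.get(qstat_state, "UNKNOWN")

print(poll_batch_system("r"))
print(poll_batch_system(None))
```

A job that no longer appears in the qstat output has left the batch system, which is why its absence maps to DONE here.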
11 SGE Information Plugin (1)
- The solution implemented for SGE does not currently use the generic EGEE scripts
  - lcg-info-dynamic-sge: a standalone information plug-in script that examines the SGE queuing system state
- The information expected to be reported is based on queues, but SGE does not assign a job to a queue until execution time, so "virtual queues" are used
- The info reporter reads:
  - A copy of a static LDIF file with details of all "virtual queues"
  - Config files (cluster.state, info-reporter.conf, vqueues.conf) specifying how virtual queues map onto a list of resource requirements
[Diagram: lcg-info-dynamic-sge on the Grid Gate node queries the SGE Qmaster with SGE commands and feeds the site BDII (LDAP, port 2135)]
12 SGE Information Plugin (2)
- The dynamic information is obtained with a single call to SGE's qstat
  - The system determines which virtual queues each job should be associated with
  - For each virtual queue it counts the number of job slots, the number of pending/running jobs, and the total amount of runtime left on all of the jobs, assuming they will run for their maximum duration
- The state of the batch queues can change quite fast
  - There is an option to capture a copy of all information-provider input data, which can be replayed to the information provider
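The per-virtual-queue tally can be pictured as below. The qstat field layout and the job-to-virtual-queue rule are simplified assumptions (in the real plug-in the mapping comes from the vqueues.conf resource-requirement rules, not from the job owner):

```python
# Hypothetical sketch: tally running/pending jobs per virtual queue from
# qstat-like output with columns: job-ID, priority, name, owner, state.
SAMPLE_QSTAT = """\
101 0.55 sim1 alice r
102 0.50 sim2 alice r
103 0.45 ana1 bob qw
104 0.40 ana2 bob qw
"""

# Assumed owner -> virtual queue mapping, standing in for vqueues.conf.
VQUEUE_OF = {"alice": "atlas", "bob": "biomed"}

def tally(qstat_text):
    """Return {virtual queue: (running, pending)} from qstat-like text."""
    counts = {}
    for line in qstat_text.splitlines():
        job_id, prio, name, owner, state = line.split()
        vq = VQUEUE_OF.get(owner, "other")
        running, pending = counts.get(vq, (0, 0))
        if state == "r":
            running += 1
        elif state == "qw":
            pending += 1
        counts[vq] = (running, pending)
    return counts

print(tally(SAMPLE_QSTAT))
```

Runtime-left accounting would extend the tuple with each running job's remaining maximum duration, summed per virtual queue.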
13 SGE Accounting Plugin
- The APEL SGE plug-in is a log-processing application used to produce CPU job accounting records
- It interprets gatekeeper and batch system logs
  - Requires the JM to add "gridinfo" records to the log file: standard Globus JMs do not log them, but LCG JMs do
- apel-sge-log-parser parses the SGE accounting log file
  - This information and the gridinfo mappings from the JobManager are joined together to form accounting records
  - Published using R-GMA to an accounting database
[Diagram: apel-sge-log-parser on the Grid Gate node reads the log and accounting files from the SGE Qmaster and publishes via R-GMA to the MON box]
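The join performed here can be sketched as follows. The gridinfo records map grid identities to local SGE job numbers, which are then matched against usage records from the accounting log; the colon-separated field subset and the DN below are simplified, invented stand-ins for the real formats:

```python
# (local job number -> grid user DN) pairs logged by the JobManager (invented).
GRIDINFO = {"1001": "/C=PT/O=LIP/CN=Some User"}

# Simplified accounting entries: job_number:owner:start:end:cpu_seconds
ACCOUNTING_LOG = """\
1001:alice:1178800000:1178803600:3400
1002:bob:1178800100:1178800200:90
"""

def build_records(log_text, gridinfo):
    """Join batch usage with grid identities into accounting records."""
    records = []
    for line in log_text.splitlines():
        job, owner, start, end, cpu = line.split(":")
        records.append({
            "job": job,
            "local_user": owner,
            "grid_dn": gridinfo.get(job),  # None for purely local jobs
            "wall_seconds": int(end) - int(start),
            "cpu_seconds": int(cpu),
        })
    return records

for rec in build_records(ACCOUNTING_LOG, GRIDINFO):
    print(rec)
```

Records lacking a grid DN correspond to local-farm jobs that never passed through the gatekeeper, which is why the gridinfo side of the join is what makes the records usable for grid accounting.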
14 SGE YAIM Integration (1)
- YAIM (Yet Another Installation Method)
  - Separates the installation process from the configuration one
  - Based on a library of bash functions called by a configuration script
  - Functions needed by each node are defined in the node-info.def file
  - The grid site topology is totally encapsulated in the site-info.def file
- Development of two integration rpms: lcgce-yaimtosge i386.rpm and glitewn-yaimtosge i386.rpm
- Requirements:
  - SGE installed (we presently made SGE rpms to install it)
  - lcg-CE and glite-WN
  - glite-yaim (>=), perl-xml-simple (>=), openmotif (>=) and xorg-x11-xauth (>=)
15 SGE YAIM Integration (2)
- The $SGE_ROOT software dir must be set to /usr/local/sge/pro (may be changed by the site admin in a future release)
- The SGE Qmaster can only be installed on the CE (may be installed on another machine in a future release)
- Three new variables must be set in site-info.def: SGE_QMASTER, DEFAULT_DOMAIN, ADMIN_
- The integration rpms:
  - Change the node-info.def file to include two new node types, CE_sge and WN_sge
  - Run the same functions as the CE and WN nodes, plus, at the end, Config_sge_server and Config_sge_client
16 SGE YAIM Integration (3)
- Config_sge_server uses an auxiliary perl script (configure_sge_server.pm) which:
  - Builds all the default SGE directory structure
  - Configures environment-setting files; sets the global SGE configuration file, the SGE scheduler configuration file and SGE complex attributes
  - Defines one cluster queue for each VO
  - Deploys the lcgsge JM and builds its configuration files
  - Deploys the SGE information plug-in and builds its configuration files
  - Accounting is not properly integrated but will be soon
- Config_sge_client uses an auxiliary perl script (configure_sge_client.pm) which builds all the default SGE directory structure on the client
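The one-queue-per-VO step might conceptually look like the loop below, which prepares one `qconf` invocation per supported VO. The VO names, file paths and helper are illustrative assumptions, not the script's actual contents; `qconf -Aq <file>` is the non-interactive way to add a queue from a prepared configuration file:

```python
def queue_setup_commands(vos, hostlist="@allhosts"):
    """Produce one 'qconf -Aq' command per supported VO (sketch only)."""
    cmds = []
    for vo in vos:
        # A config file per VO queue would be written out beforehand;
        # here we only build the command that would load it.
        cmds.append(f"qconf -Aq /tmp/{vo}_queue.conf  # queue '{vo}' on {hostlist}")
    return cmds

for cmd in queue_setup_commands(["atlas", "cms", "biomed"]):
    print(cmd)
```

Keeping one cluster queue per VO lets scheduling policies, limits and access lists be tuned independently for each community.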
17 SGE YAIM Integration (4)
/opt/glite/yaim/bin/yaim -c -s site-info.def -n CE_sge
18 Future Work
- SGE is working on an lcg-CE, although additional work is required
- YAIM SGE integration:
  - Make it more flexible, allowing site admins to dynamically set a broader range of options
  - Separate the Qmaster from the CE
  - Fully integrate the SGE accounting
- The SGE information provider needs to improve its flexibility and take into account overlapping cluster queue / virtual queue definitions
- Work has started on integrating support for BLAHP, running on the glite-CE
  - Will be used within the glite-CE and CREAM to interface with the LRMS
  - Expected to share the configuration files and the concept of virtual queues with the information provider
  - Other local middleware elements (GIIS, YAIM) basically remain unchanged for this glite-CE flavour
- Still missing: GridICE sensors for SGE
