Production HPC on Commodity Linux Clusters: The Role of Infrastructural Software




Production HPC on Commodity Linux Clusters: The Role of Infrastructural Software
Ian Lumb
11th ECMWF Workshop: Use of HPC in Meteorology
Reading, UK, October 28, 2004

Outline
- Introduction
- Real-World Examples
- Topology Awareness
- Task Geometry
- Summary

Introduction

- IA-32/64 CPUs
- Low-latency, high-bandwidth interconnects
- Potential for progress on Grand Challenge problems
- MPI


Infrastructural Software
[Diagram: an ecosystem of resource management, with Platform LSF MultiCluster and the Globus Toolkit + CSF in user space, and cpuset, Platform LSF HPC, RMS, and other components in kernel space, all atop Linux]

Topology-Aware Scheduling

Topology-Aware, Bound-Processor Scheduling on Linux/NUMA
[Diagram: a user job submission arrives via the web, an application, or the API at the master host, where MBD passes it to MBSCHD and its scheduler modules (backfill, MAUI, serial/parallel control, job starvation, fairshare, preemption, advance reservation, and others); SBD dispatches the job to the execution host, where PAM starts the task inside an Altix cpuset; LIM carries accounting, auditing, and control information]

Topology-Aware, Bound-Processor Scheduling on Linux/NUMA

bsub <bsub options> -n <number of processors> -R -ext "SGI_CPUSET[cpuset_options]" pam -mpi -auto_place a.out

where cpuset_options is one of:

CPUSET_TYPE=static; CPUSET_NAME=<static cpuset name>

or

CPUSET_TYPE=dynamic; [MAX_RADIUS=<radius>;] [RESUME_OPTION=ORIG_CPUS;] [CPU_LIST=<cpu_ID_list>;] [CPUSET_OPTIONS=<SGI cpuset option>;] [MAX_CPU_PER_NODE=max_num_cpus]

or

CPUSET_TYPE=none

RSL = Resource Specification Language
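For instance, a minimal sketch of a dynamic-cpuset submission following the syntax above; the option values and the 8-way MPI binary a.out are illustrative assumptions, not taken from the slide:

    # Hypothetical sketch: request a dynamic SGI cpuset for an 8-way MPI job;
    # MAX_CPU_PER_NODE=4 caps how many CPUs are taken from any one node.
    bsub -n 8 -ext "SGI_CPUSET[CPUSET_TYPE=dynamic;MAX_CPU_PER_NODE=4]" \
        pam -mpi -auto_place a.out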

Topology-Aware Scheduling on Linux/RMS

bsub -n # -ext "alloc_type[; nodes=# ptile=cpus_per_node base=base_node_name] [; rails=# railmask=bitmask]"

where alloc_type is:
- RMS_SNODE: sorted node order as returned by RMS; gaps are allowed in the allocation
- RMS_MCONT: contiguous allocation from left to right, taking RMS ordering into consideration; no gaps are allowed in the allocation
- RMS_SLOAD: sorted node order as returned by LSF; gaps are allowed in the allocation
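A concrete submission under this syntax might read as follows; the allocation type, node counts, and binary a.out are illustrative assumptions:

    # Hypothetical sketch: 16-way job with a contiguous (RMS_MCONT) allocation
    # spanning 4 nodes at 4 CPUs per node, per the syntax above.
    bsub -n 16 -ext "RMS_MCONT[nodes=4 ptile=4]" pam -mpi a.out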

Task Geometry

Task Geometry (TG)
- Addresses the locality of related tasks
- Job submission requires some effort, but the rest is transparent to the end user
- Step 1: Set the LSB_PJL_TASK_GEOMETRY environment variable
- Step 2: Use bsub -n and -R "span[ptile=]" to make sure LSF selects appropriate hosts to run the job

Task Geometry: Step 1
Set the LSB_PJL_TASK_GEOMETRY environment variable, e.g.:

LSB_PJL_TASK_GEOMETRY="{(2,5,7)(0,6)(1,3)(4)}"

This job spawns 8 tasks and spans 4 nodes:
- Tasks 2, 5, and 7 will run on the same node, the first node
- Tasks 0 and 6 will run on the same node, the second node
- Tasks 1 and 3 will run on the same node, the third node
- Task 4 will run alone on the fourth node

Task Geometry: Step 2
Use bsub -n and -R "span[ptile=]" to make sure LSF selects appropriate hosts to run the job.
- This functionality guarantees the geometry but not the host order
- Job submission directives must ensure that each host selected by LSF can run any group of tasks specified in LSB_PJL_TASK_GEOMETRY; over-book CPUs to achieve this

bsub -n x -R "span[ptile=y]" -a mpich_gm myjob

where x = y * (the number of nodes) and y = the maximum number of tasks in any one group in LSB_PJL_TASK_GEOMETRY, e.g.:

bsub -n 12 -R "span[ptile=3]" -a mpich_gm myjob
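Putting the two steps together, a minimal end-to-end sketch (assuming an LSF HPC installation with the mpich_gm integration and a hypothetical executable myjob), using the 8-task geometry from Step 1:

    #!/bin/sh
    # Hypothetical sketch: submit an 8-task job with an explicit task geometry.
    # Tasks 2, 5, 7 share the first node; 0 and 6 the second; 1 and 3 the
    # third; task 4 runs alone on the fourth.
    LSB_PJL_TASK_GEOMETRY="{(2,5,7)(0,6)(1,3)(4)}"
    export LSB_PJL_TASK_GEOMETRY
    # Largest group has y=3 tasks and there are 4 groups (nodes), so
    # over-book x = 3 * 4 = 12 slots:
    bsub -n 12 -R "span[ptile=3]" -a mpich_gm myjob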

TG Example: Community Climate System Model (CCSM)
Multiple Process, Multiple Data (i.e., MPMD) application: OpenMP + MPI
http://www.ccsm.ucar.edu/models/ccsm3.0
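To make the MPMD connection concrete, here is a hypothetical sketch of how task geometry could co-locate the tasks of each coupled-model component; the component names, task counts, thread count, and the launcher name ccsm_launcher are all invented for illustration:

    #!/bin/sh
    # Hypothetical sketch: an MPMD layout in which tasks 0-3 belong to the
    # atmosphere component, 4-5 to the ocean, and 6 to the coupler.
    LSB_PJL_TASK_GEOMETRY="{(0,1,2,3)(4,5)(6)}"
    export LSB_PJL_TASK_GEOMETRY
    OMP_NUM_THREADS=2   # each MPI task runs 2 OpenMP threads
    export OMP_NUM_THREADS
    # Largest group has 4 tasks across 3 groups: over-book 4 * 3 = 12 slots.
    bsub -n 12 -R "span[ptile=4]" -a mpich_gm ccsm_launcher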

Summary

Summary
- Infrastructural software enables production HPC
- Infrastructure needs to be factored into HPC productivity assessments
  - Use economic measures
  - Skill in assimilation and reanalysis efforts
  - Evaluate time-to-production
- Increase parallel computing ROI
  - Abstract complexity via RSL, a programming language, or some combination
- Comply with standards/certifications
  - Linux Standards Base
  - Common Criteria Certification
  - The Open Grid Services Architecture

http://www.darpa.mil/ipto/programs/hpcs
http://www.scimag.com (October 2004 issue dedicated to HPC)

Thank you.