Linux Cluster Computing: An Administrator's Perspective



Linux Cluster Computing: An Administrator's Perspective Robert Whitinger, Traques LLC and High Performance Computing Center, East Tennessee State University http://lxer.com/pub/self2015_clusters.pdf 2015-Jun-14

Supercomputing Where do we stand today? Linux is nearly universally accepted in the supercomputing space. Linux runs 485 (97%) of the top 500 supercomputers; Unix runs 13 (2.6%); Windows runs 1 (0.2%); and one system is classified as mixed. Source: www.top500.org (November 2014) Page 1

Supercomputer performance Source: www.top500.org Page 2

Top-ranked Supercomputers 2013: 33 PFlops Tianhe-2, Guangzhou, China 2012: 17 (27) PFlops Cray Titan, Oak Ridge National Laboratory, Tennessee 2010: 2.5 PFlops Tianhe-1A, Tianjin, China 2009: 1.7 PFlops Cray Jaguar, Oak Ridge, Tennessee Planned for 2017: 100+ PFlops IBM Summit, Oak Ridge, Tennessee Page 3

TITAN Fastest Supercomputer in the USA Page 4

Early Supercomputing Architecture 1964: CDC 6600 1975: Cray-1 vector machine, 64-bit, 12 pipelines Over 80 sold Performance: 80 MFlops Power: 115 kW Limited by signal distance (the speed of light) and cooling In comparison: a Raspberry Pi 2 delivers about 1 GFlop Page 5

Distributed Architecture 2004: IBM Blue Gene 70 TFlops Small processors In very large numbers With fast networking Led to PFlop systems Page 6

Linux in Early Cluster Computing 1994: Beowulf cluster project, Thomas Sterling and Donald Becker at NASA A High Performance Computing (HPC) cluster using commodity off-the-shelf systems: network connected, message passing between nodes, shared file system (NFS), open source software, each system complete, usually with identical operating system configurations Quickly spread through NASA and academic/research organizations HPC was now affordable for scientific computing Page 7

Elements of a Cluster Compute nodes (the more the better) Networking: Ethernet 100/1000 Mb/s (inexpensive), InfiniBand (23 GB/s in Summit) Shared /home directory, usually using NFS Message passing facilities: openmpi, mpich2 Page 8

Typical HPC Cluster (ETSU Johnson City TN) Page 9

Inside View Page 10

Cluster-oriented Linux Distributions Kylin Linux (used in China), ClusterKnoppix, openMosix, Scyld, Quantian, in-house adaptations of a mainline distribution, various commercial Linux distributions (IBM, Cray), Rocks Cluster Distribution Page 11

Rocks Cluster Distribution Intended specifically for HPC clusters 2000: created at the San Diego Supercomputer Center (SDSC) Open source Runs clusters ranging from 10 to 8000+ nodes Actively maintained (version 6.2 released 2015-May-10) Based on CentOS (RHEL) but adds cluster essentials: MPI (Message Passing Interface), a cluster-aware installer with Kickstart integration, monitoring (Ganglia), scheduling (SGE) Page 12
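For a sense of day-to-day Rocks administration, the sketch below shows the kind of frontend commands involved; the appliance name and exact syntax vary by Rocks release, so treat this as an assumed illustration rather than a recipe.

# discover and kickstart new compute nodes as they PXE-boot (run on the frontend)
insert-ethers --appliance "Compute"

# list the hosts the cluster database knows about
rocks list host

# run a command across all compute nodes from the frontend
rocks run host compute command="uptime"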

Administration essentials Scripting or Point & Click? Keeping 1000 s of nodes in sync User community, training and support Quickstart documentation Job Scheduling and Load Balancing Uptime Benchmarking Storage strategies Power strategies Third party software administration Software updates Administration team Page 13

Scripting or Point & Click If you have a task that you will do only once, then an intuitive graphical environment is convenient. But what if you need to do the task twice? Or 50 times? Or, in the case of supercomputing, 500,000 times? Then scripted task automation rules the game. This is why we don't find Windows on supercomputers and clusters: Windows was designed first as a graphical environment with scripting added on top; Linux was designed first as a scriptable environment with a graphical environment added on top. Page 14

How to make 100s or 1000s of nodes look the same Doing it by hand only works for a handful of nodes Rocks/RedHat solution: RPM/Kickstart And for ad-hoc admin tasks, pdsh is your friend:

pdsh -w compute-0-[0-49] uptime
compute-0-0: 13:16:44 up 5 days, 10:29, 9 users, load average: ...
<snip...>
compute-0-49: 13:16:44 up 5 days, 10:32, 9 users, load average: ...

pdsh -w pi@rpi[1-4] uptime
rpi1: 15:55:49 up 2 days, 22:20, 0 users, load average: 0.00, 0...
rpi3: 15:55:49 up 2 days, 22:20, 0 users, load average: 0.02, 0...
rpi4: 15:55:49 up 2 days, 22:20, 0 users, load average: 0.01, 0...
rpi2: 15:55:49 up 2 days, 22:20, 0 users, load average: 0.01, 0...

Page 15
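pdsh's companion tool pdcp can push files out the same way; here is a minimal sketch, assuming the same hypothetical compute-0-* node names, of keeping a single config file in sync and bouncing the affected service:

# push an updated config file from the frontend to all 50 compute nodes (pdcp ships with pdsh)
pdcp -w compute-0-[0-49] /etc/ntp.conf /etc/ntp.conf

# then restart the affected service everywhere in one shot
pdsh -w compute-0-[0-49] "service ntpd restart"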

Building community, communicating Cluster Wiki: users support users Wiki pages communicate version update plans to users Code exchange Best practices Hello World quickstarts in each language Page 16

Sample quickstart python

# launch with: mpirun -np 50 python mpitest.py
from mpi4py import MPI
import numpy as np
import platform

comm = MPI.COMM_WORLD
size = comm.Get_size()     # total number of MPI ranks
rank = comm.Get_rank()     # this process's rank within the communicator
node = platform.node()     # hostname of the node this rank is running on

if rank == 0:
    # rank 0 fills a buffer and sends a copy to every other rank
    data = np.arange(100, dtype=np.float)
    data[0] = 1.0
    for cn in range(1, size):
        comm.Send(data, dest=cn, tag=13)
if rank != 0:
    # every other rank receives the buffer from rank 0
    data = np.empty(100, dtype=np.float)
    comm.Recv(data, source=0, tag=13)

print "%s, rank: %d size: %d data: %f" \
    % (node, rank, size, data[0] * np.pi)

Page 17

Sample quickstart python

$ mpirun -np 50 python mpitest.py
compute-0-0, rank: 0 size: 50 data: 3.141593
<snip...>
compute-0-49, rank: 49 size: 50 data: 3.141593

Page 18

Job Scheduling and Load Balancing How to control the computing hogs Scheduler: Grid Engine (SGE), an open source solution Allocate resources to applications and users Balance those resources with a fair strategy Page 19
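As a concrete illustration of handing work to the scheduler, a minimal SGE submit script might look like the sketch below; the queue name (all.q) and parallel environment name (orte) are site-specific assumptions, not part of the original slides.

#!/bin/bash
# mpitest.sh -- submit with: qsub mpitest.sh
# Name the job, run it from the submission directory, and merge stdout/stderr.
#$ -N mpitest
#$ -cwd
#$ -j y
# Target queue and Open MPI parallel environment with 50 slots
# (all.q and orte are assumed, site-specific names).
#$ -q all.q
#$ -pe orte 50

# SGE sets $NSLOTS to the number of slots granted by the -pe request above
mpirun -np $NSLOTS python mpitest.py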

Uptime Monitoring and text alerts: Ganglia, SNMP KVM remote access IPMI remote power control for node restarts Node kickstarts concurrent with operations (SGE) Page 20
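For example, IPMI lets an administrator power-cycle a wedged node from the frontend without a trip to the machine room; the BMC hostname and credentials below are placeholders:

# query the node's power state over the network (lanplus interface)
ipmitool -I lanplus -H compute-0-12-bmc -U admin -P secret chassis power status

# force a hard power cycle so the node reboots (and can PXE-boot into a fresh kickstart)
ipmitool -I lanplus -H compute-0-12-bmc -U admin -P secret chassis power cycle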

Benchmarking Linpack for benchmarking computational performance, used in the top500 ranking bonnie++ for disk I/O:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.lo 15792M   733  99 149729   8 156620  10  3931  98 608665  18  9518 126

Page 21
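A result like the one above comes from an invocation along these lines; the target directory is an assumption, and the -s size is chosen to exceed the node's RAM so the page cache doesn't mask disk performance:

# write/read a 15792 MB working set in the given directory, running as an unprivileged user
bonnie++ -d /export/home/benchtmp -s 15792 -u nobody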

Power strategies Backup power on frontend systems and storage; mains power on compute nodes Typical power configuration: 208V three-phase Titan saved $1M in copper cost by going from 208V to 480V power Page 22

Third-party software administration Environment modules RPMs for everything swtools (ORNL) smithy Page 23
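From the user's side, the environment-modules approach looks roughly like this; the module names and versions are hypothetical:

# see which software stacks have been packaged as modules
module avail

# put a particular MPI and compiler combination on the user's PATH and LD_LIBRARY_PATH
module load openmpi/1.8.4 gcc/4.9.2

# show what is currently loaded
module list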

Administration Team staying on the same page Admin journal: /root/admin.log System configuration files checked in using Mercurial version control:

cd /etc
hg init
hg commit -Am "initial commit"

Page 24
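Once /etc is under Mercurial, later configuration changes can be reviewed and recorded the same way; a brief sketch:

cd /etc
hg status                  # list config files changed since the last commit
hg diff ssh/sshd_config    # review a specific change before recording it
hg commit -m "tighten sshd settings before the next node kickstart"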

What's next? Hybrid CPU/GPU system: Summit, to be delivered to ORNL in 2017 and in production in 2018 3400 nodes, each with multiple CPUs and multiple GPUs 512 GB of high-bandwidth memory addressable from all nodes 5X faster than Titan 120 PB disk capacity with 1 TB/s bandwidth Operating system: IBM Linux Software: openmpi, OpenACC, LSF scheduler Compilers: PGI, GCC, XL, LLVM Power: 10 MW Page 25

If you work with one of these... Page 26

If not, maybe you could build one of these... Page 27

Questions? These slides are located here: http://lxer.com/pub/self2015_clusters.pdf Contact: robert@whitinger.org Creative Commons copyright Page 28