High Performance Computing in Aachen

Samuel Sarholz (sarholz@rz.rwth-aachen.de)
Center for Computing and Communication, RWTH Aachen University
HPC unter Linux, Sep 15, RWTH Aachen

Agenda
o Hardware
o Development Tools and Software
o Usage
o Support

The RZ Compute Cluster: History
o since 1958: vector and other supercomputers
o 1994: the Unix cluster started with IBM machines
o 2001-2007: main compute power in UltraSPARC III/IV CPUs
o 2004: 64 Opteron nodes, mainly running Linux
o 2006: first Windows compute nodes
o 2008: intermediate Xeon cluster
o 2009-2010: new large procurement

The RZ Compute Cluster

#nodes  model type     processor      #procs  #cores  #threads  clock [MHz]  memory [GB]  network    acc. performance [TFLOPS]  acc. memory [TB]
2       SF E25K        UltraSPARC IV  72      144     144       1050         288          1 GE       0.60                        0.58
20      SF T5120       UltraSPARC T2  1       8       64        1400         32           1 GE       0.45                        0.64
64      SF V40z        Opteron 848    4       4       4         2200         8            1 GE       1.13                        0.51
4       SF V40z        Opteron 875    4       8       8         2200         16           1 GE + IB  0.14                        0.06
2       SF X4600       Opteron 885    8       16      16        2600         32           1 GE       0.17                        0.06
60      FuSi Rx200 S4  Xeon E5450     2       8       8         3000         16           1 GE + IB  5.76                        0.96
2       FuSi Rx600 S4  Xeon X7350     4       16      16        3000         64           1 GE + IB  0.38                        0.13
156     (sum)                         564     1228    2348                                           8.80                        3.01

Harpertown-based InfiniBand Cluster
o Recently installed cluster: Fujitsu Siemens Primergy Rx200 S4 servers
  2x Intel Xeon 5450 (quad-core, 3.0 GHz)
  16 / 32 GB memory per node
  4x DDR InfiniBand: MPI latency 3.6 us, MPI bandwidth 1250 MB/s
o 270 machines: RZ machines plus other hosted machines (from RWTH institutes)
o Ranked 100th in the Top500 list
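
Latency and bandwidth figures like the ones above are typically obtained with a ping-pong measurement between two processes on different nodes. The following C sketch only illustrates the idea; it is not the benchmark behind the numbers on the slide, and the message size and repetition count are arbitrary.

    /* pingpong.c - minimal MPI ping-pong sketch (run with 2 ranks, one per node) */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int reps = 1000;          /* repetitions (arbitrary) */
        const int nbytes = 1 << 20;     /* 1 MiB for bandwidth; use a few bytes for latency */
        char *buf = malloc(nbytes);
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            double rtt = (t1 - t0) / reps;   /* round-trip time per message pair */
            printf("half round trip: %.2f us, bandwidth: %.1f MB/s\n",
                   rtt / 2.0 * 1e6, 2.0 * nbytes / rtt / 1e6);
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }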

Integrative Hosting
o Machines are installed as part of the RZ Cluster
  Linux, Solaris, (Windows)
  Filesystems are shared
o Resources are preferably shared with other clusters
  Fewer idle machines; institutes can use more machines if needed
  E.g. the cluster of the Institute for Scientific Computing
o Login and batch jobs may be restricted to certain user groups

Some Hosted Machines

Vendor           Machine Type  Machine Count  Processor  Sockets  Cores
Fujitsu Siemens  Rx200         232            Xeon       2        8
Sun              X2200         36             Opteron    2        4
Sun              X4100         18             Opteron    2        4
Dell             1950          8              Xeon       2        4
Sun              V20z          10             Opteron    2        2

Hosted compute power is about 4x that of the RZ cluster.

Institutes we host for
o Lehr- und Forschungsgebiet Theoretische Chemie (IPC), FB 1, Prof. Arne Lüchow
o Lehrstuhl für Mathematik (CCES), FB 1, Prof. Dieter Bothe
o Lehrstuhl für Theoretische Physik C und Institut für Theoretische Physik (ThPhC), FB 1, Prof. Ulrich Schollwöck
o Lehrstuhl für Werkstoffchemie (MCh), FB 1, Prof. Jochen Schneider
o Lehrstuhl für Informatik 12 (Hochleistungsrechnen) (SC), FB 1, Prof. Christian Bischof
o Ultra High Speed Mobile Information and Communication (UMIC), Cluster of Excellence
o Aachen Institute for Advanced Study in Computational Engineering Science (AICES), graduate school within the Excellence Initiative
o Tailor-Made Fuels from Biomass (TMFB), Cluster of Excellence

Agenda
o Hardware
o Development Tools and Software
o Usage
o Support

Cluster Environment
o Processors / operating systems: 4 platforms
  1. Opteron/Linux and Xeon/Linux
  2. Xeon/Windows
  3. SPARC/Solaris
  4. Opteron/Solaris
o All platforms are suited for serial programming, shared-memory parallelization, and message passing (see the sketch below)
o We offer support for programming and parallelization: compilers, MPI libraries, debugging and performance analysis tools
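
As a minimal sketch of the shared-memory model mentioned above (a generic example, not code from the slides), the following C/OpenMP loop sums a series with all threads of one node. It builds with any of the OpenMP-capable compilers listed on the next slides, e.g. Sun Studio with -xopenmp, GNU with -fopenmp, or Intel with -openmp.

    /* omp_sum.c - shared-memory parallelization of a reduction (generic example) */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* distribute the loop iterations over the threads of one node */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (1.0 + (double)i);

        printf("up to %d threads, sum = %f\n", omp_get_max_threads(), sum);
        return 0;
    }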

Programming Tools
o Compilers
o Debuggers: TotalView, Visual Studio
o Correctness tools: Sun Thread Analyzer, Intel Thread Checker, Marmot
o Performance analysis: Intel VTune, Sun Analyzer, Vampir, Scalasca, Acumem

Compiler

Company    Compiler Version    Language   OpenMP support             Autopar    Debugger        Runtime Analysis
Sun        Studio 11           F95/C/C++  F95/C/C++                  F95/C/C++  dbx, sunstudio  analyzer, collect, er_print, gprof
GNU        4.3                 F95/C/C++  F95/C/C++                  F95/C/C++  gdb             gprof
Intel      10.1                F95/C/C++  F95/C++ (Threading Tools)  F95/C/C++  idb             vtune
GNU        4.1                 F95/C/C++  F95/C++ (Threading Tools)  F95/C/C++  gdb             gprof
Sun        Studio 12           F95/C/C++  F95/C/C++                  F95/C/C++  dbx, sunstudio  analyzer, collect, er_print, gprof
Microsoft  Visual Studio 2005  C/C++      C/C++                      -          Visual Studio   VS 2008
Intel      10.1                F95/C/C++  F95/C/C++                  F95/C/C++  Visual Studio   vtune

MPI Libraries

Company    Version                                   Debugger                               Runtime Analysis                  Platform  Network
Sun        HPC ClusterTools 6                        TotalView                              analyzer, mpprof                  Solaris   tcp, shm
Sun        HPC ClusterTools > 7 (based on OpenMPI)   TotalView                              analyzer, vampir                  Solaris   tcp, shm, InfiniBand
OpenMPI    1.2                                       TotalView                              analyzer, vampir                  Solaris   tcp, shm, InfiniBand
Intel      3.1 (based on mpich2)                     TotalView                              Intel Trace Collector & Analyzer  Linux     tcp, shm, InfiniBand
OpenMPI    1.2                                       TotalView                              Vampir                            Linux     tcp, shm, InfiniBand
Microsoft  based on mpich2                           Visual Studio w/ MS Compute Cluster Pack                                 Windows   tcp, shm, InfiniBand
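
Since all libraries in the table implement the same MPI standard, a minimal program such as the following (a generic example, not taken from the slides) runs unchanged with any of them; only the compiler wrapper and the launch or batch commands differ.

    /* mpi_hello.c - minimal MPI program, portable across the listed MPI libraries */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(host, &len);     /* node this rank runs on */

        printf("rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }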

Software Packages

FEM: Abaqus, Ansys, Hyperworks, LS-DYNA, Marc/Mentat, Nastran/Patran
CFD: CFX/TASCFlow, Fluent, ICEM CFD, StarCD, gerris
Chemistry: Gaussian, Turbomole, Vasp, abinit, meep
Mathematics: Maple, Mathematica
Misc: Matlab/Simulink, Tecplot, Siesta

(The slide's legend marks for each package whether a site license is available, whether there is no site license, and whether a parallel version is available.)

Agenda
o Hardware
o Development Tools and Software
o Usage
o Support

Login
o Frontend nodes for login
  Unix frontends (SSH protocol, or NX* on Linux):
    cluster.rz.rwth-aachen.de (SPARC Solaris)
    cluster-solaris-sparc.rz... (cl-sol-s)
    cluster-solaris-opteron.rz... (cl-sol-o)
    cluster-linux.rz... (Xeon Linux)
    cluster-linux-xeon.rz... (cl-lin-x)
    cluster-linux-opteron.rz... (cl-lin-o)
  Windows frontends (RDP protocol):
    cluster-win.rz.rwth-aachen.de (Xeon Windows)
    cluster-win2003.rz...
    cluster-win-beta.rz...
* You can get an NX client from www.nomachine.com

Interactive Usage
o Interactive frontend systems are meant for short test runs, GUIs and debugging; compute jobs should be submitted to the batch system
o On Linux we offer a few nodes for MPI tests and debugging, to avoid overloaded frontends; mpiexec determines the hosts with the lowest load to run on
o On Unix, different program versions and tools are managed by the module system

Batch System
o The batch system manages the distribution of compute jobs and makes sure systems are not overloaded, to guarantee good performance
o On Unix the Sun Grid Engine (SGE) is employed
o On Windows the Microsoft Compute Cluster Pack is used
o For more info and examples see our documentation

File Storage
o We offer several filesystems on all compute nodes
  /home/<userid> or H: for permanent files, e.g. programs, batch scripts and config files
    Snapshots (under .zfs/snapshot/) cover accidental file deletion; downside: they count against the quota as well
  /work/<userid> or W: for temporary compute results
    Offers more space and better performance than /home
    Unused files will be deleted after a month!
  /lustre/work/<userid>: fast parallel file system, available on request
  Tape archive for long-term storage
o $TMP or %TMP%
  Limited space on a local disk
  May be deleted after logout
o X: (Windows only)
  On a Windows fileserver, to offer better metadata performance
  You can use UNC paths as well, e.g. \\cifs\cluster\home\<userid>
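
To illustrate the convention above (scratch data on the node-local $TMP, results under /work), here is a small C sketch; the fallback directory and the file names are made up for the example.

    /* scratch.c - place temporary data on the node-local disk ($TMP) */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* $TMP points to limited node-local space that may be cleaned after logout */
        const char *tmp = getenv("TMP");
        if (tmp == NULL)
            tmp = "/tmp";                       /* fallback, made up for this example */

        char scratch[4096];
        snprintf(scratch, sizeof scratch, "%s/myjob_scratch.dat", tmp);

        FILE *f = fopen(scratch, "w");          /* fast local scratch file */
        if (f != NULL) {
            fputs("intermediate data\n", f);
            fclose(f);
        }

        /* final results belong under /work/<userid> (more space, kept about a month) */
        printf("scratch file: %s\n", scratch);
        return 0;
    }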

Agenda
o Hardware
o Development Tools and Software
o Usage
o Support

Links
o Information related to HPC at RWTH Aachen: http://www.rz.rwth-aachen.de/hpc/
o RWTH Compute Cluster User's Guide: http://www.rz.rwth-aachen.de/hpc/primer
o Usage of the Windows HPC systems: http://www.rz.rwth-aachen.de/hpc/win
o Information on hosting: http://www.rz.rwth-aachen.de/ca/c/pgo

Support
Just send an e-mail to our trouble ticket system:
o Support inquiries: support@rz.rwth-aachen.de
o Software and licensing: sw@rz.rwth-aachen.de
o Programming, debugging, tuning, parallelization: hpc@rz.rwth-aachen.de

Questions
Thank you for your attention! Questions?