
A Crash course to (The) Bighouse
Brock Palen (brockp@umich.edu)
SVTI Users Meeting, Sep 20th

Outline
1. Resources: Configuration, Hardware
2. Architecture: ccNUMA, Altix 4700 Brick
3. Software: Packaged Software, Compiled Code
4. PBS: PBS Queues

Hardware: Bighouse
Bighouse is our Itanium SMP machine.
Login: bighouse.engin.umich.edu
Shares nyx's 6 TB NFS file system.
Runs SUSE Linux Enterprise Server 10 with ProPack 5 from SGI.
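
To connect, ssh to the login host above; the username "uniqname" is a placeholder for your campus login:

  ssh uniqname@bighouse.engin.umich.edu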

Bighouse Hardware
16 CPUs, 32 cores: dual-core Intel Itanium 2
Measured 5.5 Gflops/CPU running 4-way; 171.9 Gflops running 32-way
96 GB RAM
Max 41 GB/s aggregate memory bandwidth

ccNUMA
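
ccNUMA (cache-coherent Non-Uniform Memory Access) means every CPU sees one shared memory image, but access to memory on the local brick is faster than access over the interconnect. A minimal sketch for inspecting and controlling placement, assuming the standard Linux numactl tool is installed (SGI's dplace is an alternative on ProPack systems):

  # show NUMA nodes, their CPUs, free memory, and inter-node distances
  numactl --hardware
  # run a.out with its CPUs and memory pinned to node 0
  numactl --cpunodebind=0 --membind=0 ./a.out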

Altix 4700 Brick

Packaged Software
Abaqus/6.6: in abaqus_v6.env set
  standard_memory = "15000mb"
  standard_memory_policy = MAXIMUM
Nastran/2007r2
Gaussian/03: set %nproc=8 and %mem=20gb in the input file; DO NOT SET $GAUSS_SCRDIR
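
Gaussian runs follow the same local-scratch pattern shown in the Abaqus and Nastran examples below; a minimal sketch, where water.com is a hypothetical input file whose header carries the %nproc=8 and %mem=20gb directives:

  mkdir /tmp/$PBS_JOBID
  cd /tmp/$PBS_JOBID
  cp ~/water.com .    # input file; hypothetical name
  g03 water.com       # note: $GAUSS_SCRDIR is left unset, per the warning above
  cp -fr * ~/ && rm -fr /tmp/$PBS_JOBID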

Abaqus Example
  mkdir /tmp/$PBS_JOBID
  cd /tmp/$PBS_JOBID
  cp ~/input.inp .
  cp ~/abaqus_v6.env .
  abaqus job=input scratch=/tmp interactive cpus=10
  cp -fr * ~/ && rm -fr /tmp/$PBS_JOBID

Nastran Example
  mkdir /tmp/$PBS_JOBID
  cd /tmp/$PBS_JOBID
  cp ~/input.dat .
  nastran batch=no hpmpi=yes dmp=10 input.dat
  cp -fr * ~/ && rm -fr /tmp/$PBS_JOBID

Compilers
  ifort   Fortran 90/77
  icc     C
  icpc    C++
GNU compilers are available but not recommended.

Compiler Options
  -O2                            General optimization
  -O3 -ipo -funroll-loops -ftz   Better optimization
  -openmp                        Enable OpenMP support
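
As a concrete example, the lines below build a C program with the recommended flags and an OpenMP-enabled variant; hello.c and hello_omp.c are placeholder file names:

  icc -O3 -ipo -funroll-loops -ftz -o hello hello.c
  icc -openmp -O2 -o hello_omp hello_omp.c
  OMP_NUM_THREADS=8 ./hello_omp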

Libraries
MPT (SGI's MPI library, optimized for shared memory):
  ifort source.f90 -lmpi
  mpirun -np 10 a.out
MKL (Math Kernel Library):
  Optimized, threaded math library
  Full support for BLAS and LAPACK
  PRNGs, FFTs, and an FFTW-compatible interface
  DO use it; contact us for support.
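
Because MKL is threaded, a linked program picks up its thread count from the environment; a minimal sketch, where the plain -lmkl link line is an assumption (the exact link libraries vary by MKL version, so check the local documentation):

  ifort source.f90 -lmkl    # link against MKL (version-dependent; assumption)
  export OMP_NUM_THREADS=8  # MKL threading honors this setting
  ./a.out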

PBS
Memory limits are enforced and default to 1 MB per job.
Use #PBS -l mem=100mb to request what you need.
Submit to the route queue.
Only 30 CPUs are available for batch jobs; 2 CPUs are reserved for compiling, sftp, PBS, etc.
Please clean up /tmp.
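
Putting it together, a minimal batch script wrapping the Abaqus example might look like the sketch below; the job name, walltime, and the ncpus syntax are assumptions (ncpus is the usual CPU request on a single-system-image machine, but check the bighouse page below):

  #!/bin/sh
  #PBS -N abaqus_run           # job name (assumption)
  #PBS -q route                # the route queue, per this page
  #PBS -l ncpus=10             # CPU request syntax may differ locally
  #PBS -l mem=20gb             # memory is enforced; request what you need
  #PBS -l walltime=24:00:00    # walltime value is an assumption

  mkdir /tmp/$PBS_JOBID
  cd /tmp/$PBS_JOBID
  cp ~/input.inp .
  cp ~/abaqus_v6.env .
  abaqus job=input scratch=/tmp interactive cpus=10
  cp -fr * ~/ && rm -fr /tmp/$PBS_JOBID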

Questions?
http://cac.engin.umich.edu/resources/bighouse.html
cac-support@umich.edu