A Crash course to (The) Bighouse


Brock Palen
SVTI Users Meeting, Sep 20th

Outline
1. Resources: Configuration, Hardware
2. Architecture: ccNUMA, the Altix 4700 Brick
3. Software: Packaged Software, Compiled Code
4. PBS: PBS Queues

Hardware: bighouse
bighouse is our Itanium SMP machine.
Login: bighouse.engin.umich.edu
It shares nyx's 6 TB NFS file system and runs SUSE Linux Enterprise Server 10 with ProPack 5 from SGI.
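Access is over ssh to the login host; a minimal example (uniqname stands in for your own login name and is not from the slides):

    ssh uniqname@bighouse.engin.umich.edu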

Bighouse Hardware
Current hardware:
- 16 dual-core Intel Itanium 2 CPUs (32 cores total)
- Measured 5.5 Gflop/CPU running 4-way, Gflop running 32-way
- 96 GB RAM
- 41 GB/s max aggregate memory bandwidth


ccNUMA
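On a ccNUMA system every CPU can address all of memory, but local memory is faster than memory on a remote brick, so data placement affects performance. As a hedged illustration (numactl is standard Linux tooling that SLES 10 typically ships; the node number is arbitrary):

    # Confine both the CPUs and the memory of a run to NUMA node 0,
    # keeping allocations local to the cores that touch them.
    numactl --cpunodebind=0 --membind=0 ./a.out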

Altix 4700 Brick

Packaged Software
Abaqus/6.6
  In abaqus_v6.env:
    standard_memory=15000mb
    standard_memory_policy=maximum
Nastran/2007r2
Gaussian/03
  %nproc=8
  %mem=20gb
  DO NOT SET $GAUSS_SCRDIR

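For Gaussian/03 those directives go in the Link 0 section at the top of the input file; a minimal sketch (only %nproc and %mem come from the slide; the route line and geometry are illustrative):

    %nproc=8
    %mem=20gb
    # HF/6-31G(d) opt

    water optimization

    0 1
    O   0.000   0.000   0.117
    H   0.000   0.757  -0.467
    H   0.000  -0.757  -0.467

Note that Gaussian expects a blank line after the geometry.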

Abaqus Example
  mkdir /tmp/$PBS_JOBID
  cd /tmp/$PBS_JOBID
  cp ~/input.inp .
  cp ~/abaqus_v6.env .
  abaqus job=input scratch=/tmp interactive cpus=10
  cp -fr * ~/ && rm -fr /tmp/$PBS_JOBID

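In a real run these commands would live in a PBS batch script; here is a minimal sketch, assuming the route queue and memory enforcement described on the PBS slide (the ncpus resource name, job name, and memory value are illustrative, not site-confirmed):

    #!/bin/sh
    #PBS -N abaqus-example
    #PBS -q route
    #PBS -l ncpus=10,mem=20gb
    #PBS -j oe

    # Stage input into node-local scratch, keyed by the job ID.
    mkdir /tmp/$PBS_JOBID
    cd /tmp/$PBS_JOBID
    cp ~/input.inp .
    cp ~/abaqus_v6.env .
    abaqus job=input scratch=/tmp interactive cpus=10
    # Copy results home, then clean up /tmp as requested.
    cp -fr * ~/ && rm -fr /tmp/$PBS_JOBID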

Nastran Example
  mkdir /tmp/$PBS_JOBID
  cd /tmp/$PBS_JOBID
  cp ~/input.dat .
  nastran batch=no hpmpi=yes dmp=10 input.dat
  cp -fr * ~/ && rm -fr /tmp/$PBS_JOBID


Compilers
  ifort   Fortran 90/77
  icc     C
  icpc    C++
GNU compilers are available but not recommended.

Compiler options:
  -O2                               General optimization
  -O3 -ipo -funroll-loops -ftz      Better optimization
  -openmp                           Enable OpenMP support

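A minimal sketch of compiling and running an OpenMP code with these options (the source file name and thread count are placeholders):

    # Build with the 'better optimization' flags plus OpenMP support.
    ifort -O3 -ipo -funroll-loops -ftz -openmp solver.f90 -o solver
    # The OpenMP runtime picks up the thread count from the environment.
    export OMP_NUM_THREADS=10
    ./solver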

Libraries
MPT, SGI's MPI library, optimized for shared memory:
  ifort source.f90 -lmpi
  mpirun -np 10 a.out
MKL, the Math Kernel Library, an optimized threaded math library:
  Full support for BLAS and LAPACK
  PRNGs, FFTs, and an FFTW-compatible interface
  Do use it; contact us for support.

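Link lines for MKL varied by version; a hedged sketch for a Fortran code calling BLAS/LAPACK, assuming the single-library convention of MKL releases of that era (check the MKL documentation for the exact libraries):

    # -lmkl pulls in BLAS/LAPACK; -lguide was Intel's OpenMP runtime.
    ifort source.f90 -lmkl -lguide -lpthread
    # MKL threading honors OMP_NUM_THREADS.
    export OMP_NUM_THREADS=8
    ./a.out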

PBS
Memory is enforced and defaults to 1 MB, so use
  #PBS -l mem=100mb
to request what you actually need. Use the route queue. Only 30 CPUs are available for batch jobs; the other 2 are reserved for compiling, sftp, PBS, and so on. Please clean up /tmp.
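Submission and monitoring use the standard PBS commands (job.pbs is a placeholder script name):

    # Hand the script to the scheduler; qsub prints the job ID.
    qsub job.pbs
    # List your own jobs and their states (Q = queued, R = running).
    qstat -u $USER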

Questions?
