Informationsaustausch für Nutzer des Aachener HPC Clusters (Information Exchange for Users of the Aachen HPC Cluster)


1 Informationsaustausch für Nutzer des Aachener HPC Clusters
Paul Kapinos, Marcus Wagner

2 Agenda
(The RWTH Compute cluster)
Project-based management of the cluster resources
Interactive usage
Using the batch system
Integrative Hosting
Discussion

3 The RWTH Compute Cluster
No. 32 in the TOP500 (June 2011), no. 4 in Germany; no. 272 in the TOP500 (Nov 2014)
MPI partition: 1358 Westmere EP nodes, 2x Xeon X5675 (6-core, 3.06 GHz) => 16k cores; 24 or 96 GB RAM (4:1)
SMP partition: 88 Nehalem EX nodes, 16x Xeon X7550 (8-core, 2.00 GHz) => 11k cores; ... GB RAM
Interactive front ends and back ends (~1% of the cluster): 300 cores, max. 256 GB RAM, ... users per node
(plus GPU cluster, MIC cluster, old hardware, IH systems)
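The core counts on the slide follow directly from the node, socket, and core figures; a quick shell check, using only the numbers given above:

    $ echo $(( 1358 * 2 * 6 ))   # MPI partition: 16296, i.e. ~16k cores
    $ echo $(( 88 * 16 * 8 ))    # SMP partition: 11264, i.e. ~11k cores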

4 Why?
Because we must: the requirements of the Science Council (Wissenschaftsrat) for requesting funding for future computer systems state that "a scientific process for the allocation of the expensive compute resources has to be established which will guarantee fair handling of all users."

5 Why? (II)
Fair distribution of resources; the main goals:
  correlation between used resources and scientific value
  defined (short...) job starting times
  defined, predictable throughput for researchers
  effective and resource-saving usage patterns
    would you drive fuel-efficiently if you did not have to pay for the fuel? If you had no clue how much fuel you had burned?
  and, last but not least, to keep some buddies within bounds

6 The status
Implemented with projects and queues in the LSF batch system
JARA-HPC partition (30%): since 2012
General introduction: Q3/2014; up 'n' running now
Use a project: add a line to your batch file: #BSUB -P abcd4321
Check your quota: $ r_batch_usage
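A minimal batch script using such a project line might look as follows. This is a sketch: the job name, limits, and binary are invented for illustration, abcd4321 is the placeholder project ID from the slide, and the zsh shebang assumes the cluster's default shell.

    #!/usr/bin/env zsh
    #BSUB -J testjob             # job name (invented for this example)
    #BSUB -o testjob.%J.log      # output file; %J is replaced by the job ID
    #BSUB -W 1:00                # requested wall-clock time (hh:mm)
    #BSUB -n 12                  # number of slots (cores)
    #BSUB -P abcd4321            # charge the consumed core-h to this project
    ./a.out                      # hypothetical application binary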

7 How? (II)
Free quota:
  scientific employees: 2,000 core-h per month (about one week on a 12-core node per month)
  students: 500 core-h per month
Need more? File a project!
  RWTH Small (S): up to 10,000 core-h p.m. (0.01 mio core-h p.m.); technical review only
  RWTH Standard (M): up to 0.05 mio core-h p.m.; a project description is required (=> internal scientific review)
  JARA-HPC / RWTH Big (XL/L): up to 2.5 mio core-h p.m.; submission twice a year, following the JARA-HPC procedures; a detailed project description is required (=> external scientific reviews)
Students needing more for a lecture / course / thesis: RWTH Lecture / RWTH Thesis (XS), up to 20,000 core-h p.m. (*); technical review only
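The "about a week" estimate is easy to verify: 2,000 core-h spread over the 12 cores of one node amount to just under seven node-days:

    $ echo $(( 2000 / 12 ))      # -> 166 hours on one 12-core node
    $ echo $(( 166 / 24 ))       # -> 6 full days, i.e. about a week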

8 How-to file an application for computing time
Go to https://doc.itc.rwth-aachen.de/display/cc/projektbewirtschaftung
Decide what type of project you should apply for
Determine your needs; don't be shy! Don't try to be too exact:
  it's better to ask for 30% too much than 1% too little
  it's easier for you and for us if you ask for round sums (compute time, duration, ...)
Think about special requirements:
  overlong compute time? (more than 120 h is not possible)
  disk storage?
  one huge project, or maybe multiple subprojects? ...

9 How-to file an application for computing time (II)
Go to https://doc.itc.rwth-aachen.de/display/cc/projektbewirtschaftung
Fill in the right form:
  use Acrobat X to edit the PDF file; we need to extract the data electronically
  do not use meaningless values like "normal" or "much" for e.g. memory consumption
  do not cut corners; we do not know who Mr. "See Above" is!
Send the electronically readable PDF file to us by e-mail:
  do not send us screenshots or JPG, PNG, DOCX, TXT files
  do not send us signed+scanned PDFs by e-mail; print the same file, sign it, and fax or mail it to us
In the end we need the same document in two versions: signed and legally valid (thus fax or mail), and electronically readable.

10 How-to file an application for computing time (III)
Go to https://doc.itc.rwth-aachen.de/display/cc/projektbewirtschaftung
Filing an application for an RWTH Standard (M) project?
  A project description is required (for the internal scientific review)
  Mention whether your project is a follow-up project, is funded by some organisation, ...
Filing an application for a JARA-HPC / RWTH Big (XL/L) project?
  Submission twice a year, following the JARA-HPC procedures

11 How-to file an application for computing time (IV)
Go to https://doc.itc.rwth-aachen.de/display/cc/projektbewirtschaftung
Application form filled in, e-mailed, printed out, signed, faxed/mailed? Then wait.
Typically within a week: a message that both versions of the application form have arrived.
Some days later:
  1) a message that the project is ready to use (for small projects), or
  2) a message that the project has been introduced with a test quota of 0.01 mio core-h per month and that the scientific review process has started (for larger projects)
For (2), some weeks (or even months) later: a message that the project is approved and the full remaining quota is granted (often the runtime of the project is adjusted, too, according to the delay)
Yes, we know: this process is a really tedious and lengthy one. We are working on improving it, but the scientific review will remain a delaying factor.

12 What happens if you go over quota?
Running jobs continue to the end (and still consume core-h!)
Newly submitted and pending jobs are moved to the low-priority queue:
  they can still start, but if and only if there are free resources not used by normal-priority jobs
  jobs started from the low-priority queue still consume core-h, so your quota may go well into the red!
  today there is no hard limit in the low-priority queue; this is very likely subject to change in the future
On the 1st of each month you get the next month's quota added; if your quota is then in the black, new jobs will be submitted to, and pending jobs will be moved to, the normal-priority queue.
Technically there is no difference from which queue a job is started; only the start time differs!
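To check which queue your jobs ended up in, the standard LSF bjobs command is sufficient (the queue names themselves are site-specific and not named on the slide):

    $ bjobs -u "$USER"           # the QUEUE column shows whether a pending job sits in the low-priority queue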

13 How is the quota computed?
The main goal is to motivate users to use the resources continuously, but still allow some peaks.
Three-month sliding window:
  up to 300% of the monthly quota is available in a month
  unused quota from the previous month is transferred to the current month, but not further
  the quotas for the previous, the current, and the next month are added up
  the consumed core-h for the previous and the current month are added up
  the difference between both values is the amount of core-h available in the current month
Huh? https://doc.itc.rwth-aachen.de/display/cc/resource+contingents
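A worked example of this rule, using the employee free quota of 2,000 core-h and invented consumption figures:

    quota=2000                        # monthly quota in core-h
    consumed_prev=500                 # consumed last month (invented figure)
    consumed_cur=1200                 # consumed so far this month (invented figure)
    granted=$(( 3 * quota ))          # previous + current + next month's quota = 6000
    echo $(( granted - consumed_prev - consumed_cur ))    # -> 4300 core-h still consumable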

14 Check your quota now!
Check your quota:
$ r_batch_usage -h               (manual of r_batch_usage)
$ r_batch_usage                  (overview; a big terminal window is advisable)
$ r_batch_usage -q
  User: pk
  Status of user: RWTH-Mitarbeiter
  Quota monthly (core-h): 2000
  Remaining core-h of prev. month:
  Consumed core-h act. month: 3938
  Consumable core-h (%): -101
  Consumable core-h: 0

15 Check your project quota now!
Usage of the batch system with projects:
$ r_batch_usage -h               (manual of r_batch_usage)
$ r_batch_usage -p <projectname> (overview)
$ r_batch_usage -p jara0001 -q
  Group: jara0001
  Start of accounting period:
  End of accounting period:
  State of project: active
  Quota monthly (core-h):
  Remaining core-h of prev. month: 0
  Consumed core-h act. month:
  Consumable core-h (%): 70
  Consumable core-h:

16 Agenda
(The RWTH Compute cluster)
Project-based management of the cluster resources
Interactive usage
Using the batch system
Integrative Hosting
Discussion

17 Interactive usage
Batch system:
  MPI partition: 1358 Westmere EP nodes => 16k cores
  SMP partition: 88 Nehalem EX nodes => 11k cores
Interactive front ends and back ends (~1% of the cluster): 300 cores, max. 256 GB RAM, ... users per node
A contended resource! An issue on an interactive node directly affects hundreds of users.
We want to enable as many testing options as possible: the notorious trade-off between features and stability.

18 Interactive usage
Go to: https://doc.itc.rwth-aachen.de/display/cc/interactive+usage
Interactive front ends:
  cluster.rz.rwth-aachen.de, cluster-linux.rz.rwth-aachen.de (main front ends)
  cluster-copy.rz.rwth-aachen.de, cluster-copy2.rz.rwth-aachen.de (for file transfer only)
  cluster-x.rz.rwth-aachen.de, cluster-x2.rz.rwth-aachen.de (GUI / remote desktop)
  ... and others
Supported protocols: SSH (with X11 forwarding), SCP, remote desktop (FastX/XWin32)
$ ssh -X -l ab cluster.rz.rwth-aachen.de
Accessible from the RWTH network only (VPN helps!)
Interactive back ends:
  used to off-load MPI processes started on the front ends
  off-loading is managed by the interactive MPIEXEC wrapper
  hardware is subject to change; currently: 8x 12-core Westmere with 96 GB RAM
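After logging in, a quick way to verify that X11 forwarding works is to start a trivial X client (a sketch: xclock is a standard X11 demo program and needs a running X server on your local machine; "ab" stands for the user ID as on the slide):

    $ ssh -X -l ab cluster-x.rz.rwth-aachen.de
    $ xclock &       # a clock window on your local display confirms X11 forwarding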

19 Interactive usage
Go to: https://doc.itc.rwth-aachen.de/display/cc/interactive+usage
Interactive front ends are frequented by hundreds of users! Any issue directly interrupts the work of these users!
Purposes: data transfer, job submission, application porting, testing, tuning, debugging
NOT FOR PRODUCTIVE RUNS; USE THE BATCH SYSTEM
Rule of thumb: not more than 20 minutes of CPU time
  that does not mean "I can start 80x 19.5-minute runs, one after another"!
Really need compute power and an interactive session? Batch jobs with GUI: https://doc.itc.rwth-aachen.de/display/cc/submitting+a+job+with+gui
To still allow advanced testing, we set flexible quotas using the cgroup system:
  CPU: all processes of one user together are configured to get the same share of CPU cycles as the processes of any other user
  Memory: real memory is limited to a part of the available RAM; this prevents one user from consuming all RAM and crashing the whole node; use the memquota command to find out the current situation
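A simple way to keep an eye on the 20-minute rule is standard ps, which can print the CPU time your processes have accumulated so far:

    $ ps -u "$USER" -o pid,cputime,comm    # the TIME column shows accumulated CPU time per process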

20 Interactive usage
Go to: https://doc.itc.rwth-aachen.de/display/cc/interactive+usage
Interactive front ends are frequented by hundreds of users! Any issue directly interrupts the work of these users!
Main performance issue on the front ends: DATA TRANSFER
Use the dedicated front ends for any data transfer, TAR, ZIP, ...:
  cluster-copy.rz.rwth-aachen.de
  cluster-copy2.rz.rwth-aachen.de
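For example, packing results remotely and fetching the archive through the copy front ends rather than the main ones (a sketch; the archive and directory names are invented):

    $ ssh cluster-copy.rz.rwth-aachen.de "tar czf results.tar.gz project/output"
    $ scp cluster-copy2.rz.rwth-aachen.de:results.tar.gz .     # pull the archive to your local machine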

21 Interactive usage
Go to: https://doc.itc.rwth-aachen.de/display/cc/interactive+usage
Interactive back ends:
  not intended for direct login
  hardware is subject to change (currently: 8x 12-core Westmere with 96 GB RAM)
  used to off-load MPI processes started on the front ends (reduce load!)
  off-loading is managed by the interactive MPIEXEC wrapper; example: $MPIEXEC -np 2 hostname
  processes are started on less-loaded nodes, but massive overloading is allowed
  further, you are not alone on these systems: a load of 100+ is not unusual
  any productive runs and time measurements there are absurd to the highest degree
The only sensible (overloaded) test run answers "will my binary start with XYZ ranks?"
  if yes, press Ctrl-C and proceed to the batch system
  if not, you get the answer immediately (instead of waiting a day for the batch job)
NOT FOR PRODUCTIVE RUNS; USE THE BATCH SYSTEM
https://doc.itc.rwth-aachen.de/display/cc/testing+of+mpi+jobs
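Such a start-up test could look like this (a sketch; ./a.out stands for your own binary, and $MPIEXEC is the interactive wrapper mentioned above):

    $ $MPIEXEC -np 64 ./a.out    # do all 64 ranks come up? If yes: Ctrl-C and submit a batch job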

22 Interactive usage: Changed Terms of Use
Passing along your HPC account to third parties is explicitly forbidden:
  secondary logins will be gradually deactivated during the next months
  secondary accounts (created after 05/2014) are already configured without login permissions
  the only remaining use of secondary accounts: data sharing
Jobs and processes (in batch, on interactive front ends and back ends) which disturb other jobs/processes may be killed without further notice.
If your job has been killed, you were probably doing something bad: read the documentation!
https://doc.itc.rwth-aachen.de/display/cc/2014/12/02/ %3A+Changed+Terms+of+Use

23 "Would You Like to Know More?" Links The Primer, It s worth reading: slightly outdated now update planned The Dokuweb, it s most actual and worth reading, too: Linux Cluster: https://doc.itc.rwth-aachen.de/display/cc/home Parallel Programming: https://doc.itc.rwth-aachen.de/display/ccp/home Events: https://doc.itc.rwth-aachen.de/display/ve/home Today s themes in Dokuweb: Project-based management of the cluster resources https://doc.itc.rwth-aachen.de/display/cc/projektbewirtschaftung Manuals: https://doc.itc.rwth-aachen.de/display/cc/manuals Interactive usage: https://doc.itc.rwth-aachen.de/display/cc/interactive+usage Batch system: https://doc.itc.rwth-aachen.de/display/cc/using+the+batch+system
