Using Huygens. Jeroen Engelberts - jeroene@sara.nl, Consultant Supercomputing. UvA HPC Course


1 Using Huygens Jeroen Engelberts - jeroene@sara.nl Consultant Supercomputing

2 Overview of the course: SARA; HPC facilities; Huygens: system description, endianness, file systems; module environment; batch system; accounting; user support; hands on. Let's play!

3 About SARA SARA is an independent non-profit company delivering a full service package in the areas of high-performance computing and networking, data storage, visualization and e-science services. Founded in 1971 as an Amsterdam computing centre by the two Amsterdam universities (UvA and VU) and the current CWI; independent since 1995. Founded Vancis in 2008, offering ICT services and ICT products to enterprises, universities, and educational and healthcare institutions. The first supercomputer in The Netherlands arrived at SARA in 1984 (Control Data Cyber 205); SARA has hosted the national supercomputer(s) ever since.

4 SARA OSD Formerly known as HPC&V and HPN, Operations, Support and Development is subdivided into six groups: Supercomputing; Clustercomputing; e-Science & Cloud Services; Visualization; Data services; Network innovation & support. Some 50 people: system programmers / consultants (BSc - MSc - PhD).

5 HPC Facilities at SARA Capacity computing: BiG Grid national grid infrastructure; Lisa national compute cluster. Capability computing: Huygens national supercomputer. Experimental capability computing: GPU cluster; HPC Cloud. The access method depends on your affiliation; for questions contact the SARA helpdesk.

6 Supercomputing Huygens IBM pSeries 575. Processor: IBM POWER6, 4.7 GHz (dual core). Node: 16 dual-core processors. In total: 108 nodes, 3456 cores, 65 Tflop/s. Memory: 128 GB per node (90 SCNs) or 256 GB per node (18 MCNs). Interconnect: 8x (4x DDR) InfiniBand, 160 Gbit/s inter-node. Disk: 3240 x 300 GB = 972 TB (700 TB net capacity). OS: SuSE SLES 11 SP1. Water cooling: 33% more energy efficient (6 °C inlet; 25 °C outlet). Energy consumption: kW. Cabling: 30.7 km (IB, FC, UTP). Huygens occupies 9 compute racks (1700 kg each), 4 InfiniBand racks and 20 storage racks. Number 52 in the November 2008 Top500 (48.55 Tflop/s); 79 in Jun09, 92 in Nov09, 109 in Jun10, 156 in Nov10, 303 in Jun11. Designed to be a well-balanced system instead of a Top500 killer!

7 Huygens Water Cooling

8 Internode Networking Huygens is equipped with an InfiniBand network; each node has 2 InfiniBand adapters, and each adapter has 4 ports. The 8 ports are cabled to 8 separate switches. Each port has a throughput of 20 Gbit/s, aggregating to 160 Gbit/s per node. With the 8b/10b encoding of DDR InfiniBand this comes down to 2 GB/s of payload bandwidth per port, 16 GB/s aggregated.

9 Processors and Memory (1) NUMA (non-uniform memory access) effects occur in Huygens due to its design. Each node consists of four quads, or multi-chip modules (MCMs). Each quad has four processors; each processor is a dual-chip module (DCM). There are three levels of cache memory: L1 (64 kB instruction, 64 kB data) on the core; L2 (4 MB) on the chip; L3 (32 MB) on the MCM.

10 Processors and Memory (2) [POWER6 chip diagram: two 4.7 GHz cores, each with 128 kB L1 and a private 4 MB L2; a shared 32 MB L3 with its controller and directory; a fabric switch connecting the memory controller, the I/O controller and the on-node and off-node fabric links, with byte-wide read/write paths as labelled.]

11 Affinity It is possible to assign tasks to specific cores manually. On Huygens, affinitization is done automatically by SARA's LoadLeveler submit filter. More details on affinity later today.

12 Endianness (1) Origin: Gulliver's Travels, the dispute between Lilliput (GB) and Blefuscu (F) regarding which end to open an egg at. Introduced in ICT by Danny Cohen in 1980. How the hexadecimal value 0x89ABCDEF is stored: big endian puts byte 89 at the lowest address (89 AB CD EF), little endian puts byte EF first (EF CD AB 89).

13 Endianness (2) Huygens (POWER6): big endian. PCs (Intel, AMD): little endian. Transporting binary data files between them is therefore not trivial. Options: text files instead of binary data files; NetCDF (Network Common Data Form); HDF (Hierarchical Data Format); XLFRTEOPTS=ufmt_littleendian=unit list (conversion by the XL Fortran runtime).
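A quick shell check of a machine's byte order (a minimal sketch using the standard printf and od utilities): the same four bytes are interpreted as a native 4-byte integer.

# Prints 16777216 (0x01000000) on a big-endian machine such as Huygens,
# and 1 on a little-endian PC
printf '\x01\x00\x00\x00' | od -An -td4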

14 File systems [photo: storage cabinets]

15 File systems /home: home directory of the user. Quota (default): 200 GB per user. Backed up. Divided into 9 parts of 14 TB each; aggregated speed 540 MB/s. One such part can be restored from tape in a day. /scratch: scratch directories of the user. Quota: 8 TB on /scratch/local and 8 TB on /scratch/shared. Not backed up. /scratch/shared is available to all nodes; /scratch/local is available to each node separately. /scratch/shared has an aggregated speed of 26.4 GB/s, /scratch/local of 21 GB/s. Retention time: at least two weeks (currently).

16 File systems /project: project space. To be requested specially for projects. Not backed up. Divided into 2 parts, one slow and one fast. Capacity / aggregated speed: slow 132 TB at 5.3 GB/s; fast 211 TB at 21 GB/s. /archive: directory for archiving purposes. A disk cache in front of a tape robot; older files take long to retrieve. Virtually no limit: tapes can be added. Preferably store large files (tar, zip), as sketched below.
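A minimal sketch of that pattern (the per-user path under /archive is an assumption; use the directory SARA assigns you):

# Pack many small result files into one large archive, then copy it to /archive
tar czf results.tar.gz workdir/
cp results.tar.gz /archive/$USER/   # hypothetical path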

17 Modules Why modules? Environment variables are set for you, like PATH and LD_LIBRARY_PATH. Multiple versions of software can coexist.

18 Modules Commands
module avail (list available modules)
module load modulename / module add modulename (load a module)
module display modulename (show what loading it would change)
module unload modulename / module rm modulename (unload a module)
module list (list currently loaded modules)
module help (get help)
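A typical session might look like this (a minimal sketch; gaussian/09 is the module used in the hands-on below, other module names vary per installation):

module avail                 # list the installed modules
module load gaussian/09      # set PATH, LD_LIBRARY_PATH etc. for Gaussian 09
module list                  # verify that it is loaded
module display gaussian/09   # show exactly which variables it sets
module unload gaussian/09    # undo the changes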

19 Module avail

20 Module load

21 Module display

22 Module unload / Module list

23 LoadLeveler: Batch environment; Jobs; Commands and Keywords; Scheduling policy; Submit filter

24 LoadLeveler Batch environment Preparations are done interactively; computations run in the batch environment. You describe the work in a job script and submit it. On Huygens the batch system is LoadLeveler; see: Tivoli Workload Scheduler LoadLeveler (Using and Administering). LoadLeveler controls: resource allocation, scheduling of jobs, execution of jobs, and keeping track of job information. Huygens configuration for batch: 1 interactive node, 2 test nodes, 101 batch nodes (18 with 256 GB of memory, all others with 128 GB).

25 LoadLeveler Job script A job script consists of: a number of LoadLeveler keywords (starting with #@) describing resource requirements, to set up an optimal environment for your job, closed by #@ queue; and a shell script that describes the setup of the computation: create a working directory and go to it, copy input files and the executable to the working directory, invoke your executable, and copy output files back to your home directory or to the archive. A number of examples can be found at: copy-paste them and adapt them to your situation.

26 LoadLeveler Job example
#!/bin/bash
#@ node = 2
#@ tasks_per_node = 32
#@ notification = never
#@ input = /dev/null
#@ output = out.$(jobid)
#@ error = err.$(jobid)
#@ wall_clock_limit = 1:10:20
#@ job_type = parallel
#@ network.mpi = sn_all,not_shared,us
#@ queue
# Here the shell script starts.
cd $HOME/workdir
./my_program

27 LoadLeveler Commands Intrinsic LoadLeveler commands:
llsubmit: submit a job
llcancel: cancel a job
llq: query LoadLeveler on job status
llstatus: provide the status of the nodes
Additional SARA scripts (to be found in /usr/sara/bin), providing extra views:
llstx: give the node occupation
llstc: give the class configuration of Huygens
lljobs: alternative for llq
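A typical cycle with these commands (a minimal sketch; the exact form of the job ID printed by llsubmit may differ):

llsubmit molecule.job    # submit the job script; LoadLeveler prints the job ID
llq -u $USER             # query the status of your own jobs
llcancel <jobid>         # cancel the job if needed
llstatus                 # check the status of the nodes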

28 LoadLeveler Batch classes A job is queued in a class by LoadLeveler (not in the job script), based on the user-specified node usage, wall clock limit and number of nodes:
#@ node_usage = shared/not_shared
#@ wall_clock_limit = hh:mm:ss
#@ node = 4
To allow the LoadLeveler scheduler maximum freedom only a few classes are defined (shared, short, normal, special). Use llstc to view the current class configuration:

29 LoadLeveler Batch classes
0 = unknown
1 = normal
2 = shared
3 = short
4 = spec_sys
5 = spec_training

30 LoadLeveler Scheduling policy Users can submit an unlimited number of jobs at one time. When a job is submitted it gets a status: Queued (eligible for scheduling) or Not Queued (not eligible for scheduling). Jobs are executed according to the scheduling policy: first in, first out. BUT: only jobs in Queued status are considered; the timestamp when a job acquires Queued status is considered to be its submission time; the number of jobs in Queued status is limited; a backfill mechanism applies.

31 LoadLeveler Scheduling policy Configurable limits (by the LoadLeveler administrator): Maxqueued: maximum number of jobs per user that can have Queued status (default 8). Maxrunning: maximum number of jobs per user that can be running simultaneously (default 8). Maximum wall clock limit: currently 5 days. Maximum number of nodes: currently 80 in normal production mode.

32 LoadLeveler Submit filter When you submit a job it is parsed by a submit filter (Huygens specific). The filter invokes the budget check script, which checks the remaining budget and computes the number of PNUs of all jobs that have finished the same day and of all jobs that are still in the queue. It determines the class from the wall clock limit, the number of nodes and node_usage. It checks whether certain keywords are set; if not, it sets them to a (sensible) default value. Defaults can be overruled by #@ skip_rset.

33 Accounting: Getting access to Huygens; Accounts and Logins; Budget and jobcost

34 Accounting Getting access to Huygens Take a look at the SARA website: proposal to NWO; fill in the forms in IRIS; peer review process; approval from NWO. What next? Granting letter (from NWO): send a copy to SARA. Acceptance letter (from NWO): fill it in and return it to NWO. User form (download from the SARA website): fill it in and send it to SARA. Usage agreement (download from the SARA website): each user should fill this in, sign it and send it to SARA.

35 Accounting Accounts and Logins After receiving the proper forms, SARA will create an account and login for you. Account: administrative entity to keep track of the used budget. The owner of an account is the PI (Principal Investigator) who submitted the project proposal to NWO. A project can have one or more accounts associated with it, and each account can have several logins coupled to it. The duration of an account is 1 year (expiration date set by NWO). Login: combination of username + password and environment giving physical access to Huygens. Logins are STRICTLY PERSONAL. A login is at any time associated with one and only one account; logins can be moved from one account to another.

36 Accounting Account expiration Monthly warnings that the account will expire are sent (to the PI!), starting 3 months before the expiration date. Extension of an account is possible (contact NWO): ask for extra time (the budget remains); ask for extra budget (added to the remaining budget); submit a continuation proposal for extension of the same project (the budget is reset to the new value); or submit a completely new proposal with new accounts (logins can be moved to the new account). After the expiration date the account will be blocked and login to Huygens is denied. You will be asked to give SARA permission to remove the login and all data associated with it from the system. If you don't respond, we will first seek permission from the account owner to remove everything; if there is still no response, we will remove everything after a grace period of 6 months (as laid down in the Usage Agreement).

37 Accounting Budget and jobcost Budget: if the project proposal is accepted, a budget is assigned to the accounts. The budget is expressed in PNU (Processor Node Uur, Dutch for processor-node hour) or, equivalently, in SBU (System Billing Units). 1 PNU = the use of 1 core for 1 hour on Huygens. Jobcost (shared nodes): based on CPU time. Jobcost (dedicated nodes): based on wall clock time; you always pay for a complete node. Using 1 node for 1 hour will cost you 32 PNU (regardless of how many processes are running on the node). Capacity of Huygens: gross ~28M PNU/year; net ~24M PNU/year (2M/month).
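Example: a job on 4 dedicated nodes with a wall clock time of 5 hours is charged 4 x 32 x 5 = 640 PNU, however many of those 128 cores it actually keeps busy.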

38 Accounting Budget and jobcost For an overview of job costs use the command accuse: gives the consumed budget per day or per month. For an overview of the budget use the command accinfo: gives information about the initial, consumed and remaining budget; gives contact information (e-mail address of the account owner); gives the list of logins associated with the account.

39 Accounting Budget and jobcost
Account : sscpwimr (HUYGENS2)
Customer : (10305) Klant voor subinstelling wimr@sara.nl
Initial budget : 60944:13
Used budget : 17659:03
Remaining budget : 43285:10
Creation date :
Last modified :
Valid until :
User ID(s) linked to this account:
User Group
wimr ANY
wimr2 ANY

40 Accounting Budget and jobcost Accounting information: job info is kept by LoadLeveler in a temporary file. After the job finishes: correct finish: the temporary file is added to a history file; LoadLeveler crash: the temporary file is discarded; job restarted by the system: the temporary file is discarded. Once a day (during the night) accounting information is extracted from this history file and added to the accounting database, and the remaining budget is computed; if this is negative your account will be blocked. Budget check: to avoid you overrunning your budget there is a budget check, which runs at submission time and at job start. When the remaining budget is not sufficient, your job will be refused.

41 User Support Expert user support (help desk, online documentation, (grid) tutorials, courses). 7 full-time HPC consultants (all computational scientists) for both Huygens and Lisa. Optimization, parallelization/hyperscaling of user codes. Benchmarking. Visualization and data analysis of complex data sets. Grid support: gridification of applications; grid-enabling of (local) resources; grid tutorials, education of traditional HPC users to use the grid infrastructure. Support for Grand Challenge applications. e-Science support.

42 Hands-On Contents Download the necessary files locally. Install user tools (Windows users only). Copy the files to Huygens. Log in to Huygens. Compile Molden (a computational chemistry tool). Look at the input file with Molden. Submit a job (geometry optimization) and wait for the result... Analyze the result. Copy the output/results back locally.

43 Hands-On Download Download the material from ftp://ftp.sara.nl/pub/outgoing/usinghuygens It includes: molden5.0.tar.gz (molecular visualization tool); molecule.job, molecule.zmat (input for the example). For Windows users additionally: putty-0.62-installer.exe, winscp437setup.exe, Xming setup.exe. Windows users: install the three packages mentioned above.

44 Hands-On Copy files to Huygens Mac & Linux users: open a Terminal (Linux) or X11 (Mac), go to the directory where you downloaded the files, and type: scp molecule.* molden5.0.tar.gz sdemonnn@huygens.sara.nl: where nnn is your demo number. Windows users: start WinSCP, create a New session and fill in: Host name: huygens.sara.nl; User name: sdemonnn; Password: *******. Look up the downloaded files and copy them to Huygens.

45 Hands-On Login to Huygens Mac & Linux users: open a Terminal (Linux) or X11 (Mac) and type: ssh -X sdemonnn@huygens.sara.nl where nnn is your demo number. Windows users: start Xming (if not yet started; check the system tray), then start PuTTY. Host Name: huygens.sara.nl. Click Connection/SSH/X11 and, under X11 forwarding, check Enable X11 forwarding. Click Open. Use your sdemonnn username and password to log in.

46 Hands-On Molden All users:
Extract the Molden tarball: tar zxf molden5.0.tar.gz
Go into the Molden directory: cd molden5.0
Modify the makefile: gedit makefile (change CC = cc into CC = gcc)
Make the binary: make
Move the resulting binary to ~/bin: mv molden ../bin
Go back to the home directory: cd ..
Have a look at the molecule: molden molecule.zmat

47 Hands-On Inspect job (1) All users: Edit job script: gedit molecule.job

48 Hands-On Inspect job (2) LoadLeveler instructions:
#!/bin/bash
#@ node = 1
#@ tasks_per_node = 32
#@ input = /dev/null
#@ output = molecule.$(jobid).out
#@ error = molecule.$(jobid).err
#@ wall_clock_limit = 1:00:00
# notification = never
# job_type = parallel
#@ network.mpi = sn_all,not_shared,us
# ll_res_id = p r
#@ queue

49 Hands-On Inspect job (3) Shell commands:
STARTDIR=`pwd`
# Build the Gaussian input file in the job's temporary directory ($TMPDIR)
echo "%NProcShared = 32" > $TMPDIR/molecule.inp
echo "#RHF/3-21G Opt" >> $TMPDIR/molecule.inp
echo "" >> $TMPDIR/molecule.inp
echo "My molecule" >> $TMPDIR/molecule.inp
echo "" >> $TMPDIR/molecule.inp
echo "0,1" >> $TMPDIR/molecule.inp
cat molecule.zmat >> $TMPDIR/molecule.inp
# Run Gaussian 09 from the temporary directory
cd $TMPDIR
module load gaussian/09
g09 < molecule.inp

50 Hands-On Submit job All users:
Submit the job: llsubmit molecule.job
Inspect the status of your job: llq -u sdemonnn
Once running, inspect the output file: tail -f molecule.<jobid>.out (fill in the job ID)
Once finished, analyze the output file: molden molecule.<jobid>.out
In Molden, press Movie. See how benzene becomes flat and hexagonal!

51 Thank you for listening!
