bwgrid Treff am URZ. Sabine Richling, Heinz Kredel. Universitätsrechenzentrum Heidelberg / Rechenzentrum Universität Mannheim. 15 July 2010.


1 bwgrid Treff am URZ
Sabine Richling, Heinz Kredel
Universitätsrechenzentrum Heidelberg / Rechenzentrum Universität Mannheim
15 July 2010

2 Course Organization: bwgrid Treff
Participants:
- Current users of the bwgrid clusters HD/MA
- Students and scientists interested in grid computing
- Members of the Universities of Heidelberg and Mannheim
Scope:
- bwgrid status and plans
- Lectures and/or workshops
- User contributions
- To meet you in person

3 Course Organization: bwgrid Treff, Summer Term 2010
Main focus of each meeting:
- 29 April: bwgrid status and interconnection HD/MA
- 20 May: Batch system and parallel execution of single-core jobs
- 17 June: Parallel programming with Java
- 15 July: Parallel programming with Java (Part II)

4 Course Organization: bwgrid Treff, 15 July 2010
Agenda for today:
- bwgrid news
- Parallel programming with Java threads (Part II)

5 bwgrid News

6 bwgrid News: What is bwgrid?
- D-Grid community project of the universities in Baden-Württemberg
- Compute clusters at 7 universities
- Central storage unit in Karlsruhe
- Distributed system with local administration
- Computing centers focus on software in different fields of research
- Access via at least one middleware supported by D-Grid

7 bwgrid News: bwgrid Resources
Compute clusters:
- Mannheim/Heidelberg: 280 nodes (direct interconnection)
- Karlsruhe: 140 nodes
- Stuttgart: 420 nodes
- Tübingen: 140 nodes
- Ulm (Konstanz): 280 nodes (joint cluster with Konstanz, hardware in Ulm)
- Freiburg: 140 nodes
- Esslingen: 180 nodes
Central storage in Karlsruhe: 128 TB (with backup), 256 TB (without backup)
[Slide shows a map of the sites Mannheim, Heidelberg, Karlsruhe, Stuttgart, Tübingen, Ulm and Freiburg, with Frankfurt and München as reference points.]

8-10 bwgrid News: bwgrid User Support
General information: hardware and software; grid access (server addresses); project descriptions
D-Grid user support: trouble ticket system; news module for maintenance
User support available at all sites:
- Login messages
- Local webpages, wikis, ...
- E-mail address for local support
- Local news
- man bw-grid
- Terms and conditions for using bwgrid
- Home directory, scratch/work space, /tmp, ...

11 bwgrid News: bwgrid Cluster Mannheim/Heidelberg
[Architecture diagram: Benutzer (users) reach the system via Belwue through VORM front ends and the PBS batch system; admin components include LDAP, AD, and passwd; Cluster Mannheim and Cluster Heidelberg are coupled over InfiniBand through an Obsidian + ADVA link; Lustre file systems bwfs MA and bwfs HD provide storage.]

12-14 bwgrid News: bwgrid Cluster Mannheim/Heidelberg News
Lustre system in Heidelberg still offline:
- Scratch or work space not available
- Use $HOME for temporary data, but try to stay below 50 GB
New MPI modules for operating system SL 5.5:
- MVAPICH 1.2
- MVAPICH2 1.4, 1.5
- OpenMPI 1.2.8, 1.2.9, 1.3.4
- Intel MPI 4.0
- for GNU compiler 4.1 and Intel compilers 10.1, 11.1
Other new software modules:
- chem/gaussian/g09a02 (Heidelberg users only!)
- schrodinger/2009u2
- compiler/intel/11.1
- math/r/2.9.2
- math/r/
- math/r/
- vis/molden/4.8
- vis/molekel/5.4.0
- vis/root/5.26
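A brief usage sketch, not from the slides: the module names are the ones listed above, and the commands are the standard Environment Modules and coreutils tools available on these clusters.

    # Check how much of $HOME is in use while the Lustre scratch space is offline
    du -sh $HOME
    # List the installed software modules, then load two of the modules named above
    module avail
    module load compiler/intel/11.1
    module load math/r/2.9.2
    # Verify which modules are loaded in the current session
    module list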

15 Parallel Programming with Java (Part II)
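The Java slides themselves are not part of this transcription. As a minimal illustrative sketch of the topic, not taken from the lecture (class and variable names are our own), the following Java 6 style program sums an array in parallel with plain Java threads:

    public class ParallelSum {
        public static void main(String[] args) throws InterruptedException {
            final int n = 1000000;
            final double[] data = new double[n];
            java.util.Arrays.fill(data, 1.0);

            final int nThreads = Runtime.getRuntime().availableProcessors();
            final double[] partial = new double[nThreads]; // one result slot per thread
            Thread[] threads = new Thread[nThreads];

            for (int t = 0; t < nThreads; t++) {
                final int id = t;
                threads[t] = new Thread(new Runnable() {
                    public void run() {
                        // Each thread sums its own contiguous block of the array
                        final int lo = id * n / nThreads;
                        final int hi = (id + 1) * n / nThreads;
                        double s = 0.0;
                        for (int i = lo; i < hi; i++) {
                            s += data[i];
                        }
                        partial[id] = s;
                    }
                });
                threads[t].start();
            }

            double sum = 0.0;
            for (int t = 0; t < nThreads; t++) {
                threads[t].join();  // wait for worker t, then collect its result
                sum += partial[t];
            }
            System.out.println("sum = " + sum); // prints 1000000.0
        }
    }

Giving each thread its own slot in the partial array avoids any synchronization on a shared result; the main thread only reads a slot after joining the thread that wrote it.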

16 Farewell
Thank you for participating.
Lecture times in fall/winter: Mannheim: ... Heidelberg: ...
Plan for further meetings (once a month, on Thursdays, 16:15-18:00):
- September 2010: in Mannheim
- October 2010: in Mannheim
- November 2010: in Heidelberg
- December 2010: in Heidelberg
- January 2011: in Heidelberg
User contributions?
