Bilag 1
2013-03-12/RB

DeIC (DCSC) Scientific Computing Installations

DeIC, previously DCSC, currently has a number of scientific computing installations, distributed across five regional operating centres. These are located at the Technical University of Denmark (DCSC/DTU), the University of Copenhagen (DCSC/KU), the University of Southern Denmark (DCSC/SDU), Aarhus University (DCSC/AU) and Aalborg University (DCSC/AAU). Details on each installation are provided below.

1 DCSC/AU: HUGE
Computer type and Vendor (Cluster/SMP): Four IBM Power6/Power7 SMP systems joined in a cluster environment.
Date for first day of operations (year; month): January 2008
Theoretical Peak Performance: ca. 1.4 TFLOPS
Operating System: AIX 5.3
Computer interconnect(s): 1 Gbps interconnect.
Processor type and speed: Node 1: 8 dual-core 4.7 GHz IBM Power6. Node 2: 6 dual-core 4.7 GHz IBM Power6. Nodes 3+4: 32-core 3.3 GHz IBM Power7 (installed Dec. 2010, ca. 0.9 TFLOPS).
Memory (per processor): Node 1: 512 GB shared memory. Node 2: 48 GB shared memory. Nodes 3+4: 128 GB memory each.
Storage Capacity: 1.1 TB scratch file system on each node; ca. 20 TB user file system.
Queuing system: Torque and Maui
Backup: TSM, client to the central AU backup system.
Software: Gaussian 09; IBM compilers and ESSL (IBM math library); POE (IBM's MPI)
Network Capacity: Gigabit Ethernet connections; 100 Mbps firewall.
Miscellaneous technical information: User file system shared with Grendel.
DCSC System Administrator: Niels Carl Hansen; +45 89423201; ncwh@cscaa.dk; www.cscaa.dk

2 DCSC/AU: Grendel
Computer type and Vendor (Cluster/SMP): Linux cluster from Sun, Dell, Supermicro, HP and IBM
Date for first day of operations (year; month): August 2007
Theoretical Peak Performance: 104 TFLOPS (CPU) + 133 / 37 TFLOPS (GPU, single/double precision)
Operating System: CentOS 5.x / 6.x x86_64 Linux
Computer interconnect(s): Gigabit, InfiniBand SDR and QDR.
Total nodes / total cores: 850 / 8540

Processor types, speed and memory:
Dell/AMD Opteron 2218, 2 x 2 cores @ 2.6 GHz / 8 GB, 85 nodes from Aug 2007
Sun/AMD Opteron 2356, 2 x 4 cores @ 2.3 GHz / 16 GB, 230 nodes from Jan 2008
HP/Intel X5550, 2 x 4 cores @ 2.6 GHz / 24 GB, 196 nodes from Jan 2009
Supermicro/Intel, 2 x 4 cores @ 2.6 GHz / 24 GB, 18 nodes from Jan 2009 (with 36 Nvidia 1060 GPUs)
HP/Intel X5650, 2 x 6 cores @ 2.6 GHz / 48 GB, 165 nodes from Apr 2011
HP/AMD Opteron 6176, 4 x 12 cores @ 2.3 GHz / 96 GB, 9 nodes from Apr 2011
HP/Intel X5650, 2 x 6 cores @ 2.6 GHz / 48 GB, 40 nodes from Apr 2011 (with 40 Nvidia 1060 and 60 Nvidia 2050 GPUs)
IBM/Intel E5-2670, 2 x 8 cores @ 2.6 GHz / 64 GB, 110 nodes from Oct 2012
Storage Capacity: 175 TB user file system; 200-1700 GB local scratch on each compute node.
Queuing system: Torque and Maui
Backup: TSM, client to the central AU backup system.
Software: Gaussian 09; WASP; Intel compilers and MKL (Intel math library); Portland Group compilers; ACML (AMD math library); OpenMPI
Network Capacity: Gigabit; 40 Gbps QDR InfiniBand; 100 Mbps firewall.
DCSC System Administrator: Niels Carl Hansen; +45 89423201; ncwh@cscaa.dk; www.cscaa.dk

3 DCSC/DTU: NIFLHEIM
Computer type and Vendor (Cluster/SMP): Linux cluster from HP and IBM.
Date for first day of operations (year; month): September 2012 (2006).
Peak performance: 60 TFLOPS.
Operating System: CentOS Linux.
Computer interconnect(s): QDR InfiniBand, SDR InfiniBand, dual gigabit.
Processor type and speed:
Dual-processor 8-core Intel Xeon E5-2670 2.6 GHz (76 nodes from 2012).
Dual-processor quad-core Intel Xeon X5550 2.67 GHz (116 nodes from 2010).
Dual-processor quad-core Intel Xeon X5570 2.93 GHz (412 nodes from 2009).
Dual-processor dual-core AMD Opteron 2218 2.6 GHz (162 nodes from 2007).
Dual-processor dual-core AMD Opteron 285 2.6 GHz (48 nodes from 2006).
Memory (per processor):

4 or 8 GB RAM per CPU core in the 8-core Intel Xeon nodes.
3 GB RAM per CPU core in the quad-core Intel Xeon nodes.
2 or 6 GB RAM per CPU core in the AMD Opteron nodes.
Storage Capacity: Central data storage of 113 TB on 4 Linux NFS file servers; 11 TB on an older IBM/NetApp storage system.
Queuing system: TORQUE resource manager with MAUI job scheduler.
Backup: Daily backup of NFS servers to 140 TB disk storage.
Software: Scientific software packages installed by user groups; commercial compilers: Intel; math libraries: AMD ACML, Intel MKL; communication: OpenMPI.
Network Capacity: QDR InfiniBand on the 8-core Intel nodes; SDR InfiniBand on 24 Opteron nodes; dual gigabit on all other nodes; 1 Gbit/s external network to DTU and the Danish Research Network.
DCSC System Administrator: Ole Holm Nielsen; +45 45253187; support@fysik.dtu.dk; https://wiki.fysik.dtu.dk/niflheim

4 DCSC/DTU: SUNHPC
5 DCSC/DTU: CBS
6 DCSC/DTU: Alfheim
7 DCSC/KU: Steno

8 DCSC/SDU: Horseshoe8
Computer type and Vendor (Cluster/SMP): Cluster of 27 Dell C6220 nodes
Date for first day of operations (year; month): 2012; 11
Theoretical peak performance: 8.3 TFLOPS
Operating System: Linux, CentOS 6.3
Computer interconnect(s): Gigabit
Processor type and speed: 2 x Intel E5-2665 octa-core Sandy Bridge 2.4 GHz CPUs
Memory (per processor): 6 nodes with 8 GB/core, 21 nodes with 4 GB/core
Storage Capacity: 32 TB raw capacity
Queuing system: Torque 2.5.12 / MAUI 3.3.1
Software: GCC 4.4.6, Intel C/C++ and Fortran 77/9x 12.0.2, Intel MKL, AMD ACML, OpenMPI 1.6.4
Network Capacity: 1 Gbps
DCSC System Administrator: Torben Madsen; +45 65503862
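The theoretical peak figures quoted for these installations are the usual product of core count, clock frequency and floating-point operations per cycle. A minimal sketch for Horseshoe8 (a hypothetical illustration, not part of the source document; the factor of 8 double-precision FLOPs per cycle per core is an assumption for the AVX-capable Sandy Bridge E5-2665):

    # Hypothetical sketch: reproduce Horseshoe8's quoted theoretical peak.
    nodes = 27             # Dell C6220 nodes
    sockets_per_node = 2   # Intel E5-2665 CPUs per node
    cores_per_socket = 8   # octa-core
    clock_hz = 2.4e9       # 2.4 GHz
    flops_per_cycle = 8    # assumed double-precision AVX rate for Sandy Bridge

    peak = nodes * sockets_per_node * cores_per_socket * clock_hz * flops_per_cycle
    print(f"{peak / 1e12:.1f} TFLOPS")   # -> 8.3 TFLOPS, matching the entry above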

9 DCSC/SDU: Horseshoe7
Computer type and Vendor (Cluster/SMP): Cluster of 12 Fujitsu Celsius R670-2 GPU-enabled nodes.
Date for first day of operations (year; month): 2011; 09
Theoretical peak performance: 1.7 TFLOPS (CPU) + 12.4 TFLOPS DP / 25 TFLOPS SP (GPU)
Operating System: Linux, CentOS 6.3
Computer interconnect(s): DDR InfiniBand (20 Gbps), Gigabit
Processor type and speed: 2 x Intel X5670 six-core Westmere 2.93 GHz CPUs and 2 x NVIDIA C2070/C2075
Memory (per processor): 2 GB/core
Storage Capacity: 48 TB raw capacity
Queuing system: Torque 2.5.12 / MAUI 3.2.6
Software: GCC 4.4.6, Intel C/C++ and Fortran 77/9x 12.0.2, Intel MKL, AMD ACML, OpenMPI 1.6.4, OFED 3.5, CUDA 5.0
Network Capacity: 1 Gbps external / 20 Gbps internal
DCSC System Administrator: Torben Madsen; +45 65503862
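The combined CPU + GPU peak quoted for Horseshoe7 can be broken down the same way; the GPU contribution is simply the number of accelerators times the per-card peak. A minimal sketch (hypothetical illustration; the per-card values of 515 GFLOPS double precision and 1030 GFLOPS single precision for the Tesla C2070/C2075, and the 4 double-precision FLOPs per cycle for the Westmere X5670, are assumptions rather than values from the source):

    # Hypothetical sketch: reproduce Horseshoe7's quoted CPU and GPU peaks.
    nodes = 12

    # CPU part: 2 x six-core X5670 @ 2.93 GHz, assumed 4 DP FLOPs/cycle (SSE).
    cpu_peak = nodes * 2 * 6 * 2.93e9 * 4
    print(f"CPU: {cpu_peak / 1e12:.1f} TFLOPS")    # -> 1.7 TFLOPS

    # GPU part: 2 x Tesla C2070/C2075 per node, assumed vendor peak per card.
    gpu_dp = nodes * 2 * 515e9     # double precision per card
    gpu_sp = nodes * 2 * 1030e9    # single precision per card
    print(f"GPU DP: {gpu_dp / 1e12:.1f} TFLOPS")   # -> 12.4 TFLOPS
    print(f"GPU SP: {gpu_sp / 1e12:.0f} TFLOPS")   # -> 25 TFLOPS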

10 DCSC/SDU: Horseshoe6
Computer type and Vendor (Cluster/SMP): Cluster of 264 IBM iDataPlex dx360 M2 nodes.
Date for first day of operations (year; month): 200x; YY
Theoretical peak performance: 44.7 TFLOPS
Operating System: Linux, CentOS 5.3
Computer interconnect(s): QDR InfiniBand (40 Gbps), Gigabit
Processor type and speed: 2 x Intel X5550 quad-core Nehalem 2.66 GHz CPUs
Memory (per processor): 240 nodes with 3 GB/core; 24 nodes with 6 GB/core
Storage Capacity: 168 TB raw capacity
Queuing system: Torque 2.4.2 / MAUI 3.2.6
Software: GCC 4.4.x, Intel C/C++ and Fortran 77/9x, Intel MKL, OpenMPI 1.3.2, OFED 1.4.2
Network Capacity: 1 Gbps external / 40 Gbps internal
DCSC System Administrator: Torben Madsen; +45 65503862

11 DCSC/SDU: Horseshoe5 (retired from DCSC, but still active)
Computer type and Vendor (Cluster/SMP): Cluster of 72 IBM System x3550 nodes
Date for first day of operations (year; month): 2009; 03
Theoretical peak performance: 5.8 TFLOPS
Operating System: Linux, CentOS 5.2
Computer interconnect(s): DDR InfiniBand (20 Gbps), Gigabit
Processor type and speed: 2 x Intel L5450 quad-core Penryn low-power 2.5 GHz CPUs
Memory (per core): 2 GB
Storage Capacity: 33 TB raw capacity distributed on the nodes.
Queuing system: Torque 2.3.6 / MAUI 3.2.6
Software: GCC 4.1.2, OpenMPI 1.2.5, OFED 1.3
Network Capacity: 1 Gbps external / 20 Gbps internal
DCSC System Administrator: Torben Madsen; +45 65503862; tm@nat.dk; Erik Madsen; +45 65502399; erikm@sdu.dk

12 DCSC/AAU: Fyrkat (GTX 580)
Computer type and Vendor (Cluster/SMP): 2 x Supermicro X8DTG-QF (16 cores, 39 GB RAM, 300 GB)
Date for first day of operations (year; month): 200x; YY
Peak performance: not calculated
Operating System: Ubuntu 12.04 (Precise)
Computer interconnect(s): InfiniBand (10 Gbps), Gigabit
Processor type and speed: 2 x Intel Xeon X5570 @ 2.93 GHz (HT enabled)
GPU type: 6 x Nvidia GTX 580 (3 per node)
Memory (per processor): 4.8 GiB
Storage Capacity: 45 TiB
Queuing system: Slurm 2.5.3
Software: icc 10.1.018; ifort 10.1.018; GCC 4.6.3; MATLAB R2013a; CST 2013; MUST
Network Capacity: Gigabit internal/external
DCSC System Administrator: Mads Boye; +45 9940 3453; mb@its.aau.dk; Helge Willum Larsen; +45 9940 3458; hwl@its.aau.dk