Bilag 1. DeIC (DCSC) Scientific Computing Installations
DeIC Sekretariat /RB
DeIC, previously DCSC, currently has a number of scientific computing installations, distributed at five regional operating centres. These are located at the Technical University of Denmark (DCSC/DTU), University of Copenhagen (DCSC/KU), University of Southern Denmark (DCSC/SDU), Aarhus University (DCSC/AU) and Aalborg University (DCSC/AAU). Details on each installation are provided below:

1 DCSC/AU: HUGE
Computer type and Vendor (Cluster/SMP): Four IBM POWER6/7 SMP systems joined in a cluster environment.
Date for first day of operations (year; month): January 2008
Theoretical peak performance: ca. 1.4 TFLOPS
Operating system: AIX 5.3
Computer interconnect(s): 1 Gbps interconnect.
Processor type and speed: 1. node: 8 dual-core 4.7 GHz IBM POWER6; 2. node: 6 dual-core 4.7 GHz IBM POWER6; 3.-4. node: 32-core 3.3 GHz IBM POWER7 (installed Dec. 2010, ca. 0.9 TFLOPS).
Memory (per processor): 1. node: 512 GB shared memory; 2. node: 48 GB shared memory; 3.-4. node: 128 GB each.
Storage capacity: 1.1 TB scratch file system on each node; ca. 20 TB user file system.
Queuing system: Torque and Maui
Backup: TSM, client to the central AU backup system.
Software: Gaussian 09; IBM compilers and ESSL (IBM math library); POE (IBM's MPI)
Network capacity: Gigabit Ethernet connections; 100 Mbps firewall.
Miscellaneous technical information: User file system shared with Grendel.
DCSC system administrator: Niels Carl Hansen; [email protected]
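HUGE differs from the cluster installations below in that each of its nodes is a large shared-memory SMP machine, so a single job can address up to 512 GB of RAM from all cores of a node and is typically parallelized with threads rather than message passing. The following is only an illustrative sketch, not taken from the installation's documentation; it assumes the usual OpenMP options of the IBM XL compilers on AIX (-qsmp=omp) or of GCC (-fopenmp).

```c
/* Illustrative shared-memory (OpenMP) sketch for an SMP node like HUGE's.
 * Assumed compile commands (generic, not site-specific documentation):
 *   IBM XL on AIX:  xlc_r -qsmp=omp axpy.c -o axpy
 *   GCC:            gcc -fopenmp axpy.c -o axpy
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n = 1L << 26;          /* ~64M doubles, small relative to 48-512 GB of RAM */
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    if (x == NULL || y == NULL)
        return 1;

    /* All threads share the same address space; no message passing is involved. */
    #pragma omp parallel for
    for (long i = 0; i < n; i++) {
        x[i] = 1.0;
        y[i] = 2.0;
    }

    #pragma omp parallel for
    for (long i = 0; i < n; i++)
        y[i] += 3.14 * x[i];          /* axpy-style update spread across the cores */

    printf("max threads: %d, y[0] = %.2f\n", omp_get_max_threads(), y[0]);
    free(x);
    free(y);
    return 0;
}
```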
2 DCSC/AU: Grendel
Computer type and Vendor (Cluster/SMP): Linux cluster from SUN, Dell, Supermicro, HP and IBM.
Date for first day of operations (year; month): August 2007
Theoretical peak performance: 104 TFLOPS (CPU) / 37 TFLOPS (GPU, single/double precision)
Operating system: CentOS 5.x / 6.x x86_64 Linux
Computer interconnect(s): Gigabit, Infiniband SDR and QDR.
Total nodes / total cores: 850 / 8540
Processor types, speed and memory:
  Dell/AMD 2218, 2x4 2.6 GHz / 8 GB, 85 nodes from Aug 2007
  SUN/AMD 2356, 2x8 2.3 GHz / 16 GB, 230 nodes from Jan 2008
  HP/Intel X5550, 2x8 2.6 GHz / 24 GB, 196 nodes from Jan 2009
  Supermicro/Intel 2x8 2.6 GHz / 24 GB, 18 nodes from Jan 2009 (with 36 Nvidia 1060 GPUs)
  HP/Intel X5650 2x6 2.6 GHz / 48 GB, 165 nodes from Apr 2011
  HP/AMD x GHz / 96 GB, 9 nodes from Apr 2011
  HP/Intel X5650 2x6 2.6 GHz / 48 GB, 40 nodes from Apr 2011 (with 40 Nvidia 1060 and 60 Nvidia 2050 GPUs)
  IBM/Intel E x8 2.6 GHz / 64 GB, 110 nodes from Oct 2012
Storage capacity: 175 TB user file system; between GB local scratch on each compute node.
Queuing system: Torque and Maui
Backup: TSM, client to the central AU backup system.
Software: Gaussian 09; WASP; Intel compilers and MKL (Intel math library); Portland Group compilers; ACML (AMD math library); OpenMPI
Network capacity: Gigabit; 40 Gbps QDR Infiniband; 100 Mbps firewall.
DCSC system administrator: Niels Carl Hansen; [email protected]
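Grendel and most of the clusters below provide OpenMPI on top of the GNU, Intel and Portland Group compilers for distributed-memory jobs over Gigabit or Infiniband. As a hedged sketch only (the standard OpenMPI wrapper and launcher are assumed; site-specific module names and queue settings are not covered by this appendix), a minimal MPI program and a typical build/run cycle look like this:

```c
/* Minimal MPI example; an illustrative sketch, not site documentation.
 * Typical OpenMPI build/run commands (assumed generic):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -np 16 ./hello_mpi
 * Under Torque/Maui the process count would normally match the
 * node/core request of the batch job.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0, name_len = 0;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes */
    MPI_Get_processor_name(name, &name_len);  /* host this rank runs on */

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```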
3 DCSC/DTU: NIFLHEIM
Computer type and Vendor (Cluster/SMP): Linux cluster from HP and IBM.
Date for first day of operations (year; month): September 2012 (2006).
Peak performance: 60 TFLOPS.
Operating system: CentOS Linux.
Computer interconnect(s): QDR Infiniband, SDR Infiniband, dual-gigabit.
Processor type and speed:
  Dual-processor 8-core Intel Xeon E GHz (76 nodes from 2012).
  Dual-processor quad-core Intel Xeon X GHz (116 nodes from 2010).
  Dual-processor quad-core Intel Xeon X GHz (412 nodes from 2009).
  Dual-processor dual-core AMD Opteron GHz (162 nodes from 2007).
  Dual-processor dual-core AMD Opteron GHz (48 nodes from 2006).
Memory (per processor): 4 or 8 GB RAM per CPU core in the 8-core Intel Xeon nodes; 3 GB RAM per CPU core in the quad-core Intel Xeon nodes; 2 or 6 GB RAM per CPU core in the AMD Opteron nodes.
Storage capacity: Central data storage of 113 TB on 4 Linux NFS file servers; 11 TB on an older IBM/NetApp storage system.
Queuing system: TORQUE resource manager with MAUI job scheduler.
Backup: Daily backup of NFS servers to 140 TB disk storage.
Software: Scientific software packages installed by user groups; commercial compilers: Intel; math libraries: AMD ACML, Intel MKL; communication: OpenMPI.
Network capacity: QDR Infiniband on the 8-core Intel nodes; SDR Infiniband on 24 Opteron nodes; dual-gigabit on all other nodes; 1 Gbit/s external network to DTU and the Danish Research Network.
DCSC system administrator: Ole Holm Nielsen; [email protected]

4 DCSC/DTU: SUNHPC

5 DCSC/DTU: CBS

6 DCSC/DTU: Alfheim

7 DCSC/KU: Steno

8 DCSC/SDU: Horseshoe8
Computer type and Vendor (Cluster/SMP): Cluster of 27 Dell C6220 nodes
Date for first day of operations (year; month): 2012; 11
Theoretical peak performance: 8.3 TFLOPS
Operating system: Linux, CentOS 6.3
Computer interconnect(s): Gigabit
Processor type and speed: 2 x Intel E octo-core Sandy Bridge 2.4 GHz CPUs
Memory (per processor): 6 nodes with 8 GB/core, 21 nodes with 4 GB/core
Storage capacity: 32 TB raw capacity
Queuing system: Torque / MAUI
Software: GCC 4.4.6, Intel C/C++ and Fortran 77/9x, Intel MKL, AMD ACML, OpenMPI
Network capacity: 1 Gbps
DCSC system administrator: Torben Madsen
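For orientation, the theoretical peak figures quoted throughout this appendix are essentially core count multiplied by clock frequency and by the number of double-precision floating-point operations a core can retire per cycle. Taking Horseshoe8 as a worked example, and assuming its Sandy Bridge cores deliver 8 double-precision FLOPs per cycle with AVX (an assumption about the CPU generation, not a figure stated above):

27 nodes × 2 CPUs × 8 cores × 2.4 GHz × 8 FLOPs/cycle ≈ 8.3 TFLOPS,

which reproduces the value listed for that cluster.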
9 DCSC/SDU: Horseshoe7
Computer type and Vendor (Cluster/SMP): Cluster of 12 Fujitsu CELSIUS R670-2 GPU-enabled nodes.
Date for first day of operations (year; month): 2011; 09
Theoretical peak performance: 1.7 TFLOPS (CPU) + 12.4 TFLOPS DP / 25 TFLOPS SP (GPU)
Operating system: Linux, CentOS 6.3
Computer interconnect(s): DDR Infiniband (20 Gbps), Gigabit
Processor type and speed: 2 x Intel X5670 hex-core Westmere 2.93 GHz CPUs and 2 x NVIDIA C2070/C2075
Memory (per processor): 2 GB/core
Storage capacity: 48 TB raw capacity
Queuing system: Torque / MAUI
Software: GCC 4.4.6, Intel C/C++ and Fortran 77/9x, Intel MKL, AMD ACML, OpenMPI 1.6.4, OFED 3.5, CUDA 5.0
Network capacity: 1 Gbps external / 20 Gbps internal
DCSC system administrator: Torben Madsen

10 DCSC/SDU: Horseshoe6
Computer type and Vendor (Cluster/SMP): Cluster of 264 IBM iDataPlex dx360 M2 nodes.
Date for first day of operations (year; month): 200x; YY
Theoretical peak performance: 44.7 TFLOPS
Operating system: Linux, CentOS 5.3
Computer interconnect(s): QDR Infiniband (40 Gbps), Gigabit
Processor type and speed: 2 x Intel X5550 quad-core Nehalem 2.66 GHz CPUs
Memory (per processor): 240 nodes with 3 GB/core; 24 nodes with 6 GB/core
Storage capacity: 168 TB raw capacity
Queuing system: Torque / MAUI
Software: GCC 4.4.x, Intel C/C++ and Fortran 77/9x, Intel MKL, OpenMPI 1.3.2, OFED
Network capacity: 1 Gbps external / 40 Gbps internal
DCSC system administrator: Torben Madsen
11 DCSC/SDU: Horseshoe5 (retired from DCSC, but still active)
Computer type and Vendor (Cluster/SMP): Cluster, 72 x IBM System x3550
Date for first day of operations (year; month): 2009; 03
Theoretical peak performance: 5.8 TFLOPS
Operating system: Linux, CentOS 5.2
Computer interconnect(s): DDR Infiniband (20 Gbps), Gigabit
Processor type and speed: 2 x Intel L5450 quad-core Penryn low-power 2.5 GHz CPUs
Memory (per core): 2 GB
Storage capacity: 33 TB raw capacity distributed on the nodes.
Queuing system: Torque / MAUI
Software: GCC 4.1.2, OpenMPI 1.2.5, OFED 1.3
Network capacity: 1 Gbps external / 20 Gbps internal
DCSC system administrators: Torben Madsen; [email protected]; Erik Madsen; [email protected]

12 DCSC/AAU: Fyrkat (GTX 580)
Computer type and Vendor (Cluster/SMP): 2 nodes (16 cores), Supermicro X8DTG-QF, 39 GB RAM, 300 GB
Date for first day of operations (year; month): 200x; YY
Peak performance: not calculated
Operating system: Ubuntu (Precise)
Computer interconnect(s): Infiniband (10 Gbps), Gigabit
Processor type and speed: 2 x Intel Xeon 2.93 GHz (HT enabled)
GPU type: 6 x Nvidia Tesla GTX 580 (3 per node)
Memory (per processor): 4.8 GiB
Storage capacity: 45 TiB
Queuing system: Slurm
Software: icc; ifort; GCC 4.6.3; MATLAB R2013a; CST 2013; MUST
Network capacity: Gigabit internal/external
DCSC system administrators: Mads Boye; [email protected]; Helge Willum Larsen; [email protected]
