Access to the Federal High-Performance Computing-Centers
Access to the Federal High-Performance Computing-Centers
University of Stuttgart
High-Performance Computing-Center Stuttgart (HLRS)
Slide 1 / 12

TOP 500 Nov. List, German Sites, 1st part (categories: federal / thematic / regional)
Columns: Rank, Installation Site, Location, Rmax (GFlop/s, % of peak), Peak (GFlop/s), Manufacturer / Computer / Procs, Year of Installation
- Max-Planck-Gesellschaft MPI/IPP, Garching: Rmax 2010 (=51%), 1.3 GHz / 752 procs
- Leibniz Rechenzentrum, München: Rmax 1653 (=82%), Hitachi SR8000-F1/168 / 168, year 2000
- Deutscher Wetterdienst, Offenbach: Rmax 1293 (=67%), peak 1920, IBM SP Power3 375 MHz 16-way
- HLRN at Universitaet Hannover / RRZN, Hannover: Rmax 1038 (=52%), 1.3 GHz
- HLRN at ZIB / Konrad-Zuse-Zentrum fuer Informationstechnik, Berlin: Rmax 1038 (=52%), 1.3 GHz
- DKRZ - Deutsches Klimarechenzentrum, Hamburg: Rmax 982 (=96%), peak 1024, NEC SX-6/128M16
- Universitaet Heidelberg - IWR, Heidelberg: Rmax 825 (=58%), peak 1430, Megware HELICS AMD 1.4 GHz, Myrinet
- Deutscher Wetterdienst, Offenbach: Rmax 671 (=69%), peak 974, Cray Inc. T3E1200
- Forschungszentrum Juelich, Jülich: Rmax 447 (=69%), Cray Inc. T3E1200 / 540 procs
- Universitaet Aachen / RWTH, Aachen: Rmax 357 (=69%), SunFire 15K
Slide 2 / 12
TOP 500 Nov. List, German Sites, 2nd part (categories: federal / thematic / regional)
Columns: Rank, Installation Site, Location, Rmax (GFlop/s, % of peak), Peak (GFlop/s), Manufacturer / Computer / Procs, Year of Installation
- Max-Planck-Gesellschaft MPI/IPP, Garching: Rmax 355 (=73%), Cray Inc. T3E / 812 procs
- HWW / Universitaet Stuttgart, Stuttgart: Rmax 341 (=70%), Cray Inc. T3E900 / 540 procs
- ZIB / Konrad-Zuse-Zentrum fuer Informationstechnik, Berlin: Rmax 341 (=70%), peak 486, Cray Inc. T3E900
- Theoretical Chemistry, Ruhr-University Bochum, Bochum: (=66%), Megware AMD Athlon MP, SCI 2D-torus
- Universitaet Darmstadt, Darmstadt: (=47%), ... GHz, GigEth
- Forschungszentrum Juelich, Jülich: Rmax 234 (=72%), peak 324, Cray Inc. T3E600
- GWDG, Göttingen: Rmax 226 (=67%), peak 336, IBM SP Power3 375 MHz
- Technische Universitaet Chemnitz, Chemnitz: (=52%), self-made CLIC PIII 800 MHz
- GWDG, Göttingen: 373
- Universitaet Aachen / RWTH, Aachen: Rmax 198 (=45%), IBM pSeries ... GHz, GigEth / 96 procs
- Universitaet Aachen / RWTH, Aachen: SunFire 15K
- Universitaet Aachen / RWTH, Aachen: SunFire 15K / 144 procs
Slide 3 / 12

High Performance Computing in Germany
Federal: HLRS/HWW Stuttgart, NIC Jülich, LRZ/HLRB Munich
Thematic (federal): RZ Garching, DKRZ/HLRE Hamburg, DWD Offenbach
Regional: ZIB Berlin, RRZN Hannover
See also ... and ...
Slide 4 / 12
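The "=NN%" figures in the two tables above are Linpack efficiencies, i.e. the measured Rmax divided by the theoretical peak performance. A minimal C sketch of that calculation, using only values taken from the table (everything else is illustrative):

    #include <stdio.h>

    /* Linpack efficiency as reported in the TOP500 tables above:
     * efficiency = Rmax (measured) / Rpeak (theoretical peak), in percent.
     * Values in GFlop/s are taken from the slides; rounding explains
     * small deviations from the printed percentages. */
    static double efficiency(double rmax_gflops, double rpeak_gflops)
    {
        return 100.0 * rmax_gflops / rpeak_gflops;
    }

    int main(void)
    {
        printf("DKRZ NEC SX-6:      %.0f %%\n", efficiency(982.0, 1024.0));  /* slide: =96% */
        printf("DWD IBM SP Power3:  %.0f %%\n", efficiency(1293.0, 1920.0)); /* slide: =67% */
        printf("IWR HELICS cluster: %.0f %%\n", efficiency(825.0, 1430.0));  /* slide: =58% */
        return 0;
    }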
Architectures
- Parallel Vector Processor (PVP) & Symmetric Multi Processor (SMP): NEC SX-6, SX-5; Cray SV1, X1
  (diagram: CPUs connected through a memory interconnect to a shared memory)
- MPP, massively parallel processor: Cray T3E-1200, 900, 600; Linux cluster
  (diagram: nodes, each with its own CPUs and memory, coupled by a node interconnect)
- Hierarchical: cluster of SMP nodes: Hitachi SR8000, IBM SP, NEC SX cluster, Linux cluster
  (diagram: a node interconnect couples the nodes; inside each node, CPUs reach a shared memory through a memory interconnect)
Slide 5 / 12

Vector Processing
- some platforms perform vector processing
- vector processing is mainly a question of memory bandwidth:
  - how many loads/stores per floating point operation?
  - for which memory access patterns does the memory interconnect deliver the data?
  - does the compiler support vector access for this pattern?
  - does the hardware support vector access for this pattern?
Slide 6 / 12
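To make the slide's "loads/stores per floating point operation" point concrete, here is a small, hedged C sketch (not part of the original slides): a stride-1 triad loop that a vectorizing compiler can turn into vector loads and stores, next to a strided variant of the same arithmetic that wastes memory bandwidth.

    #include <stddef.h>

    /* Stride-1 triad: 2 loads + 1 store per 2 flops (multiply + add).
     * Contiguous access lets the compiler generate vector loads/stores,
     * so performance is limited mainly by memory bandwidth. */
    void triad_contiguous(size_t n, double *a, const double *b,
                          const double *c, double s)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = b[i] + s * c[i];
    }

    /* Same arithmetic, but strided access: the loads and stores touch
     * memory banks and cache lines sparsely, so the effective bandwidth
     * per flop drops and the loop no longer runs at vector speed. */
    void triad_strided(size_t n, size_t stride, double *a, const double *b,
                       const double *c, double s)
    {
        for (size_t i = 0; i < n; i++)
            a[i * stride] = b[i * stride] + s * c[i * stride];
    }

On a vector machine such as the NEC SX or the pseudo-vectorizing SR8000, the first loop would typically run close to the memory bandwidth limit, while the second is held back by the sparse access pattern.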
HLRS Stuttgart: hww
hww configuration (2003), diagram: NEC SX-4/40 H2, NEC SX-5/32 M2e, Cray SV1/20, fileserver, hp V2250, SGI Origin 2000, Hitachi SR8000, Cray T3E-900/512, attached to research networks and debis networks; the machines are spread over the Vaihingen and Untertürkheim sites, which are coupled by a 2 GBit/s link over a distance of about 20 km.
Slide 7 / 12

HLRS at Stuttgart
High-Performance Computing-Center Stuttgart (HLRS) at University of Stuttgart
Platforms:
- NEC SX-5/32 (128 GFlop/s peak)
- NEC SX-6/48 (550 GFlop/s peak), vector system, available ~ April 2004
- next-generation NEC SX? (~12 TFlop/s peak), vector system, available ~ early 2005
- the Cray T3E will be substituted by IA32 and IA64 clusters
Online-Proposals: ... > HW&Access > Access > proposal > test account (Schnupperaccount)
Photos: NEC SX-6; NEC SX-5/32 M2e, 2*16 PEs, ... GByte; Cray T3E, ... PEs, ... GByte
Slide 8 / 12
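A back-of-the-envelope sketch for the 2 GBit/s, roughly 20 km link between the Vaihingen and Untertürkheim machine rooms shown on Slide 7. Only the link speed and distance come from the slide; the fibre propagation delay of about 5 us/km is an assumption.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures: only the link speed and distance come from the
         * slide; the propagation delay per km is a typical fibre value. */
        const double link_gbit_s = 2.0;   /* slide: 2 GBit/s */
        const double distance_km = 20.0;  /* slide: ~20 km */
        const double us_per_km   = 5.0;   /* assumption: ~5 us/km in fibre */

        double rtt_s     = 2.0 * distance_km * us_per_km * 1e-6;
        double bdp_bytes = link_gbit_s * 1e9 / 8.0 * rtt_s;

        printf("round trip: %.0f us, bandwidth-delay product: %.0f KB\n",
               rtt_s * 1e6, bdp_bytes / 1e3);
        return 0;
    }

With a bandwidth-delay product of only a few tens of kilobytes, the distance between the two hww sites should not be a problem for bulk file-server traffic.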
NIC at Jülich
John von Neumann-Institut für Computing (NIC) at Forschungszentrum Jülich
Platforms:
- IBM Regatta (30x32 CPUs, Power ... GHz, 128 GB/node, Federation Switch with 4 adapters per 32-CPU SMP node, since Feb 11, 2004); extension to 41 nodes: March 2004, 8.9 TFlop/s
- Cray T3E (will be switched off in summer 2004)
- Cray SV1ex
Online-Proposals: ... > Computing Time
Deadlines: April 30 for accounts valid July to June (of the next year); Oct. 31 for accounts valid Jan. to Dec. (of the next year)
Slide 9 / 12

LRZ / HLRB at Munich: Hitachi SR8000
System: 168 nodes, 2.02 TFLOP/s peak, 1.65 TFLOP/s Linpack, 1.3 TB memory
Node: 8 CPUs, 12 GFLOP/s, 8 GB, SMP, pseudo-vector, external bandwidth 950 MB/s
CPU: 1.5 GFLOP/s, 375 MHz, 4 GB/s memory bandwidth
Installed: Q1 2000 at LRZ (112 nodes)
Slide 10 / 12
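The SR8000 numbers on Slide 10 are internally consistent; the following short C sketch (values from the slide, the code itself is only illustrative) reproduces the node and system peak and the memory bandwidth per flop.

    #include <stdio.h>

    int main(void)
    {
        /* Hitachi SR8000-F1 figures from the slide */
        const double cpu_gflops    = 1.5;   /* 375 MHz, i.e. 4 flops per cycle */
        const int    cpus_per_node = 8;
        const int    nodes         = 168;
        const double mem_bw_gb_s   = 4.0;   /* per CPU */

        double node_gflops    = cpu_gflops * cpus_per_node;   /* 12 GFlop/s */
        double peak_tflops    = node_gflops * nodes / 1000.0; /* ~2.02 TFlop/s */
        double bytes_per_flop = mem_bw_gb_s / cpu_gflops;     /* ~2.7 B/flop */

        printf("node peak: %.1f GFlop/s, system peak: %.2f TFlop/s\n",
               node_gflops, peak_tflops);
        printf("memory bandwidth per flop: %.2f bytes/flop\n", bytes_per_flop);
        return 0;
    }

Roughly 2.7 bytes of memory bandwidth per flop is a comparatively high ratio, which is what the pseudo-vector approach on this machine relies on.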
Hitachi SR8000-F1/112 (Rank 5 in TOP 500, June 2000)
Höchstleistungsrechner in Bayern (HLRB) at Leibniz-Rechenzentrum (LRZ)
Platform: Hitachi SR8000-F1 (168 nodes, each with 8 processors)
Online-Proposals: ... > Online Project Proposal Submission
Slide 11 / 12

High Performance Computing in Germany: Summary
Federal: HLRS/HWW Stuttgart, NIC Jülich, LRZ/HLRB Munich
Thematic (federal): RZ Garching, DKRZ Hamburg, DWD Offenbach
Regional: ZIB Berlin, RRZN Hannover
See also: ... > HW&Access > Access; ... nic/nic-e.html; ... compute/hlrb/
Slide 12 / 12