1 Member of the Helmholtz Association
Welcome to the Jülich Supercomputing Centre
D. Rohe and N. Attig, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich
2 Schedule: Monday, May 19
13:00-13:30 Welcome and Introduction of JSC (Norbert Attig, Daniel Rohe, JSC)
13:30-15:00 JUROPA/HPC-FF: An Overview (U. Detert, C. Paschoulas, T. Kondylis, JSC)
15:00-15:30 Break
15:30-16:15 JUROPA: Tuning for the Platform, Part I (Peter Niessen, ParTec)
16:15-17:30 JUROPA: Tuning for the Platform, Part II (Heinrich Bockhorst, Intel)
17:39 Bus SB11 from Seecasino to Rurtalbahn, Jülich
19 May 2014, Supercomputing Resources in Jülich: Welcome and Introduction of JSC
3 Schedule: Tuesday, May 20 (morning)
08:30-09:45 HPC Software: Compiler and Tools (Bernd Mohr, JSC)
09:45-10:00 Break
10:00-10:30 HPC Software: Math Libraries & Application Software (Inge Gutheil, JSC)
10:30-11:00 Postprocessing/Visualization Cluster JUVIS (Herwig Zilken, JSC)
11:00-11:45 Uniform Resource Access at JSC: UNICORE (M. Rambadt, JSC), PTP (C. Karbach, JSC)
11:45-12:45 Lunch break
4 Schedule: Tuesday, May 20 (afternoon)
12:45-14:00 JUQUEEN: An Overview of Blue Gene/Q (Jutta Docter, JSC)
14:00-14:15 Break
14:15-15:15 JUQUEEN: Best Practices on Blue Gene/Q (Florian Janetzko, JSC)
15:15-15:30 Break
15:30-16:30 Tuning for the Platform Blue Gene/Q (Christoph Pospiech, IBM)
16:30 End of Day 2
16:36 Bus SB11 from Seecasino to Rurtalbahn
16:47 Bus SB11 from Seecasino to Aachen/Jülich
5 Organisational Information
List of participants
Slides of all talks are available after the course at
WLAN access: Eduroam, or MAC address needed (list)
6 Jülich Supercomputing Centre: Introduction
N. Attig, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich
7 Jülich Supercomputing Centre
Supercomputer operation for: the centre (FZJ), the region (JARA), Helmholtz & national users (NIC, GCS), Europe (PRACE, EU projects)
Application support: traditional and SimLab support model, scientific visualization, peer-review support and coordination
R&D work: methods and algorithms, performance analysis and tools, community data management services, computer architectures, Exascale Laboratories (EIC, ECL, NVIDIA)
Education and training
8 Access to Supercomputing Resources at Jülich
Traditional access to JUROPA via the John von Neumann Institute for Computing (NIC), or via the Kommission zur Vergabe von SC-Ressourcen (VSR; for FZJ staff members only)
Access to JUQUEEN via the JARA-HPC Vergabegremium (VGG) and/or the VSR (for FZJ and RWTH staff members only; JARA-HPC partition), or via the Gauss Centre for Supercomputing (GCS; JUQUEEN CPU-time proposals are evaluated by NIC)
European RI PRACE, Project Access: biannual calls for proposals since June 2010; call for preparatory access open, no closing dates
9 Gauss Centre for Supercomputing (GCS)
Alliance of the three German national supercomputing centres: Jülich Supercomputing Centre (JSC), Leibniz-Rechenzentrum (LRZ) der Bayerischen Akademie der Wissenschaften, Höchstleistungsrechenzentrum Stuttgart (HLRS)
Support of computational science through multi-petaflop/s supercomputers, multi-petabyte storage, and multi-gigabit networking infrastructure
Large-scale projects: Gauss projects, Call for Proposals
German representative in PRACE
10 PRACE: Partnership for Advanced Computing in Europe
Consists of 24 European partner states, each represented by one institution
Prepares the creation of a persistent, sustainable pan-European HPC service
Prepares the establishment of three to five Tier-0 supercomputing centres at different European sites
Defines and establishes a legal and organisational structure involving HPC centres, national funding agencies, and scientific user communities
Develops funding and usage models and establishes a peer-review process
Provides training for European scientists and creates a permanent education programme
11 Supercomputer Systems: Dual Concept
General-purpose cluster line: IBM Power 4+ JUMP (9 TFlop/s), IBM Power 6 JUMP (9 TFlop/s), JUROPA (200 TFlop/s) + HPC-FF (100 TFlop/s), JUROPA successor (~2 PFlop/s) + Booster (~10 PFlop/s)
Highly scalable line: IBM Blue Gene/L JUBL (45 TFlop/s), IBM Blue Gene/P JUGENE (1 PFlop/s), IBM Blue Gene/Q JUQUEEN (5.9 PFlop/s), JUQUEEN successor (~50 PFlop/s)
File servers: Lustre and GPFS
12 JUQUEEN: Jülich's Scalable Petaflop System
IBM Blue Gene/Q JUQUEEN
IBM PowerPC A2, 1.6 GHz, 16 cores per node
28 racks, 458,752 cores
5.9 Petaflop/s peak, 5.0 Petaflop/s Linpack
448 TByte main memory
Connected to a General Parallel File System (GPFS) with 6 PByte online disk and up to 25 PByte offline tape capacity
5D network
Production start: Nov 5
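The headline figures on this slide are internally consistent and can be re-derived from the per-node numbers. A minimal sketch, assuming the standard Blue Gene/Q packaging of 1,024 nodes per rack, 16 GB of memory per node, and 8 double-precision flops per core per cycle (the quad FPU with fused multiply-add), which are not stated on the slide itself:

```python
# Re-derive JUQUEEN's headline figures from per-node Blue Gene/Q specs.
# Assumed (not on the slide): 1,024 nodes/rack, 16 GB/node,
# 8 DP flops per core per cycle at 1.6 GHz.
racks = 28
nodes = racks * 1024                      # 28,672 nodes
cores = nodes * 16                        # 458,752 cores, as on the slide
peak_pflops = cores * 1.6e9 * 8 / 1e15    # ~5.9 PFlop/s peak
memory_tib = nodes * 16 / 1024            # 448 TiB main memory
linpack_efficiency = 5.0 / 5.9            # ~85% of peak sustained on Linpack
print(cores, round(peak_pflops, 1), memory_tib)  # 458752 5.9 448.0
```

The ~85% Linpack-to-peak ratio is typical for Blue Gene/Q installations of this era.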
13 JUROPA: Jülich's General-Purpose Supercomputer
JUROPA, an Intel-based cluster
2 Intel Nehalem quad-core processors per node, 2.93 GHz, SMT
3,288 compute nodes, 26,304 cores
308 Teraflop/s peak, 275 Teraflop/s Linpack
79 TByte memory
Mellanox InfiniBand QDR with non-blocking fat-tree topology
1.8 PByte disk storage on a Lustre file system, connected to GPFS
JUROPA designed in co-development with HPC companies
JUROPA serves as a prototype for HPC-FF, U. of Cambridge, CHPC (Cape Town), PRACE
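JUROPA's figures follow the same arithmetic. A sketch under two assumptions not stated on the slide: 24 GB of memory per node, and 4 double-precision flops per core per cycle (Nehalem's SSE add plus multiply per cycle):

```python
# Re-derive JUROPA's headline figures from per-node specs.
# Assumed (not on the slide): 24 GB/node, 4 DP flops per core
# per cycle at 2.93 GHz.
nodes = 3288
cores = nodes * 2 * 4                     # 26,304 cores, as on the slide
peak_tflops = cores * 2.93e9 * 4 / 1e12   # ~308 TFlop/s peak
memory_tb = nodes * 24 / 1000             # ~79 TB main memory
print(cores, round(peak_tflops), round(memory_tb))  # 26304 308 79
```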
14 Use by Compute Time Shares
[Bar chart: shares of compute time on JUQUEEN and JUROPA by user group: FZJ obligations, FZJ projects, JARA-HPC (regional), NIC (Germany), GCS/NIC (Germany), PRACE (Europe)]
80% of the available time is being granted!
15 Research Fields of Current National Projects
Leadership-class system JUQUEEN: ~100 projects
General-purpose supercomputer JUROPA: ~160 projects
Granting periods 05/ / / /2014
Fields: Astrophysics, Biophysics, Chemistry, Earth & Environment, Plasma Physics, Soft Matter, Fluid Dynamics, Elementary Particle Physics, Computer Science, Condensed Matter, Material Science
16 National and European User Groups
Proposals for computer time are accepted from Germany and Europe
Peer review by international referees
CPU time is granted by independent scientific councils
17 JUQUEEN Usage
18 JUROPA Usage
19 Domain-specific User Support and Research
20 Summary
The Jülich Supercomputing Centre provides world-class supercomputers and high-end primary and domain-specific user support to German and European research groups working in the computational sciences and in engineering.
JSC expects to see breakthrough science from parallel applications that use a substantial number of processors simultaneously.
21 End of Presentation