LANL Computing Environment for PSAAP Partners
1 LANL Computing Environment for PSAAP Partners
Robert Cunningham, HPC Systems Group (HPC-3), July 2011
2 LANL Resources Available to Alliance Users
- Mapache is new and has a Lobo-like allocation: Linux (TOSS) cluster, Moab scheduler, shared /scratchn; 4,736 Xeons with InfiniBand, 50.4 TF, ? procs/job
- Conejo: companion to Mapache, small ASC allocation
- Lobo is the current workhorse TLCC platform: Linux (TOSS), Moab scheduler, shared /scratchn; 4,352 compute CPUs, max 2,144 procs/job
- Small Roadrunner hybrid platform available: Cerrillos
- Big future for Turquoise (Mustang, etc.), but not big ASC
3 LANL Mapache Cluster
- SGI XE1300 series with quad-core Intel Xeons
- % allocation for ASC, primarily for PSAAP
- Architecture: 4,736 Intel Xeon cores @ 2.66 GHz on 592 compute nodes = 8 cores/node
- Mellanox InfiniBand interconnect
- 14.2 TB RAM = 24 GB/node
- Theoretical peak of ~50.4 teraflops (see the check below)
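For reference, the quoted peak is consistent with the node and core counts above, assuming the 4 double-precision floating-point operations per core per cycle typical of Xeons of that era (that last factor is my assumption, not the slide's):

    592\ \text{nodes} \times 8\ \tfrac{\text{cores}}{\text{node}} \times 2.66\ \text{GHz} \times 4\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 50.4\ \text{TFLOP/s}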
4 LANL Lobo Cluster
- Standard DOE lab cookie-cutter cluster from Appro International
- 75% allocation for ASC, primarily for PSAAP
- 2 Connected Units (CUs) combined include:
- 4,352 AMD 2.2 GHz cores on compute nodes
- Voltaire InfiniBand interconnect
- 8.7 TB RAM
- Theoretical peak of ~17.4 teraflops
5 LANL Cerrillos Cluster
- Hybrid architecture: Opteron + Cell, from IBM
- 25% allocation for ASC, primarily for PSAAP
- 2 Connected Units (CUs) combined include:
- 1,440 AMD 1.8 GHz cores on compute nodes
- 1,440 Cell Broadband Engines
- Voltaire InfiniBand interconnect
- 11.8 TB RAM
- Theoretical peak of ~152 teraflops
6 LANL TLCC2 Cluster Plans
- No order placed, so this is a prediction only, for one ASC cluster in the Turquoise network: moonlight.lanl.gov
- Hybrid architecture: Opteron+GPGPU, from Appro
- Primarily for PSAAP, but other ASC users will use it
- Expected in ~December
- 308 compute nodes, i.e., 2 SUs
- Intel Sandy Bridge, dual-processor nodes
- NVIDIA M2090 GPUs
- QLogic InfiniBand interconnect
- Theoretical peak of ~1.6 TF/node
7 LANL HPC Environment
- Obtain an account and acquire a cryptocard (foreign nationals: start early!)
- Access HPC platforms in the Open Collaborative (Turquoise) network via the firewall/gateway: ssh wtrw.lanl.gov
- After connecting, ssh into a front-end: lo-fe[1-2], mp-fe1, ce-fe[1-2]
- Moab + SLURM/Torque schedules nodes, batch or interactive: msub scriptname, or msub -I (see the sketch below)
- checkjob, showstate, showq, mjobctl
- mpirun to run parallel jobs across nodes
- Fairshare scheduling delivers the pre-determined allocation
- Unique security model: no connecting (ssh) between platforms
- No Kerberos tickets; keep the cryptocard handy
- scp or sftp using the File Transfer Agents (FTAs), turq-fta1.lanl.gov and turq-fta2.lanl.gov
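A minimal sketch of that login-and-submit workflow, assuming Torque behind Moab; the username, node counts, walltimes, and application name are placeholders, and local queue or account settings may require extra msub options.

    # From a local machine: hop through the Turquoise gateway, then onto a front-end
    $ ssh moniker@wtrw.lanl.gov
    $ ssh mp-fe1

    # job.msub -- illustrative Moab batch script
    #MSUB -N psaap_test
    #MSUB -l nodes=4:ppn=8          # 4 nodes x 8 cores/node
    #MSUB -l walltime=02:00:00
    cd $PBS_O_WORKDIR               # set by Torque; start in the submission directory
    mpirun -np 32 ./my_app          # run across all 32 allocated cores

    # Submit and monitor, or request an interactive session instead
    $ msub job.msub
    $ showq
    $ checkjob 12345                # hypothetical job id
    $ msub -I -l nodes=1:ppn=8,walltime=01:00:00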
8 LANL HPC Environment Usage Model
- Front-end: text editing, job-script set-up, pathname arrangements, compilation, linking, and job launching
- Compute nodes: run applications
- I/O nodes: out to local disk, no direct user access
- Possible intra-network File Transfer Agent (FTA) arrangement in the future
- Modulefiles establish compilers, libraries, and tools in $PATH and $LD_LIBRARY_PATH (see the sketch below)
- Compilers: Intel, PGI, PathScale; MPI: Open MPI, MVAPICH
- Math libraries provided by the compiler vendors, plus ATLAS
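A sketch of the module-and-compile step on a front-end; the module names here are assumptions, so check module avail for the real ones on each cluster.

    $ module avail                            # list what is installed
    $ module load intel openmpi-intel         # hypothetical module names
    $ module list                             # confirm compiler and MPI are loaded
    $ mpicc  -O2 -o my_app  my_app.c          # MPI wrapper around the loaded C compiler
    $ mpif90 -O2 -o my_fapp my_fapp.f90       # Fortran equivalent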
9 LANL Turquoise HPC Storage
- Tiny home directories, not shared between clusters (security)
- Larger NFS-based workspace in /usr/projects/proj_name
- Big parallel, globally accessible filesystem: /scratchn
- Cross-mounted to all HPC nodes in Turquoise
- ~800 TB total space
- Fast for parallel I/O, slower than NFS for serial transfers
- No automated back-ups
- Purged weekly: files 30 days or older! (see the sketch below)
- Archival storage, offline, available via GPFS
- File transfers are an ever-present problem, on the brink of a solution
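A small sketch of moving work through these filesystems; the project name, the per-user directory layout under /scratchn, and the file names are assumptions.

    # Keep inputs and scripts in the project space; run out of scratch
    $ cp -r ~/my_inputs /usr/projects/proj_name/moniker/
    $ mkdir -p /scratchn/moniker/run42 && cd /scratchn/moniker/run42

    # /scratchn is purged weekly: list files 30 days old or older (purge candidates)
    $ find /scratchn/moniker -type f -mtime +29 -ls

    # Copy anything worth keeping back to the project space before it is purged
    $ cp results.h5 /usr/projects/proj_name/moniker/run42-results.h5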
10 Parallel Scalable Back Bone (PaScalBB)
[Diagram: disk-poor and diskless client clusters share a job queue and a scalable object-based file system (OBFS) through file-system gateways on the backbone, alongside an enterprise global file system and an object archive]
- Relieves the compute nodes
- Multiple clusters share a large, global-namespace parallel I/O subcluster
- Includes Cerrillos/Lobo/Coyote
- Network is a combination of the HPC interconnect plus a commodity-networking bridge
- Panasas is the storage vendor
- I/O goes through a set of fileserver nodes over IB; the nodes serve as interconnect<->GigE routers
11 LANL Turquoise File Transfers
- File transfers between Turquoise and the outside world are S L O W; we are addressing this now and may need your help to test
- Throttled by the gateway/firewall and security: ~1 MB/s!
- Packet reordering
- Sniffing
- Only scp allowed today: encryption
- Panasas filesystem (/scratchn) is slow for serial transfers
- All data routed through the tiny wtrw, twice
- Solution currently in testing: double-hop through a security-enclave, GPFS-based way station
- Parallel transfers using bbcp (see the sketch below)
- Orders of magnitude faster: tens of MB/s up to the low hundreds
- Turquoise holds the future for unclassified work; big changes ahead
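A sketch of the two transfer paths described above; the way-station hostname, remote paths, and stream count are assumptions (only the FTA names come from the slides), and bbcp must be installed on both ends.

    # Today: serial scp through a File Transfer Agent (slow, roughly 1 MB/s)
    $ scp results.tar.gz moniker@turq-fta1.lanl.gov:/scratchn/moniker/

    # In testing: bbcp with multiple parallel TCP streams through the GPFS way station
    $ bbcp -s 16 -w 2M results.tar.gz moniker@waystation.lanl.gov:/staging/moniker/
    #      16 streams, 2 MB TCP window; hostname and destination path are hypothetical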
12 Turquoise High Performance File Transfer Service
[Diagram: Turquoise HPC clusters reach outside collaborators either over the existing SSH path through wtrw or over a new GPFS-based transfer service using bbcp/GridFTP with perfSONAR monitoring, passing LANL authentication and the firewall to the LANL Yellow network and collaborators on the Internet]
13 LANL Turquoise Tools
- TAU (Tuning and Analysis Utilities) -- profiling and tracing toolkit
- STAT (Stack Trace Analysis Tool) from LLNL
- Boost C++ utility libraries
- Open|SpeedShop: sampling experiments, callstack analysis, hardware performance counters, MPI profiling and tracing, I/O profiling and tracing, floating-point exception analysis
14 LANL Turquoise Tools
- Javelina -- code coverage tool that uses dynamic instrumentation
- Valgrind -- instrumentation framework for dynamic analysis: memory errors, cache and branch-prediction profiling, thread error detection, heap profiling
- memP -- parallel heap profiling library
- mpiP -- lightweight, scalable MPI profiling (see the sketch below)
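A hedged sketch of how two of these tools are commonly invoked; the mpiP link line in particular is an assumption and varies with the local installation.

    # Memory checking with Valgrind, serially or one instance per MPI rank
    $ valgrind --leak-check=full ./my_app
    $ mpirun -np 4 valgrind --leak-check=full ./my_app

    # mpiP: relink against the profiling library, then run normally
    $ mpicc -g -o my_app my_app.c -lmpiP -lbfd -liberty -lunwind   # typical, site-dependent libraries
    $ mpirun -np 32 ./my_app        # writes a *.mpiP report at exit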
15 LANL Debugging
- gdb (GNU debugger) comes with the distro
- TotalView: interactive, medium-scale parallel debugger
- parallel, independent process views
- ThreadSpotter
- MemoryScape
- ReplayEngine
- GUI or command line (Tcl) (see the sketch below)
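A brief sketch of the usual starting points; the core-file name and process count are placeholders, and the exact TotalView launch syntax depends on the installed version and MPI.

    # Post-mortem with gdb: build with -g, then inspect a core file
    $ mpicc -g -O0 -o my_app my_app.c
    $ gdb ./my_app core.12345       # hypothetical core-file name
    (gdb) bt                        # backtrace at the point of failure

    # TotalView on a parallel job (classic launch form; may differ locally)
    $ totalview mpirun -a -np 16 ./my_app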
16 3 Useful LANL Web Sites for Users
- User Docs (or call, option 3)
- Calendar
- HPC Training
17 HPC Accounts
- Don't forget the photo op!
18 Questions?
