The Top Six Advantages of CUDA-Ready Clusters. Ian Lumb, Bright Evangelist
1 The Top Six Advantages of CUDA-Ready Clusters. Ian Lumb, Bright Evangelist. GTC Express Webinar, January 21, 2015
2 "We scientists are time-constrained," said Dr. Yamanaka. "Our priority is our research, not managing our clusters. Bright [Cluster Manager] is intuitive to use, and with it I can effectively manage my cluster without wasting time writing scripts or synchronizing management tool revisions. Provisioning is fast and easy too. I prefer this approach over open source toolkits."
3 CUDA-Ready Clusters
1. You focus on coding, not infrastructure & toolchains
2. You're always in sync with GPUs + CUDA
3. You cross-develop with confidence and ease, maintaining and using highly customized environments
4. You choose among CUDA, OpenCL, and OpenACC in programming GPUs, and combine with MPI
5. You have converged HPC + Big Data Analytics, with access to Hadoop alongside HPC
6. You seamlessly utilize the cloud, extending into AWS and deploying OpenStack
CUDA-ready clusters are GPU developer-ready
5 Bright Cluster Manager architecture (diagram): hardware layer (CPU, GPUs, memory, disk, Ethernet, interconnect, IPMI/iLO, PDU); base OS (SLES / RHEL / CentOS / SL); cluster management daemon providing monitoring, automation, health checks, and management; workload managers (Slurm, PBS Pro, Torque/Maui, Torque/MOAB, Grid Engine, LSF); CUDA environment with compilers, libraries, debuggers, profilers; interfaces via cluster management GUI, cluster management shell, user portal, and provisioning, secured by SSL / SOAP / X.509 / iptables.
6 Unified Memory
10 NVIDIA GPU Boost
11 Modernized monitoring for HPC clusters
12 Cluster Health Management
Provide a problem-free environment for running jobs
Four elements:
1. Cluster management automation
2. Regular health checks
3. Pre-job health checks
4. Hardware stability & performance tests
All elements above are configurable and extensible
14 Syncing with GPUs + CUDA
Innovation characterizes the entire history and evolution of GPU programmability through CUDA, but that pace introduces both challenges and opportunities. Bright Computing's approach leverages:
People: proactively maintaining business and technical relationships
Process: hands-on engineering begins with release candidates
Product: preliminary to fully productized implementations
Bright Cluster Manager is released once or twice per year; updates flow continuously
16 Available Versions of the CUDA Toolkit
17 Using CUDA
20 HPC Development Environment
Compilers (GNU, Intel*, AMD, Portland*, etc.)
Debuggers and profilers (GNU, TAU, Allinea, TotalView)
MPI libraries (OpenMPI, MPICH, MPICH-MX, MVAPICH)
Other libraries (threading libraries, OpenMP, Global Arrays, HDF5, IPP, TBB, NetCDF, PETSc, etc.)
Mathematical libraries (ACML, MKL*, FFTW, GMP, GotoBLAS, ScaLAPACK, etc.)
Environment modules
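Environment modules are what let the toolchains above coexist: `module load` rewrites the caller's environment, typically prepending a package's bin and lib directories to the search paths so that version shadows the system default. A minimal Python sketch of that prepend behaviour; the install prefix and module name are illustrative, not necessarily Bright's actual layout:

```python
import os

def module_load(env, name, prefix):
    """Simulate the core of `module load`: prepend the package's
    bin/ and lib64/ directories to the relevant search paths."""
    def prepend(var, path):
        old = env.get(var, "")
        env[var] = path + (os.pathsep + old if old else "")
    prepend("PATH", f"{prefix}/bin")
    prepend("LD_LIBRARY_PATH", f"{prefix}/lib64")
    env["LOADEDMODULES"] = name  # module systems also record what is loaded

# Hypothetical prefix for illustration only.
env = {"PATH": "/usr/bin"}
module_load(env, "cuda65/toolkit", "/cm/shared/apps/cuda65/toolkit")
# env["PATH"] now lists the CUDA bin directory ahead of /usr/bin,
# so this CUDA version's nvcc is found first.
```

Unloading a module is the inverse operation (strip those entries back out), which is why switching CUDA versions through modules is safer than editing shell profiles by hand.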
21 Programming GPUs
CUDA, OpenCL, OpenACC, MPI
Tools: cuda-gdb, nvidia-smi, CUDA Utility Library, examples
3rd party: Allinea, Rogue Wave
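Of the tools listed, nvidia-smi is the one most often scripted against: its CSV query mode (along the lines of `nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used --format=csv,noheader,nounits`) emits one line per GPU, which a short Python sketch can parse for monitoring or health reporting. The sample values below are invented for illustration:

```python
import csv, io

# Invented sample of nvidia-smi's CSV query output: one line per GPU
# with index, name, GPU utilization (%), and memory used (MiB).
sample = """\
0, Tesla K40m, 87, 6203
1, Tesla K40m, 12, 402
"""

def parse_smi(text):
    """Turn nvidia-smi CSV lines into a list of per-GPU dicts."""
    gpus = []
    for row in csv.reader(io.StringIO(text), skipinitialspace=True):
        if not row:
            continue
        idx, name, util, mem = row
        gpus.append({"index": int(idx), "name": name,
                     "util_pct": int(util), "mem_used_mib": int(mem)})
    return gpus

gpus = parse_smi(sample)
busy = [g["index"] for g in gpus if g["util_pct"] > 50]  # GPUs worth flagging
```

In a live script the `sample` string would come from running nvidia-smi via `subprocess`; the parsing stays the same.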
22 CUDA Development Environment
24 HPC and Hadoop
Use GPUs for HPC and Big Data Analytics
Introduce GPUs into Hadoop clusters
Make use of Hadoop services
28 GPUs in the Cloud? The Top Four Reasons
1. You can realize possibilities using the cloud: you can scale up and scale out
2. You still realize the promise of GPU programmability via HPC in the cloud
3. Your use of the cloud is transparent: you've found ways to hide latency (constraints apply for MPI apps)
4. Your go-to apps still work in the cloud
29 Cloud Utilization Scenario I: Cluster on Demand (diagram: head node with node001 to node003)
30 Cloud Utilization Scenario II: Cluster Extension (diagram: head node with node001 to node003, extended by node004 to node007)
33 Case Study: TUAT (1)
The Customer
Engages in materials-science research
Compares computational models with physical experiments
High-resolution, 3D phase-field modeling at large scales using GPUs
The Challenge
Make available the latest innovations in GPU technology without distracting focus from research
34 Case Study: TUAT (2)
The Solution
Laboratory GPU cluster designed and implemented by HPCTech Corp.; Bright Cluster Manager deployed by HPCTech
Use Bright to fully manage the entire CUDA environment, including regular updates
Use the modules environment via Bright to manage multiple CUDA environments
Prototype simulations using the laboratory HPC cluster, including debugging and tuning code
Execute large-scale simulations using TSUBAME
The Results
35 (Figure; scale bar: 51 μm, color scale in wt.%, panels labeled by calculation steps.) Caption: Snapshots of austenite-to-ferrite transformation behavior in an Fe-C alloy simulated by a multi-phase-field method. Upper and lower panels show the time evolution of ferrite grains and carbon concentration during the phase transformation. The simulation was performed on computational grids using 8 GPUs in the lab cluster. (Prof. A. Yamanaka, TUAT)
36 (Figure: elapsed time [×1000 s] vs. number of GPUs.) Caption: Performance of multiple-GPU computation of a multi-phase-field simulation of austenite-to-ferrite transformation in an Fe-C alloy. The performance was measured by performing the simulations on the TSUBAME 2.5 supercomputer of the Tokyo Institute of Technology. The numbers of computational grids, crystal grains, and calculation steps were 512³, 4068, and 10⁵, respectively. (Prof. A. Yamanaka, TUAT, priv. comm.)
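The figure above reports elapsed time against GPU count; the standard way to read such a curve is through speedup (time on the smallest count divided by time on n GPUs) and parallel efficiency (speedup divided by the relative increase in GPU count). A short sketch of that arithmetic, using made-up timings rather than the TUAT measurements:

```python
def scaling(times):
    """times maps GPU count -> elapsed seconds. Returns, per count,
    (speedup relative to the smallest count, parallel efficiency)."""
    base_n = min(times)          # reference GPU count
    base_t = times[base_n]       # reference elapsed time
    out = {}
    for n, t in sorted(times.items()):
        speedup = base_t / t
        out[n] = (speedup, speedup * base_n / n)
    return out

# Hypothetical timings for illustration; NOT the published TUAT data.
times = {1: 8000.0, 2: 4200.0, 4: 2300.0, 8: 1400.0}
results = scaling(times)
# Efficiency below 1.0 at higher counts reflects communication and
# halo-exchange overhead, which is typical for multi-GPU stencil codes.
```

Plotting efficiency rather than raw elapsed time makes it immediately visible where adding GPUs stops paying off.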
37 Case Study: TUAT (3)
"We scientists are time-constrained," said Dr. Yamanaka. "Our priority is our research, not managing our clusters. Bright is intuitive to use, and with it I can effectively manage my cluster without wasting time writing scripts or synchronizing management tool revisions. Provisioning is fast and easy too. I prefer this approach over open source toolkits."
39 Q & A Ian Lumb, [email protected]
40 Additional Slides
43 Cluster Health Management
Goal: provide a problem-free environment for running jobs
Four elements:
1. Cluster management automation
2. Regular health checks
   Actions that return PASS, FAIL, or UNKNOWN
   Can be associated with a settable severity and a message
   Can launch an action based on any response value
3. Pre-job health checks
   Let the workload manager hold the job very briefly
   Check the health of each reserved node
   If unhealthy, take the node offline and inform the system administrator
   Let the workload manager reschedule the job to a different set of nodes
4. Hardware stability & performance tests
   Very wide range of tests
   May include disk overwrites and reboot(s)
All elements above are configurable and extensible
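The pre-job sequence above (hold the job, check each reserved node, offline any unhealthy node, let the scheduler reschedule) can be sketched in a few lines. In Bright the checks themselves are scripts returning PASS, FAIL, or UNKNOWN; the node names, the probe, and the callback wiring here are hypothetical illustrations of the flow, not Bright's API:

```python
PASS, FAIL, UNKNOWN = "PASS", "FAIL", "UNKNOWN"

def prejob_gate(reserved_nodes, checks, offline, notify):
    """Run every health check on every reserved node. Unhealthy nodes
    are taken offline and reported. Returns True only if the whole
    reservation is healthy; False means the workload manager should
    reschedule the job onto a different set of nodes."""
    healthy = True
    for node in reserved_nodes:
        for check in checks:
            result = check(node)
            if result != PASS:
                offline(node)                # drain the bad node
                notify(f"{node}: {result}")  # inform the administrator
                healthy = False
                break                        # no need to re-check this node
    return healthy

# Toy check: pretend node002's GPU probe fails.
def gpu_probe(node):
    return FAIL if node == "node002" else PASS

offlined, messages = [], []
ok = prejob_gate(["node001", "node002", "node003"], [gpu_probe],
                 offlined.append, messages.append)
```

The point of the gate is that a job never lands on a node that just failed a check; the unhealthy node is drained for repair while the job runs elsewhere.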
44 Bright API