TOOLS AND TIPS FOR MANAGING A GPU CLUSTER. Adam DeConinck HPC Systems Engineer, NVIDIA
2 Steps for configuring a GPU cluster
- Select compute node hardware
- Configure your compute nodes
- Set up your cluster for GPU jobs
- Monitor and test your cluster
3 NVML and nvidia-smi
The primary management tools mentioned throughout this talk are NVML and nvidia-smi.
- NVML: the NVIDIA Management Library. Queries state and configures GPUs; C, Perl, and Python APIs.
- nvidia-smi: the command-line client for NVML.
- GPU Deployment Kit: includes the NVML headers, docs, and nvidia-healthmon.
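Since nvidia-smi is the NVML client most admins touch first, a quick way to see what it exposes is its scriptable query interface. A minimal sketch; field names vary slightly across driver versions, so check nvidia-smi --help-query-gpu:

  # One CSV line per GPU: index, name, temperature, utilization and current ECC mode
  nvidia-smi --query-gpu=index,name,temperature.gpu,utilization.gpu,ecc.mode.current --format=csv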
4 Select compute node hardware
- Choose the correct GPU
- Select server hardware
- Consider compatibility with networking hardware
5 What GPU should I use? Tesla M-series is designed for servers:
- Passively cooled
- Higher performance
- Chassis/BMC integration
- Out-of-band monitoring
6 PCIe topology matters
The biggest factor right now in server selection is PCIe topology.
[Diagram: a unified address space (0x0000-0xFFFF) spanning system memory and GPU0/GPU1 memory, with the CPU and both GPUs on one PCIe complex]
- Direct memory access between devices with P2P transfers
- Unified addressing for system and GPUs
- Works best when all devices are on the same PCIe root or switch
7 P2P on dual-socket servers
[Diagram: GPU0/GPU1 on CPU0's x16 PCIe links, GPU2/GPU3 on CPU1's x16 PCIe links]
- P2P communication is supported between GPUs attached to the same IOH (I/O hub)
- The inter-socket link is incompatible with the PCIe P2P specification, so P2P is not supported between GPUs attached to different sockets
8 For many GPUs, use PCIe switches
[Diagram: GPU0/GPU1 behind Switch 0 and GPU2/GPU3 behind Switch 1, each switch attached to CPU0 via an x16 link]
- PCIe switches are fully supported
- Best P2P performance is between devices on the same switch
9 How many GPUs per server?
If apps use P2P heavily:
- More GPUs per node are better
- Choose servers with an appropriate PCIe topology
- Tune the application to do transfers within a PCIe complex
If apps don't use P2P:
- Performance may be dominated by host <-> device data transfers
- Favor more servers with fewer GPUs per server
10 For many devices, use PCIe switches
[Diagram: GPU0/GPU1 behind Switch 0; GPU2 and another device (e.g. a NIC) behind Switch 1; both switches on CPU0's x16 links]
- PCIe switches are fully supported for all operations
- Best P2P performance is between devices on the same switch
- P2P is also supported with other devices such as a NIC, via GPUDirect RDMA
11 GPUDirect RDMA on the network
- PCIe P2P between the NIC and GPU without touching host memory
- Greatly improved performance
- Currently supported on Cray (XK7 and XC-30) and Mellanox FDR InfiniBand
- Some MPI implementations support GPUDirect RDMA
12 Configure your compute nodes
- Configure the system BIOS
- Install and configure device drivers
- Configure GPU devices
- Set GPU power limits
13 Configure system BIOS
Configure a large PCIe address space:
- Many servers ship with 64-bit PCIe addressing turned off
- It needs to be turned on for Tesla K40 or for systems with many GPUs
- The option might be called "Enable 4G Decoding" or similar
Configure cooling for passive GPUs:
- Tesla M-series uses passive cooling and relies on system fans
- The GPU communicates thermals to the BMC to manage fan speed
- Make sure BMC firmware is up to date and fans are configured correctly
- Make sure the remote console uses the onboard VGA, not the NVIDIA GPU
14 Disable the nouveau driver
nouveau does not support CUDA and will conflict with the NVIDIA driver. Two steps to disable it:
1. Edit /etc/modprobe.d/disable-nouveau.conf:
   blacklist nouveau
   options nouveau modeset=0
2. Rebuild the initial ramdisk:
   RHEL: dracut --force
   SUSE: mkinitrd
   Debian/Ubuntu: update-initramfs -u
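For automated provisioning, the same two steps can be scripted non-interactively. A minimal sketch for a RHEL-family node (swap the ramdisk command per the list above for other distros):

  cat > /etc/modprobe.d/disable-nouveau.conf <<'EOF'
  blacklist nouveau
  options nouveau modeset=0
  EOF
  # Rebuild the initramfs so nouveau stays out of early boot
  dracut --force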
15 Install the NVIDIA driver
Two ways to install the driver:
- Command-line installer: bundled with the CUDA toolkit (developer.nvidia.com/cuda), also available stand-alone
- RPM/DEB packages: provided by NVIDIA (major versions only) or by Linux distros (on their own release schedule)
It is not easy to switch between these methods.
16 Initializing a GPU in runlevel 3
Most clusters operate at runlevel 3 (no X server), so you should initialize the GPU explicitly in an init script.
At minimum:
- Load the kernel modules: nvidia, plus nvidia_uvm with CUDA 6
- Create the device nodes with mknod
Optional steps:
- Configure compute mode
- Set driver persistence
- Set power limits
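A hedged sketch of such an init script, loosely following the sample in NVIDIA's driver documentation; the device-count logic and optional settings at the end are examples and may need adjusting for your hardware, driver, and CUDA version:

  #!/bin/bash
  # Load the NVIDIA kernel module and create device nodes at boot (runlevel 3).
  /sbin/modprobe nvidia || exit 1
  # One /dev/nvidiaN per GPU (character major 195), plus the control device.
  N=$(lspci | grep -ci 'controller.*nvidia')
  for i in $(seq 0 $((N - 1))); do
      mknod -m 666 /dev/nvidia$i c 195 $i
  done
  mknod -m 666 /dev/nvidiactl c 195 255
  # Unified memory device for CUDA 6+; its major number is assigned dynamically.
  /sbin/modprobe nvidia_uvm && \
      mknod -m 666 /dev/nvidia-uvm c $(grep nvidia-uvm /proc/devices | awk '{print $1}') 0
  # Optional steps: persistence, compute mode (power limits could also go here).
  /usr/bin/nvidia-persistenced --persistence-mode
  nvidia-smi -c EXCLUSIVE_PROCESS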
17 Install GPUDirect RDMA network drivers (if available)
- Mellanox OFED 2.1 (beta) has support for GPUDirect RDMA
- Should also be supported on Cray systems running a sufficiently recent CLE
- HW required: Mellanox FDR HCAs, Tesla K10/K20/K20X/K40
- SW required: a sufficiently recent NVIDIA driver, CUDA 5.5 or later, and the GPUDirect plugin from Mellanox
- Enables an additional kernel driver, nv_peer_mem
18 Configure driver persistence
By default, the driver unloads when the GPU is idle:
- The driver must re-load when a job starts, slowing startup
- If ECC is on, memory is cleared between jobs
The persistence daemon keeps the driver loaded while GPUs are idle:
  # /usr/bin/nvidia-persistenced --persistence-mode [--user <username>]
Result: faster job startup time, slightly lower idle power.
19 Configure ECC
Tesla and Quadro GPUs support ECC memory:
- Correctable errors are logged but not scrubbed
- Uncorrectable errors cause an error at the user and system level
- The GPU rejects new work after an uncorrectable error, until reboot
ECC can be turned off, which makes more GPU memory available at the cost of error correction/detection:
- Configured using NVML or nvidia-smi, e.g. # nvidia-smi -e 0
- Requires a reboot to take effect
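A quick sketch of checking and changing ECC state on a node (flags per the nvidia-smi man page; -i selects a specific GPU):

  # Show the current/pending ECC mode and the aggregate error counters
  nvidia-smi -q -d ECC
  # Disable ECC on GPU 0 only; the change stays pending until the next reboot
  nvidia-smi -i 0 -e 0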
20 Set GPU power limits
- Power consumption limits can be set with NVML/nvidia-smi
- Set on a per-GPU basis
- Useful in power-constrained environments
  # nvidia-smi -pl <power in watts>
- Settings don't persist across reboots, so set this in your init script
- Requires driver persistence
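For example (the 225 W figure is only an illustration; query the supported range first and stay inside it):

  # Show default, current and min/max allowed power limits for GPU 0
  nvidia-smi -i 0 -q -d POWER
  # Cap GPU 0 at 225 W (example value); repeat for each GPU as needed
  nvidia-smi -i 0 -pl 225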
21 Set up your cluster for GPU jobs
- Enable GPU integration in the resource manager and MPI
- Set up GPU process accounting to measure usage
- Configure GPU Boost clocks (or allow users to do so)
- Manage job topology on GPU compute nodes
22 Resource manager integration
Most popular resource managers have some NVIDIA integration features available: SLURM, Torque, PBS Pro, Univa Grid Engine, LSF.
- GPU status monitoring: report the current config, load sensor for utilization
- Managing process topology: GPUs as consumables, assignment using CUDA_VISIBLE_DEVICES, GPU configuration set on a per-job basis
- Health checks: run nvidia-healthmon or integrate with a monitoring system
NVIDIA integration is usually configured at compile time (open source) or as a plugin.
23 GPU process accounting
- Provides per-process accounting of GPU usage using the Linux PID
- Accessible via NVML or nvidia-smi (in comma-separated format)
- Requires the driver to be continuously loaded (i.e. persistence mode)
- No RM integration yet; use site scripts, i.e. prologue/epilogue (see the sketch below)
Enable accounting mode:            $ sudo nvidia-smi -am 1
Human-readable accounting output:  $ nvidia-smi -q -d ACCOUNTING
Output comma-separated fields:     $ nvidia-smi --query-accounted-apps=gpu_name,gpu_util --format=csv
Clear current accounting logs:     $ sudo nvidia-smi -caa
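One way to wire this into a resource manager is a prologue/epilogue pair. A sketch; the log path is made up, and the exact field list is whatever your driver reports via nvidia-smi --help-query-accounted-apps:

  # prologue: enable accounting and start from clean counters
  nvidia-smi -am 1
  nvidia-smi -caa
  # epilogue: record per-process usage for this job, then clear for the next one
  nvidia-smi --query-accounted-apps=pid,gpu_name,gpu_utilization,mem_utilization,max_memory_usage \
      --format=csv >> /var/log/gpu-accounting.csv
  nvidia-smi -caa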
24 MPI integration with CUDA
- Recent versions of most MPI libraries support sending/receiving directly from CUDA device memory: OpenMPI 1.7+, MVAPICH2 1.8+, Platform MPI, Cray MPT
- Typically needs to be enabled for the MPI at compile time
- Depending on version and system topology, may also support GPUDirect RDMA
- Non-CUDA apps can use the same MPI without problems (but might link libcuda.so even if not needed)
- Enable this in the MPI modules provided for users
25 GPU Boost (user-defined clocks)
Use power headroom to run at higher clocks.
[Chart: Tesla K40 with GPU Boost vs. Tesla K40 at base clocks, showing speedups of roughly 11-25% on benchmarks including AMBER (SPFP-TRPCage), LAMMPS (EAM), and NAMD 2.9 (APOA1)]
26 GPU Boost (user-defined clocks)
Configure with nvidia-smi:
- nvidia-smi -q -d SUPPORTED_CLOCKS lists supported clock pairs
- nvidia-smi -ac <memory clock,graphics clock> sets application clocks
- nvidia-smi -q -d CLOCK shows the current mode
- nvidia-smi -rac resets all clocks
- nvidia-smi -acp 0 allows non-root users to change clocks
Changing clocks doesn't affect the power cap; configure that separately. Requires driver persistence. Currently supported on K20, K20X and K40.
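Put together, a typical sequence looks like this. The 3004,875 MHz pair is the commonly documented maximum for the K40; always pick a pair from the SUPPORTED_CLOCKS output for your own board:

  # List valid memory,graphics clock pairs
  nvidia-smi -q -d SUPPORTED_CLOCKS
  # Apply one pair (memory clock first, then graphics clock)
  nvidia-smi -ac 3004,875
  # Optionally let non-root users change application clocks themselves
  nvidia-smi -acp 0
  # Reset to default application clocks
  nvidia-smi -rac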
27 Managing CUDA contexts with compute mode
Compute mode determines how GPUs manage multiple CUDA contexts:
- 0/DEFAULT: accept simultaneous contexts
- 1/EXCLUSIVE_THREAD: single context allowed, from a single thread
- 2/PROHIBITED: no CUDA contexts allowed
- 3/EXCLUSIVE_PROCESS: single context allowed, multiple threads OK; the most common setting in clusters
Changing this setting requires root access, but it sometimes makes sense to make it user-configurable.
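Setting it is a one-liner; a sketch for the cluster-typical exclusive-process mode (root required, and like other settings it does not survive a driver unload without persistence):

  # Numeric (3) and symbolic forms are equivalent
  nvidia-smi -c EXCLUSIVE_PROCESS
  # or, for a single GPU only:
  nvidia-smi -i 0 -c 3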
28 N processes on 1 GPU: MPS
The Multi-Process Server allows multiple processes to share a single CUDA context.
- Improved performance where multiple processes share a GPU (vs. multiple open contexts)
- Easier porting of MPI apps: can continue to use one rank per CPU, but all ranks can access the GPU
- Server process: nvidia-cuda-mps-server; control daemon: nvidia-cuda-mps-control
[Diagram: application ranks funneling through a persistent MPS context onto GPU0]
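A minimal single-GPU startup sequence, roughly following the MPS documentation (the pipe and log directories use their defaults here and can be overridden with CUDA_MPS_PIPE_DIRECTORY / CUDA_MPS_LOG_DIRECTORY):

  # Restrict MPS to GPU 0 and make it the only way onto that GPU
  export CUDA_VISIBLE_DEVICES=0
  nvidia-smi -i 0 -c EXCLUSIVE_PROCESS
  # Start the control daemon; it launches nvidia-cuda-mps-server on demand
  nvidia-cuda-mps-control -d
  # ... run the MPI job; all ranks now share one CUDA context ...
  # Shut MPS down afterwards
  echo quit | nvidia-cuda-mps-control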
29 PCIe-aware process affinity
To get good performance, CPU processes should be scheduled on cores local to the GPUs they use. There are no good out-of-the-box tools for this yet:
- hwloc can help identify CPU <-> GPU locality
- The PCIe device ID can be used with NVML to map to the CUDA rank
- Set process affinity with MPI or numactl
[Diagram: GPU 0/1 attached to CPU0's x16 PCIe links, GPU 2/3 attached to CPU1's]
Possible admin actions:
- Documentation: node topology and how to set affinity
- Wrapper scripts using numactl to set the recommended affinity (see the sketch below)
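A hypothetical wrapper along those lines, written here for Open MPI's OMPI_COMM_WORLD_LOCAL_RANK variable; the rank-to-GPU-to-socket mapping is invented for a 4-GPU dual-socket box and must be derived from your own topology (hwloc, or nvidia-smi topo -m on newer drivers):

  #!/bin/bash
  # gpu-affinity-wrapper.sh (hypothetical): launch as  mpirun ... ./gpu-affinity-wrapper.sh ./app args
  LOCAL_RANK=${OMPI_COMM_WORLD_LOCAL_RANK:-0}
  case $LOCAL_RANK in
      0|1) NODE=0 ;;   # GPUs 0-1 hang off socket 0 in this example topology
      2|3) NODE=1 ;;   # GPUs 2-3 hang off socket 1
      *)   NODE=0 ;;
  esac
  export CUDA_VISIBLE_DEVICES=$LOCAL_RANK
  # Pin the rank's CPU threads and memory to the socket nearest its GPU
  exec numactl --cpunodebind=$NODE --membind=$NODE "$@"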
30 Multiple user jobs on a multi-GPU node
- The CUDA_VISIBLE_DEVICES environment variable controls which GPUs are visible to a process; it is a comma-separated list of devices, e.g. export CUDA_VISIBLE_DEVICES=0,2
- Tooling and resource manager support exists but is limited
- Example: configure SLURM with CPU <-> GPU mappings; SLURM will use cgroups and CUDA_VISIBLE_DEVICES to assign resources (see the sketch below)
- Limited ability to manage process affinity this way
- Where possible, assign all of a job's resources on the same PCIe root complex
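For the SLURM example, the GPU-to-CPU mapping lives in gres.conf on each node. A sketch for the same hypothetical 4-GPU dual-socket box; syntax follows the gres.conf man page of that era, and slurm.conf also needs GresTypes=gpu plus Gres=gpu:4 on the node line:

  # /etc/slurm/gres.conf (sketch)
  Name=gpu File=/dev/nvidia0 CPUs=0-7
  Name=gpu File=/dev/nvidia1 CPUs=0-7
  Name=gpu File=/dev/nvidia2 CPUs=8-15
  Name=gpu File=/dev/nvidia3 CPUs=8-15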
31 Monitor and test your cluster
- Use nvidia-healthmon to do GPU health checks on each job
- Use a cluster monitoring system to watch GPU behavior
- Stress test the cluster
32 Automatic health checks: nvidia-healthmon
Runs a set of fast sanity checks against each GPU in the system:
- Basic sanity checks
- PCIe link configuration and bandwidth between host and peers
- GPU temperature
All checks are configurable; set them up based on your system's expected values.
Use your cluster health checker to run this for every job:
- Single command to run all checks
- Returns 0 if successful, non-zero if a test fails
- Does not require root to run
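A sketch of using it as a job prologue; the exact way to offline a node or requeue a job differs per resource manager, so here the non-zero exit is simply left for the RM to act on:

  #!/bin/bash
  # Job prologue: run the GPU sanity checks before handing the node to a job
  if ! nvidia-healthmon; then
      echo "nvidia-healthmon failed on $(hostname)" >&2
      exit 1    # non-zero exit signals the resource manager to hold/offline the node
  fi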
33 Use a monitoring system with NVML support. Examples: Ganglia, Nagios, Bright Cluster Manager, Platform HPC. Or write your own plugins using NVML.
34 Good things to monitor
- GPU temperature: check for hot spots; monitor with NVML or out-of-band via the system BMC
- GPU power usage: higher than expected power usage => possible HW issues
- Current clock speeds: lower than expected => power capping or HW problems; check "Clocks Throttle Reasons" in nvidia-smi
- ECC error counts
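Most of these can be pulled in one scriptable call for a monitoring plugin. A sketch; field names follow nvidia-smi --help-query-gpu and may differ slightly by driver version:

  # One CSV sample per GPU every 60 seconds: temperature, power, clocks, throttle reasons, ECC
  nvidia-smi --query-gpu=index,temperature.gpu,power.draw,clocks.sm,clocks.mem,clocks_throttle_reasons.active,ecc.errors.corrected.volatile.total \
      --format=csv -l 60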
35 Good things to monitor (continued)
- Xid errors in syslog may indicate a HW error or a programming error; common non-HW causes: out-of-bounds memory access (13), illegal access (31), bad termination of program (45)
- Turn on PCIe parity checking with EDAC:
  modprobe edac_core
  echo 1 > /sys/devices/system/edac/pci/check_pci_parity
- Then monitor the value of /sys/devices/<pciaddress>/broken_parity_status
36 Stress-test your cluster
- The best workload for testing is the user application; alternatively use the CUDA samples or benchmarks (like HPL)
- Stress the entire system, not just the GPUs, and do repeated runs in succession
Things to watch for:
- Inconsistent performance between nodes: config errors on some nodes
- Inconsistent performance between runs: cooling issues; check GPU temperatures
- Slow GPUs / PCIe transfers: misconfigured SBIOS, seating issues
Get pilot users with stressful workloads and monitor during their runs. Use successful test data to set stricter bounds for monitoring and healthmon.
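A crude burn-in sketch using one of the CUDA samples; bandwidthTest ships with the toolkit, real user applications are a better stressor, and the log path is arbitrary:

  # Repeated host<->device bandwidth runs; compare logs across nodes to spot outliers
  for run in $(seq 1 50); do
      ./bandwidthTest --device=all --csv >> /tmp/bandwidth-$(hostname).log
  done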
37 Always use the serial number to identify bad boards
There are multiple possible ways to enumerate GPUs: PCIe, NVML, and the CUDA runtime. These may not be consistent with each other or between boots!
- The serial number always maps to the physical board and is printed on the board
- The UUID always maps to the individual GPU (i.e., 2 UUIDs and 1 serial number if a board has 2 GPUs)
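nvidia-smi can print the mapping between enumeration order and physical identity directly, which is handy to log at boot. A sketch; serial numbers are only populated on Tesla-class boards:

  # Record index -> PCI bus ID -> board serial -> GPU UUID for later RMA triage
  nvidia-smi --query-gpu=index,pci.bus_id,serial,uuid --format=csv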
38 Key take-aways
- Topology matters! For both HW selection and job configuration; you should provide tools which expose this to your users
- Use NVML-enabled tools for GPU configuration and monitoring (or write your own!)
- Lots of hooks exist for cluster integration and management, and for third-party tools
39 Where to find more information
- docs.nvidia.com
- developer.nvidia.com/cluster-management
- Documentation in the GPU Deployment Kit
- man pages for the tools (nvidia-smi, nvidia-healthmon, etc.)
- Other talks in the "Clusters and GPU Management" tag here at GTC
40 #GTC14