Berkeley Research Computing Town Hall Meeting: Savio Overview
1 Berkeley Research Computing Town Hall Meeting Savio Overview
2 SAVIO - The Need Has Been Stated
Inception and design were based on a specific need articulated by Eliot Quataert and nine other faculty: "Dear Graham, We are writing to propose that UC Berkeley adopt a condominium computing model, i.e., a more centralized model for supporting research computing on campus..."
3 SAVIO - Condo Service Offering
- Purchase into Savio by contributing standardized compute hardware
- An alternative to running a cluster in a closet with grad students and postdocs
- The condo trade-off: idle resources are made available to others
- There are no (zero) operational costs for administration, colocation, base storage, optimized networking and access methods, or user services
- The scheduler gives priority access to resources equivalent to the hardware contribution
4 SAVIO - Faculty Computing Allowance
- Provides allocations to run on Savio as well as support to researchers who have not purchased Condo nodes
- 200k Service Units (core-hours) annually
- More than just compute: file systems, training/support, user services
- PIs request their allocation via survey
- Early user access (based on readiness) now; general availability planned for the fall semester
5 SAVIO - System Overview
- Similar in design to a typical research cluster
- The master node role has been broken out (management, scheduling, logins, file system, etc.)
- Home storage: enterprise level, backed up, with quotas
- Scratch space: large and fast (Lustre)
- Multiple login/interactive nodes
- DTN: Data Transfer Node
- Compute nodes are delineated based on role
6 SAVIO - System Architecture
7 SAVIO - Specification
Hardware
- Compute Nodes: 20-core, 64GB, InfiniBand
- BigMem Nodes: 20-core, 512GB, InfiniBand
Software Stack
- Scientific Linux 6 (equivalent to Red Hat Enterprise Linux 6)
- Parallelization: OpenMPI, OpenMP, POSIX threads (see the build-and-run sketch after this list)
- Intel compiler
- SLURM job scheduler
- Software environment modules
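For concreteness, a minimal sketch of building and launching an MPI program with this stack; the source file hello.c and the module name openmpi are placeholders, and the exact module names on Savio may differ:

    # Build and run an MPI program with the stack above (hello.c stands in for your own source).
    module load openmpi          # module name is an assumption; check "module avail" for the real one
    mpicc -O2 hello.c -o hello   # OpenMPI compiler wrapper
    srun -N 1 -n 20 ./hello      # one rank per core on a 20-core node, from inside a SLURM allocation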
8 SAVIO - OTP
- The biggest security threat that we encounter: STOLEN CREDENTIALS
- Credentials are stolen via keyboard sniffers installed on researchers' laptops or workstations that were incorrectly assumed to be secure
- OTP (One-Time Passwords) offers mitigation
- Easy to learn, simple to use, and works on both computers and smartphones!
9 SAVIO - Future Services
Serial/HTC Jobs
- Expanding the initial architecture beyond just HPC
- Specialized node hardware (12-core, 128GB, PCI flash storage)
- Designed for jobs that use <= 1 node; nodes are shared between jobs
GPU nodes
- GPUs are optimal for massively parallel algorithms
- Specialized node hardware (8-core, 64GB, 2x Nvidia K80)
10 Questions
11 Berkeley Research Computing Town Hall Meeting Savio User Environment
12 SAVIO - Faculty Computing Allowance
Eligibility requirements
- Ladder-rank faculty or PI on the UCB campus
- In need of compute power to solve a research problem
Allowance Request Procedure
- First fill out the Online Requirements Survey
- The allowance can be used either by the faculty member or by immediate group members
- For additional cluster accounts, fill out the Additional User Account Request Form
Allowances
- New allowances start on June 1st of every year; mid-year requests are granted a prorated allocation
- A cluster-specific project (fc_projectname) with all user accounts is set up
- A scheduler account (fc_projectname) with 200K core-hours is set up
- The annual allocation expires on May 31st of the following year
13 SAVIO - Access
Cluster access
- Connect using SSH (server name: hpc.brc.berkeley.edu; see the login sketch after this list)
- Uses OTP - One-Time Passwords (multifactor authentication)
- Multiple login nodes (users are randomly distributed across them)
Coming in the future
- NERSC's NEWT REST API for web portal development
- IPython notebooks & JupyterHub integration
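A minimal login sketch, where myuser is a placeholder for an actual cluster username:

    # Connect to a Savio login node with SSH; myuser is a placeholder username.
    ssh myuser@hpc.brc.berkeley.edu
    # At the password prompt, enter the one-time password from your OTP token or smartphone app,
    # not a static password.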
14 SAVIO - Data Storage Options
Storage
- No local storage on compute nodes; all storage is accessed over the network (either NFS or Lustre protocol)
Multiple file systems (see the quota-checking sketch after this list)
- HOME: NFS, 10GB quota, backed up, no purge
- SCRATCH: Lustre, no quota, no backups, can be purged
- Project (GROUP) space: NFS, 200GB quota, no backups, no purge
- No long-term archive
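A sketch of checking usage against these quotas from a login node; the Lustre mount path is an assumption, not a confirmed Savio path:

    # Report quota usage on the NFS file systems (HOME and GROUP space).
    quota -s
    # Report per-user usage on the Lustre SCRATCH file system (mount path assumed).
    lfs quota -h -u $USER /global/scratch
    # Fallback: total size of the home directory.
    du -sh "$HOME"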
15 SAVIO - Data Transfers
- Use only the dedicated Data Transfer Node (DTN); server name: dtn.brc.berkeley.edu
- Globus (web interface) is highly recommended for managing transfers
- Many other traditional tools are also supported on the DTN: SCP/SFTP, rsync, BBCP (see the sketch after this list)
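A sketch of a command-line transfer through the DTN using two of the tools listed above; the username and destination path are placeholders, not confirmed Savio paths:

    # One-off copy of a results directory to Savio via the DTN (username and path are placeholders).
    scp -r results/ myuser@dtn.brc.berkeley.edu:/global/scratch/myuser/
    # rsync resumes and only sends changed files, which suits repeated transfers of large data sets.
    rsync -avP results/ myuser@dtn.brc.berkeley.edu:/global/scratch/myuser/results/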
16 SAVIO - Software Support
Software module farm
- Many of the most commonly used packages are already available; in most cases packages are compiled from source
- Easy command-line tools to browse and access packages ($ module commands; see the sketch after this list)
Supported package list
- Open source tools: octave, gnuplot, imagemagick, visit, qt, ncl, paraview, lz4, git, valgrind, etc.
- Languages: GNU C/C++/Fortran compilers, Java (JRE), Python, R, etc.
- Commercial: Intel C/C++/Fortran compiler suite, Matlab with an 80-core license for MDCS
User applications
- Individual user/group-specific packages can be built from source by users
- GROUP storage space is recommended for sharing with others in the group
- Savio consultants are available to answer your questions
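A sketch of the basic module commands for browsing and loading packages; the specific module names are assumptions, so use whatever module avail actually lists:

    module avail             # list every package in the module farm
    module load gcc r        # load, e.g., the GNU compilers and R (names are assumptions)
    module list              # show what is currently loaded
    module unload r          # drop a single package
    module purge             # return to a clean environment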
17 SAVIO - Job Scheduler
SLURM Quality of Service
  QoS            Max running time/job      Max nodes/job
  savio_debug    30 minutes                4
  savio_normal   72 hours (i.e., 3 days)   24
Multiple Node Options (partitions)
  Partition      Cores/node   Memory/node   Local Storage
  savio          20           64 GB         No local storage
  savio_bigmem   20           512 GB        No local storage
  savio_htc      12           128 GB        Local PCI flash
Interaction with the Scheduler
- Only via command-line tools and utilities (see the batch-script sketch after this list)
- Online web interfaces for job management may be supported in the future via NERSC's NEWT REST API, IPython/Jupyter, or both
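A minimal batch-script sketch for the savio partition under the limits above; the account name follows the fc_projectname pattern from slide 12, the module name is an assumption, and the executable is the hello binary from the earlier compile sketch:

    #!/bin/bash
    # Minimal SLURM batch script for the savio partition (values below are illustrative).
    #SBATCH --job-name=hello-demo
    #SBATCH --account=fc_projectname     # scheduler account from the Faculty Computing Allowance
    #SBATCH --partition=savio
    #SBATCH --qos=savio_normal           # 72-hour / 24-node QoS from the table above
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=20         # nodes are exclusive, so use all 20 cores
    #SBATCH --time=05:00:00              # well under the savio_normal limit

    module load openmpi                  # module name is an assumption
    srun ./hello

Submit with sbatch and monitor with squeue -u $USER; a five-hour run of this script would be charged 1 * 20 * 5 = 100 Service Units, as described on the next slide.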
18 SAVIO - Job Accounting
- Jobs gain exclusive access to their assigned compute nodes
- Jobs are expected to be highly parallel and capable of using all the resources on the assigned nodes
- For example: running on one standard node for 5 hours uses 1 (node) * 20 (cores) * 5 (hours) = 100 core-hours (or Service Units)
19 SAVIO - How to Get Help
- Online user documentation: User Guide, New User Information
- Helpdesk: [email protected]
- Monday - Friday, 9:00 am to 5:00 pm; best effort outside working hours
20 Thank you Questions