Information Technology

Purchase of High Performance Computing (HPC) Central Compute Resources by Northwestern Researchers

Effective for FY2016

Purpose

This document summarizes the High Performance Computing (HPC) compute resources that faculty engaged in research may purchase through Northwestern University Information Technology (NUIT).

Summary of HPC Compute Resources for Research

Northwestern University Information Technology (NUIT) operates Quest with funds acquired from faculty grants, startup packages, and institutional sources. Housed in the University's central data centers, these resources benefit from a secure, reliable, and stable environment and are designed to meet the compute needs of our researchers. The Applications and Allocations Committee, composed of Northwestern faculty with staff support from NUIT, allocates access to Quest through a quarterly review process. Full Access Users can obtain full access to Quest as described in this document.

Quest has been designed to support three types of InfiniBand-connected cluster servers:

1. Compute-intensive nodes without local storage,
2. Data-intensive nodes with local storage, and
3. Visualization nodes with local storage and GPU appliances.

In addition to the resources listed above, NUIT will create dedicated virtual guest servers for Northwestern faculty researchers to host web and database services that facilitate the sharing of research results or otherwise fulfill the requirements of funding agencies. Virtual servers are provided upon request at no charge.

Quest Account Creation and Job Prioritization

The Applications and Allocations Committee allocates access to Quest. Procedures for applying to the Committee for a Quest account, or for obtaining access to an existing Quest account, can be found at: http://www.it.northwestern.edu/research/adv-research/hpc/quest/quest-account.html

Authorized users are granted access to Quest and given a 50 GB home directory. Full access to Quest is granted only to Full Access Users.

With the purchase of guaranteed computing time on Quest, as detailed in the remainder of this document, the Full Access User is guaranteed that jobs will launch on the Quest HPC cluster within 4 hours of being scheduled. Full Access Users are guaranteed the use of the annual compute-core-hours allocation resulting from their equipment purchases for a period of five years, and they may divide that allocation among members of their research group.

Joining Quest as a Full Access User has several benefits:

Professional System Administration: NUIT HPC administrators handle security patches, operating system updates, hardware repairs, account management, and system troubleshooting, so faculty and graduate students can focus on their research.

Cost Effective: Full Access Users are not charged for basic infrastructure resources or technical staff support. These institutional costs include the core InfiniBand and Ethernet network switches; power distribution units; a secure data center that is monitored 24/7; the data center operations staff; and the technical HPC consulting staff supporting researchers.

Application and Code Support: NUIT HPC Specialists help users with the installation, porting, and optimization of code on the Quest high performance computing system.

The installation cycles for new Quest nodes, the specifications of the various new Quest nodes, and the duration of Full Access Use of hardware are detailed in the remainder of this document.

Purchasing Full Access User Allocations on Quest

To purchase Full Access User allocations on Quest, the requesting researcher must be a member of the Northwestern faculty. A letter of intent must be completed and signed by the requesting faculty member and approved by NUIT, The Office of the Provost, and The Office for Research. The letter of intent must state the amount of funds that the researcher will make available to the University for the purchase of the requested compute resources, and a chart string from which the funds will be transferred. If funds are not available when the letter of intent is submitted, the requesting faculty member must provide documentation specifying when the funds will become available. (Please see Appendix A for the Letter of Intent form.)

If sponsored funds are used to purchase a Full Access User allocation, the faculty member is responsible for ensuring that the purchase is required to achieve, and is consistent with, the purpose and aims of the grant. The faculty member will complete any necessary re-budgeting of the sponsored project, if required, to ensure appropriate management of the funds.

Technical Research Computing Staff Support for Researchers

To facilitate access to and use of research computing resources, NUIT research computing technical staff work directly with Northwestern University researchers, postdocs, and students. This includes providing workshops and training sessions on using Quest, Vault, and other computing resources.

Each course is designed to train researchers to use high performance computing and research storage resources in their research projects, as well as to address individual questions and concerns. The workshops also cover new applications and technologies related to high performance computing and research computing. Additional details can be found at: http://www.it.northwestern.edu/research/adv-research/hpc/quest/workshops.html

Quest Compute Resources

The Quest HPC cluster is designed to run both compute-intensive and data-intensive application codes with the greatest economy. The Quest HPC system today comprises 7,056 compute cores and hosts over 500 account holders representing 46 academic departments, over 150 research projects, and five schools. Ten academic units have contributed funds to Quest in order to gain Full Access to the system and dedicated access to large volumes of storage. Approximately 20% of Quest nodes have local storage drives to facilitate data-intensive job runs. The types of InfiniBand-connected server clusters that Quest has been designed to support are summarized below.

Equipment Cost and Description

1. Quest Data-Intensive Compute Node with Local Storage (Intel Haswell)

   Characteristics:
   - Processors: 2x Intel Haswell 12-core E5-2680 v3 (24 cores total), 64-bit, 30 MB cache, 2.5 GHz
   - Memory: 128 GB DDR4 per node
   - Interconnect: InfiniBand FDR/14
   - Local Disk: 1 TB SATA HDD
   - Quest Parallel Storage: 48 GB

   Full Access User Cost per Node: $8,457

   Purchase of a Full Access data-intensive node provides the researcher with a minimum of 201,600 compute-core-hours of Quest HPC computing resource per year, for a duration of five years (see the illustrative arithmetic after this list).

2. Quest Visualization Compute Node with Local Storage and GPGPU (Intel Haswell)

   Characteristics:
   - Processors: 2x Intel Haswell 12-core E5-2680 v3 (24 cores total), 64-bit, 30 MB cache, 2.5 GHz
   - GPGPU: 2x NVIDIA Tesla K10
   - Memory: 128 GB DDR4 per node
   - Interconnect: InfiniBand FDR/14
   - Local Disk: 1 TB SATA HDD
   - Quest Parallel Storage: 48 GB

   Full Access User Cost per Node: Estimated at $15,100
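The annual allocation quoted for the data-intensive node is consistent with simple per-core arithmetic. The breakdown below is an illustrative reading only; the figure of 8,400 hours per core per year (roughly 350 days, or about 96% of a calendar year) is inferred from the stated totals and is not given in this document:

% Illustrative breakdown (assumption, not stated in the document):
% the 201,600 compute-core-hour annual allocation matches all 24 cores
% of one node running approximately 8,400 hours (~350 days) per year.
\[
24~\text{cores} \times 8{,}400~\tfrac{\text{hours}}{\text{core}\cdot\text{year}} = 201{,}600~\tfrac{\text{core-hours}}{\text{year}}
\]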

Visualization/GPGPU nodes are designed to facilitate the analysis and visualization of large data sets. Built around GPU appliances such as the NVIDIA Tesla, these servers are integrated with Quest through high-speed InfiniBand connections to the Quest parallel storage systems.

Installation Cycles

Routine orders for Full Access compute and visualization nodes are satisfied monthly, provided that nodes are in stock or chassis capacity is available. Orders for large numbers of nodes, or orders placed when chassis space is insufficient, are combined into an annual incremental purchase and installation cycle. This cycle begins with researcher commitments gathered from October through January; equipment is purchased in February, and installation is completed by May. The per-node cost charged to Full Access Users for the purchase of equipment is determined as an integral part of this process at the beginning of each fiscal year.

Full Access node pricing in this document is valid until August 31, 2016. NUIT will periodically review pricing for Quest computing resources. The latest version of this document, with the most current pricing, is available at: http://www.it.northwestern.edu/bin/docs/research/quest-full-access.pdf

Duration of Full Access Use

Full Access Users are guaranteed full access to Quest compute cores for a period of five years from the date of installation of the funded hardware purchase.

Appendix A

LETTER OF INTENT: Full Access User of Quest

To purchase Full Access User allocations on Quest, the requesting researcher must be a member of the Northwestern faculty. A letter of intent must be completed and signed by the requesting faculty member and reviewed by NUIT; it will then be forwarded, with NUIT's approvals, to The Office of the Provost and The Office for Research and/or Accounting Services for Research and Sponsored Projects. The letter of intent must state the amount of funds that the researcher will make available to the University to purchase the requested resources, and a chart string from which the funds will be transferred. If funds are not available when the letter of intent is submitted, the requesting faculty member must provide documentation specifying when the funds will become available.

If sponsored funds are used for the Full Access User allocations on Quest, the faculty member is responsible for ensuring that the purchase is required to achieve, and is consistent with, the purpose and aims of the grant.

For planning purposes, a plan describing how the investment should be distributed among research computing resources must be provided. This initial distribution will be considered an estimate only; when final costs are determined, the Full Access User may adjust the distribution of the investment.

See the following page for a template letter of intent.

Letter of Intent to purchase equipment and become a Full Access User of Quest

Faculty Sponsor:
Funding Total:
Funds availability date:
Services availability date:

The Faculty Sponsor commits the indicated funding amount(s) and source(s) to NUIT to purchase and install the following equipment in exchange for the described Full Access user privileges:

Item    Description of equipment and user privileges    Funding Amount    Chart string
----    --------------------------------------------    --------------    ------------

Faculty Sponsor:
NU Information Technology:
Office of the Provost:
Office for Research:
Dean's Office: