Mississippi State University High Performance Computing Collaboratory: Brief Overview. Trey Breckenridge, Director, HPC




Mississippi State University
Public university (Land Grant) founded in 1878
Traditional strengths are agriculture and engineering
Student enrollment exceeds 20,000; 20% are graduate students
Carnegie classification (US only) is now RU/VH: one of 108 institutions cited for very high research activity, and one of 40 public universities holding both the RU/VH and Community Engagement classifications
MSU ranks 60th among public US universities for R&D expenditures: 6th in agricultural sciences, 31st in computer sciences, 35th in engineering research, 39th in social sciences

History of the High Performance Computing Collaboratory (HPC²)
Evolution of the National Science Foundation Engineering Research Center (ERC) for Computational Field Simulation (1990-2001)
Mission: to reduce the time and cost of complex field simulations for engineering analysis and design
1998 NASA STS-95 (John Glenn) mission: the drag chute door fell off at launch, and a Shuttle simulation was completed by the ERC during the mission, demonstrating that the ERC had reduced the CAD-to-solution time from 2 months to 2 days
A significant accomplishment and milestone

History of the HPC²
Over its 11-year life cycle as an NSF ERC, annual funding increased by an order of magnitude
Graduated from the ERC program in 2001
Life after the NSF ERC:
Maintain the same philosophies: HPC and multidisciplinary research
Broaden scope beyond computational field simulation
Become fiscally self-sufficient
Renamed the High Performance Computing Collaboratory

The HPC² Today
A coalition of centers and institutes that share a common focus on:
Advancing the state of the art in computational science and engineering through the use of high performance computing
Multi-disciplinary research
Education, research, and service
HPC² provides computing, operational, and administrative (business) functions for a group of affiliated centers/institutes
More than just a supercomputing center

The Centers/Institutes of the HPC²
Currently composed of seven independent centers and institutes:
Center for Advanced Vehicular Systems (CAVS)
Center for Battlefield Innovation (CBI)
Center for Computational Sciences (CCS)
Distributed Analytics and Security Institute (DASI)
Geosystems Research Institute (GRI)
Institute for Genomics, Biocomputing and Biotechnology (IGBB)
Northern Gulf Institute (NGI)

HPC² Research Focus Areas: Astrophysics, Computational Fluid Dynamics, Data Analytics, Computational Manufacturing, Cyber Security and Forensics, Geographic Information Systems, Genomics and Proteomics, Human Factors, Materials Modeling, Molecular Modeling, Scientific Visualization, Transportation Modeling and Planning, Unmanned Aerial Systems and Remote Sensing, and Weather and Ocean Modeling

HPC² Facilities: Buildings
Thad Cochran Research, Technology & Economic Development Park, Starkville, MS:
HPC Building: 71,000 square feet
CAVS Building: 57,000 square feet
NASA Stennis Space Center, MS Gulf Coast:
Science and Technology Building: 38,000 square feet

Technology Innovations at MSU HPC²
Myrinet: one of the first Myricom customers; wrote Myrinet drivers for several platforms
MPI: MPICH-1, the MSU/ANL implementation; MPI implementations for Sun and SGI (a minimal MPI example is sketched below)
InfiniBand: one of the first large-scale, production InfiniBand HPC clusters
Warm-water cooling: first to implement warm-water cooling on the Xeon Phi; first to use warm-water cooling in a sub-tropical environment
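For readers unfamiliar with MPI, the sketch below shows a minimal MPI program in C. It is a generic illustration, not code from MSU; it should build against any MPICH-derived implementation (for example with mpicc) and simply has each rank report its rank, the total number of ranks, and the node it runs on.

```c
/* Minimal MPI example in C: each rank reports itself.
 * Generic illustration only; builds with any MPICH-style implementation:
 *   mpicc hello.c -o hello && mpirun -np 4 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks   */
    MPI_Get_processor_name(name, &name_len); /* node this rank runs on  */

    printf("rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                          /* shut down cleanly       */
    return 0;
}
```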

TOP500 History at MSU

HPC² Computational Resources
High Performance Computing (HPC) systems:
Shadow: Cray CS300-LC; 316 TeraFLOPS; 18,240 cores (2,640 traditional / 15,600 Xeon Phi); 8 TB of main RAM; 2 TB of coprocessor RAM; FDR InfiniBand (56 Gbps); direct warm-water cooling
Talon: IBM iDataPlex; 34.4 TeraFLOPS; 3,072 cores; 6 TB of RAM; QDR InfiniBand (40 Gbps); chilled rear-door cooling
Raptor: Sun X2200 M2 cluster; 10.6 TeraFLOPS; 2,048 cores; 4 TB of RAM; 10 Gigabit/1 Gigabit Ethernet
C: Cray XT5; 8 TeraFLOPS; 1,440 cores; 360 GB of RAM

Shadow (CS300-LC) configuration
128 compute nodes, each with:
Two Intel E5-2680 v2 Ivy Bridge processors, 10 cores each (2.8 GHz)
64 GB memory (DDR3-1866)
80 GB SSD drive
Mellanox ConnectX-3 FDR InfiniBand (56 Gb/s)
Two Intel Xeon Phi 5110P coprocessors, each with 60 cores (1.053 GHz) and 8 GB GDDR5 memory
Two redundant management nodes
Three login/development nodes
Fully non-blocking Mellanox FDR InfiniBand network
Compute-node peak performance: 316 TFLOPS
Achieved 80.45% efficiency on Linpack (254.328 TFLOPS)
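The peak and efficiency figures above can be sanity-checked with simple arithmetic, as in the short C program below. Note that the per-core FLOP-per-cycle values (8 for the Ivy Bridge CPUs with AVX, 16 for the Xeon Phi with 512-bit fused multiply-add) are assumptions of this sketch and do not appear on the slide.

```c
/* Back-of-the-envelope check of Shadow's peak and Linpack numbers.
 * The per-core FLOP/cycle figures are assumptions (AVX on Ivy Bridge,
 * 512-bit FMA on Xeon Phi), not values taken from the slides.
 */
#include <stdio.h>

int main(void)
{
    const int    nodes          = 128;
    const double cpu_tflops     = nodes * 2 * 10 * 2.8e9  * 8  / 1e12; /* 2 sockets x 10 cores, ~8 DP FLOP/cycle */
    const double phi_tflops     = nodes * 2 * 60 * 1.053e9 * 16 / 1e12; /* 2 Phis x 60 cores, ~16 DP FLOP/cycle  */
    const double peak_tflops    = cpu_tflops + phi_tflops;              /* ~316 TFLOPS, matching the slide       */
    const double linpack_tflops = 254.328;                              /* measured HPL result from the slide    */

    printf("CPU peak : %6.1f TFLOPS\n", cpu_tflops);
    printf("Phi peak : %6.1f TFLOPS\n", phi_tflops);
    printf("Total    : %6.1f TFLOPS\n", peak_tflops);
    printf("HPL eff. : %5.2f %%\n", 100.0 * linpack_tflops / peak_tflops);
    return 0;
}
```

With those assumptions the CPUs contribute roughly 57 TFLOPS and the coprocessors roughly 259 TFLOPS, reproducing the quoted 316 TFLOPS peak and 80.45% HPL efficiency.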

Cray CS300-LC
Direct, warm-water cooled
Input water temperature up to 40°C (104°F)
Only system on the market that could water-cool the processors, memory, and Intel Xeon Phi (or NVIDIA GPU)
Secondary water loop with low pressure and low flow into each node
Each CPU (and each Xeon Phi) has its own water pump, providing built-in redundancy
Many water sensors, with good monitoring and alert capability
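To make the last point concrete, the hypothetical sketch below shows what a sensor-driven alert loop could look like. The read_inlet_temp_c() function is an invented placeholder, not part of Cray's management software; only the 40°C inlet limit comes from the slide.

```c
/* Hypothetical sketch of a coolant-temperature alert loop.
 * read_inlet_temp_c() is a made-up placeholder for whatever interface
 * the real cluster management stack exposes; only the 40 C inlet limit
 * comes from the slide.
 */
#include <stdio.h>
#include <unistd.h>

#define INLET_LIMIT_C 40.0   /* CS300-LC supports inlet water up to 40 C */

/* Placeholder: in a real system this would query the node's sensors. */
static double read_inlet_temp_c(void)
{
    return 33.5; /* stub value for illustration */
}

int main(void)
{
    for (;;) {
        double t = read_inlet_temp_c();
        if (t > INLET_LIMIT_C)
            fprintf(stderr, "ALERT: inlet water at %.1f C exceeds %.1f C limit\n",
                    t, INLET_LIMIT_C);
        sleep(30); /* poll every 30 seconds */
    }
    return 0;
}
```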

HPC² Computational Resources (continued)
Storage:
~1 petabyte (PB) of disk storage
9 petabytes (PB) of near-line tape storage
Desktops/laptops:
210 faculty/staff desktops and laptops
165 student desktops
120 lab, meeting-room, and other systems
Networking:
LAN: 10 Gigabit Ethernet backbone; 10 Gigabit-connected servers; 1 Gigabit Ethernet to every other device
WAN: 20 Gigabit connectivity via two geographically diverse routes

Questions?