Cyberinfrastructure Resources at Clemson University


Cyberinfrastructure Resources at Clemson University Jill Gemmill Galen Collier Cyberinfrastructure Technology Integration (CITI) November 2, 2011

Outline: Vision: SC Cloud, sharing resources to build common cyberinfrastructure; the Palmetto Cluster (HPC); Condor Pool (HTC); C-Light R&E network; outreach, training, and user support; the HUBZero collaborative software platform

Clemson Datacenter: 16,000 sq. feet of powered and cooled raised floor. 50 tons of HPC equipment moved into the new area over the 2010 holidays. Power upgraded from 2 MW to 4.5 MW. "Computational Barn Raising," Dec. 27-30, 2010.

The Palmetto Cluster: both shared- and distributed-memory systems. Operates at over 115 teraflops (TF); #97 on the June 2011 Top 500 list; #2 among public academic institutions. 1,616 compute nodes (14,168 cores). Operating system: Scientific Linux 6. Myrinet 10G network interconnect. Data storage: 115 TB scratch space (not backed up) and 72 TB of backed-up storage.

Distributed Memory Systems

  Name                        Nodes                     Count  Model             Processor                        L2 Cache  Cores  Memory  Local Disk
  compute node, phase 1       node0001-0257             257    Dell PE 1950      Intel Xeon E5345 @ 2.33 GHz x 2  4 MB      8      12 GB   80 GB (SATA)
  compute node, phase 2       node0258-0515             258    Dell PE 1950      Intel Xeon E5410 @ 2.33 GHz x 2  6 MB      8      12 GB   80 GB (SATA)
  compute node, phase 3       node0516-0771             256    Sun X2200 M2 x64  AMD Opteron 2356 @ 2.3 GHz x 2   4 MB      8      16 GB   250 GB (SATA)
  compute node, phase 4       node0772-1023, 1108-1111  256    IBM dx340         Intel Xeon E5410 @ 2.33 GHz x 2  6 MB      8      16 GB   160 GB (SATA)
  compute node, phase 4.1     node1024-1107             84     IBM dx340         Intel Xeon E5410 @ 2.33 GHz x 2  6 MB      8      16 GB   160 GB (SATA)
  compute node (former CCMS)  node1112-1541             430    Sun X6250         Intel Xeon L5420 @ 2.5 GHz x 2   6 MB      8      32 GB   160 GB (SATA)
  compute node, phase 6       node1553-1622             70     HP DL 165 G7      AMD Opteron 6172 @ 2.1 GHz x 2   12 MB     24     48 GB   250 GB (SATA)

Shared Memory Systems and GPUs

  Name                         Nodes              Count  Model         Processor                       L2 Cache  Cores  Memory  Local Disk
  regular large shared memory  nodelm01-nodelm04  4      HP DL 580 G7  Intel Xeon 7542 @ 2.66 GHz x 4  18 MB     24     512 GB  146 GB (SAS)
  math sciences large memory   nodemath           1      HP DL 980 G7  Intel Xeon 7560 @ 2.66 GHz x 8            64     2 TB

GPUs: four AMD FirePro V7800 cards accessible via select compute nodes, for a total of 5,760 stream processors.

Who Uses Palmetto? [Bar chart: core hours used (millions, 0-20) by user group, since October 1, 2010 (one year of data)]

Who Built Palmetto? Clemson Condominium Program: a faculty/university partnership. CCIT provides system administration and HPC user support. Cyberinfrastructure is a university strategic priority. Collaborative opportunities: hosting of external research clusters; external condo ownership (faculty- and/or college-funded); collaborative instruction; Grid Classroom. NSF EPS 0919440.

Software Available to Users. Numerous packages available: Abaqus, ABINIT, AMPL, GROMACS, LAMMPS, MCNP, mpiBLAST, NAMD, PerfSuite, PETSc, R, TotalView, VisIt, and many more. Palmetto-based software development projects: combustion modeling; data-intensive non-parametric estimation in economics; molecular dynamics force-field development; census and education-efficacy data analysis; manufacturing and scheduling optimization; finite element analysis; and more. Both serial and parallel codes can be run.

PBS Professional 11.1. PBS Pro 11.1 is the new resource-management service used on the Palmetto Cluster. It lets you make more efficient use of your time by scripting computational tasks; PBS takes care of running those tasks and returning the results. If the cluster is full, PBS holds your tasks and runs them when resources become available. PBS ensures fair sharing of cluster resources (policy enforcement) and optimal, efficient use of available resources.
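As a minimal sketch of the scripted workflow PBS enables, a job script might look like the following; the job name, resource requests, and application command are illustrative placeholders, not Palmetto defaults:

```shell
#!/bin/bash
# Sketch of a PBS Professional job script; all values below are
# placeholders for illustration.
#PBS -N example_job
#PBS -l select=1:ncpus=8:mem=11gb
#PBS -l walltime=01:00:00
#PBS -j oe

cd "$PBS_O_WORKDIR"   # start in the directory the job was submitted from
./my_simulation input.dat > output.log
```

Submitting with qsub hands the script to the scheduler; qstat -u $USER shows it queued or running, and PBS returns the output files when the job completes.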

OrangeFS Filesystem High-performance parallel filesystem Supports very high I/O activity Specialized hardware and software Development team based at Clemson 115 TB of space, open to all users Temporary work directory for all jobs

External Use of Palmetto. NSF (EPSCoR Track 2) and DoE (CyberInstitute) funding has purchased 60+ nodes on the Palmetto cluster on behalf of SC Cloud members. These users have Condo Owner status. Currently 80+ non-Clemson users of Palmetto. Funding extends through the end of 2012. SUSTAINABILITY MODEL: the condo model, shared across SC institutions.

Easy Access to Palmetto: command-line interface from any Secure Shell (ssh) client. Transfer files to and from the cluster using FileZilla (or scp).
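For example, from any terminal with an ssh client (the hostname shown here is an assumption for illustration; use the address given in the cluster documentation):

```shell
# Log in to the cluster (hostname is illustrative)
ssh username@login.palmetto.clemson.edu

# Copy a file to your home directory on the cluster with scp;
# FileZilla provides the same transfer through a graphical interface
scp results.tar.gz username@login.palmetto.clemson.edu:~/
```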

Additional Resources for External Users User accounts with 100 GB backed-up /home directories 10 Palmetto nodes + 6 TB backed-up storage Priority job queue for EPSCoR users Special attention from cluster support staff

Desktop2Petascale.org Online resource for regularly updated training material and user guides Host site for community interaction Customizable virtual Linux environment Easy access to the Palmetto cluster, and other resources via terminal Built using HubZero platform

External User Support. Non-Clemson users simply contact Galen to have a Palmetto account created. Galen serves as each new user's primary point of contact for all support issues. Most users become independent after just a few e-mails or conversations. Users have access to regularly updated user documentation (D2P hub).

External User Support: a support-focused community of partners: Galen Collier (Clemson), Barr von Oehsen (Clemson), Clayton McCauley (C of C), Starr Hazard (MUSC), Jerry Ebalunode (USC), Jacek Jakowski (UTK), Bhanu Rekepalli (UTK).

Condor at Clemson High-Throughput Computing Typically, over 10,000 cores available Windows and Linux environments Available to OSG community We can train you to do this at your institution
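As a sketch of the high-throughput pattern Condor supports, a submit description for a batch of independent tasks might look like the following (the executable, file names, and job count are placeholders):

```text
# Save as job.sub and submit with: condor_submit job.sub
universe     = vanilla
executable   = analyze.sh
arguments    = input_$(Process).dat
output       = out.$(Process)
error        = err.$(Process)
log          = job.log
request_cpus = 1
# Queue 100 independent instances; $(Process) takes values 0-99
queue 100
```

condor_q then shows the jobs as they match to available execute slots in the pool.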

SC has a network of networks. In March 2007, all SC higher-education institutions were connected at 200 Mbps or less (a competitive disadvantage). Clemson has since brought in over $30M in CI-related funding for South Carolina.

C-Light Connectivity: connector to national and international R&E networks. Higher ed needs the high throughput and greater bandwidth for research and education. Some CIOs don't think this is needed; if you disagree, you should go talk with yours. NSF EPSCoR $6M award.

Cyberinfrastructure Ecosystem. Small-college faculty needs go beyond the desktop. An XSEDE TG/XD allocation requires demonstration of success. The last-mile issue is being addressed. Bridging human expertise: run jobs, scale up, training, friendly interface development, Campus Champions. A campus-based regional facility and science gateway bridges campus researchers to national facilities.

Science Outcomes/Impact: conference and journal publications; student training; new collaborations leading to new capabilities and discovery.

Opportunities for Collaboration. Collaboration in learning: Grid Classroom; CI Days workshops; online training resources; classroom guest teaching; training boot camps.

Collaboration in research: CI Days presentations; CI Days poster sessions; HubZero platform. New areas for collaboration: digital humanities; Social Media Listening Center; scientific visualization.

Contact Info gemmill@clemson.edu galen@clemson.edu