Altix Usage and Application Programming. Welcome and Introduction




Center for Information Services and High Performance Computing (Zentrum für Informationsdienste und Hochleistungsrechnen, ZIH). Altix Usage and Application Programming: Welcome and Introduction. Zellescher Weg 12, Tel. +49 351-463-35450, Dresden, November 30th, 2005. Wolfgang E. Nagel, Matthias S. Müller

Center for Information Services and HPC (ZIH): Central Scientific Unit at TU Dresden. Merged institution: TUD Computing Center (URZ) and Center for High Performance Computing (ZHR). Competence center for parallel computing and software tools. Strong commitment to supporting real users. Development of algorithms and methods in cooperation with users from all departments.

Responsibilities of ZIH: Providing infrastructure and qualified services for TU Dresden and Saxony. Research topics: architecture and performance analysis of high performance computers; programming methods and techniques for HPC systems; software tools to support programming and optimization; modeling algorithms of biological processes; mathematical models, algorithms, and efficient implementations. Role of mediator between vendors, developers, and users. Pick-up and preparation of new concepts, methods, and techniques. Teaching and education.

Procurement: Overall Infrastructure / Future Directions (diagram): HPC server (main memory: 4 TB) and PC farm, attached to the HPC-SAN (capacity > 50 TB) and PC-SAN (capacity > 50 TB) at 8 GB/s and 4 GB/s, with a 1.8 GB/s link to a PetaByte tape silo (capacity: 1 PB).

Contract, SGI HPC component: next-generation shared-memory platform SGI Altix (codename Tollhouse). More than 1500 of the latest Intel Itanium cores. 6 TByte main memory. Capability computing system. Focus: in-memory computing. A new research instrument to extract knowledge from data; an accelerator of theory.

Contract, SGI PC farm: system from Linux Networx. CPUs: always the latest, most powerful chips from AMD. Overall more than 700 boards with more than 2000 cores. InfiniBand networks between the nodes. Capacity computing system. Focus: throughput with excellent performance. Heterogeneous diversity is part of the design.

Center for Information Services and High Performance Computing (ZIH): Stage 1a

HRSK Stage 1a, HPC server: SGI Altix 3700 Bx2. 192x 1.5 GHz / 4 MB L3 cache Itanium 2 CPUs. 1152 GFlops/s peak performance. 768 GB shared memory (4 GB/CPU), NUMA. 1 TB local disks + 34 TB SAN. SuSE SLES 9 incl. SGI ProPack 4. Intel compilers and tools: C/C++ and Fortran compilers, MKL, VTune, Trace Collector, Trace Analyzer. Allinea DDT debugger. Hostname: merkur.hrsk.tu-dresden.de. Batch system: LSF.

HPC-SAN Stage 1a: 1x DDN RAID system S2A9500: 2x 38/42U racks (air-cooled system), 2x S2A9500 couplets (5 GB cache, 8x FC4 host ports), 16x optical SFP interface modules, 20x Blue 16-slot dual-loop FC 2Gb JBODs, 20x rack mount kits, 292x 146 GB 10k RPM FC disks (4 hot spare), 34 TB net capacity, DIRECT-GUI.

PC-Farm Stage 1a: 64 dual-CPU nodes. 128 AMD Opteron DP248 2.2 GHz (single-core) CPUs. 563.2 GFlops/s peak performance. 256 GB main memory (4 GB per node). SUSE operating system. InfiniBand 4x interconnect. 80 GB local disk per node. 21.2 TB shared disk space: 2x DDN RAID system S2A9500. Hostname: phobos.hrsk.tu-dresden.de.
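The peak figures quoted on these slides follow from cores x clock x floating-point operations per cycle. A minimal sketch reproduces them, assuming (not stated on the slides) 4 flops/cycle for Itanium 2 (two FMA units) and 2 flops/cycle per Opteron core:

```python
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    # Theoretical peak = cores * clock (GHz) * floating-point ops per cycle
    return cores * clock_ghz * flops_per_cycle

# Altix 3700 Bx2: 192 Itanium 2 CPUs at 1.5 GHz, 4 flops/cycle each
altix = peak_gflops(192, 1.5, 4)
print(altix)           # 1152.0 GFlops/s, matching the Stage 1a slide

# PC farm Stage 1a: 128 single-core Opterons at 2.2 GHz, 2 flops/cycle
farm = peak_gflops(128, 2.2, 2)
print(round(farm, 1))  # 563.2 GFlops/s
```

The same formula reproduces the later stages once core counts and clocks are updated.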

PC-SAN Stage 1a: DDN RAID system S2A9500: 2x 38/42U racks (air-cooled), 1x S2A9500 couplet (5 GB cache, 6x FC4 host ports), 8x optical SFP interface modules, 19x Blue 16-slot dual-loop FC 2Gb JBODs, 19x rack mount kits, 184x 146 GB 10k RPM FC disks (4 hot spare), 21.2 TB net capacity.

Center for Information Services and High Performance Computing (ZIH): Stages 1b and 2

Comparison of systems in 2005 (slide ratings in parentheses):

                            HPC Server            PC Farm
  Peak Performance          1152 GFlops/s (+)     563 GFlops/s (+)
  #CPUs                     192                   128
  Nodes                     1                     64
  Total Memory              768 GB                256 GB
  Total Disk                34 TB                 21 TB
  MPI                       (+)                   (+)
  OpenMP / Usable
    Shared Memory           768 GB (++)           4 GB (-)
  MPI Bandwidth             ~3.2 GB/s (++)        ~1 GB/s (+)
  MPI Latency               ~1.5-3 microseconds (++)   ~4-5 microseconds (+)

PC-Farm Stage 1b, compute partition: 128 single-CPU, 104 dual-CPU, and 24 quad-CPU nodes, all with dual-core AMD Opterons at 2.2 GHz. 3801.6 GFlops/s peak performance. 864 GB main memory (2 GB per CPU chip). SUSE operating system. InfiniBand 4x interconnect. 21.2 TB shared disk space: DDN RAID system S2A9500.

PC-Farm Stage 2, compute partition: 256 single-CPU, 128 dual-CPU, 64 quad-CPU, and 12 octal-CPU nodes, all with dual-core AMD Opterons at 2.4 GHz. 8294.4 GFlops/s peak performance. 3456 GB main memory (2 GB per core). SUSE operating system. InfiniBand 4x interconnect. 51 TB total shared disk space: DDN RAID system S2A9500.

PC-Farm final configuration, compute partition: 128+256 single-CPU, 104+128 dual-CPU, 24+64 quad-CPU, and 12 octal-CPU nodes, with dual-core AMD Opterons at 2.2-2.4 GHz. 12096 GFlops/s peak performance. 4320 GB main memory (1-2 GB per core, 2-32 GB per node). SUSE operating system. InfiniBand 4x interconnect. 51 TB total shared disk space: DDN RAID system S2A9500.
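The final-configuration figures are consistent with summing the Stage 1b and Stage 2 partitions. A sketch of the bookkeeping, assuming (as above) 2 flops per cycle per Opteron core, and reading the memory provisioning as 2 GB per CPU chip in Stage 1b and 2 GB per core in Stage 2:

```python
# (nodes, CPU sockets per node) per partition; every socket holds a dual-core Opteron
stage1b = [(128, 1), (104, 2), (24, 4)]            # 2.2 GHz parts
stage2  = [(256, 1), (128, 2), (64, 4), (12, 8)]   # 2.4 GHz parts

def cores(partition):
    # cores = nodes * sockets * 2 cores per dual-core chip
    return sum(nodes * sockets * 2 for nodes, sockets in partition)

c1b, c2 = cores(stage1b), cores(stage2)
print(c1b, c2)           # 864 1728 cores

# Combined peak: each stage's cores * clock * 2 flops/cycle
peak = c1b * 2.2 * 2 + c2 * 2.4 * 2
print(round(peak, 1))    # 12096.0 GFlops/s, matching the final-config slide

# Memory: Stage 1b at 2 GB per socket, Stage 2 at 2 GB per core
sockets_1b = sum(n * s for n, s in stage1b)   # 432 sockets
memory_gb = sockets_1b * 2 + c2 * 2
print(memory_gb)         # 4320 GB
```

This also confirms the per-stage slides: 864 cores x 2.2 GHz x 2 gives 3801.6 GFlops/s, and 1728 cores x 2.4 GHz x 2 gives 8294.4 GFlops/s.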

HPC-Server Stage 2: SGI Tollhouse. 832 Montecito CPUs. 10649.6 GFlops/s peak performance. 6 TB shared memory, NUMA. 68 TB disk. SuSE SLES 9. Intel compilers and tools: C/C++ and Fortran compilers, MKL, VTune, Trace Collector, Trace Analyzer. Allinea DDT debugger. LSF batch system.

Timeline (July 2005 to September 2006): machine room upgrade; installation of Stage 1a (test operation); installation of Stage 1b; installation of Stage 2.

Trefftz and Willers Building

New Extension

Location of Computer Rooms

Main contractor: SGI. HPC component: more than 1500 CPU cores (Intel Itanium 2, Montecito); 6 TB main memory with high bandwidth (a major decision criterion). SAN for HPC / PC: 68 / 51 TB capacity. Tape silo: 1 PB capacity. PC farm: more than 700 system boards (AMD Opteron); InfiniBand network built from 288-port switches.

Summary: Excellent components. High bandwidth in all parts of the system (design principle). Heterogeneous structure supporting both capability computing and capacity computing. A challenge for all scientific areas. A highly balanced and versatile data-intensive computing complex with excellent performance to support computational science and engineering in Saxony. It will be lots of fun!