COMPLETE SOLUTIONS FOR HIGH-PERFORMANCE COMPUTING IN INDUSTRY




HPC USAGE
Less time to market for new products in many industries: oil & gas, automotive, aerospace, energy, pharmaceutical and many others
Strategic research: climate and weather forecasting, ecology, sociology, space and nuclear research, security, financial and economic analysis
Business applications: database management, ERP for large and medium enterprises, financial databases, e-commerce systems
Scientific research: nanotechnology, genetics, materials science, quantum chemistry, molecular dynamics, particle physics, astrophysics, etc.

HPC IN INDUSTRY WORLDWIDE
Two reports by the US Council on Competitiveness and IDC (a survey of industrial companies collaborating with supercomputer centers)
Conclusions:
Companies' access to the most powerful supercomputer resources significantly increases United States competitiveness in the aerospace and automotive industries, energy, national security, pharmaceuticals, electronics, etc.
A nation losing in computing power is not competitive enough
Results:
February 2006: George Bush announced a decision to double the budgets of the Department of Energy and the National Science Foundation for programs developing and applying supercomputer technologies

HPC IN INDUSTRY WORLDWIDE
«We could not be in business without HPC»
Survey of companies collaborating with SC centers: 52 companies (energy, automotive, aerospace, pharmaceutical, biomedicine, chemistry, semiconductors, software), revenues > $1 billion, industry leaders
Reasons for the companies' collaboration with SC centers:
For 100% of the companies HPC is a fundamental business element; 75% of the companies could not exist without supercomputers
Access to the most powerful supercomputers + assistance from the highly qualified staff of the SC centers

HPC IN INDUSTRY WORLDWIDE
Collaboration results for the companies:
Increased revenues from $200K up to $57M
Development cycle reduction (up to 2 times for aerospace)
Expense reduction (up to 40% for automotive) and profit increase
Access to more powerful supercomputers enabled the next level of solutions:
55% of the companies made inventions
73% of the companies reduced their expenses and increased their profits
60% of the companies moved new products to the market faster
38% of the companies increased their revenues, 30% increased their market share
38% of the companies purchased their own HPC solutions
100% of the companies want to continue the collaboration

HPC IN INDUSTRY WORLDWIDE
Report #3 of the US Council on Competitiveness (a survey of companies using their own HPC resources): 33 companies (oil & gas, aerospace, automotive, telecommunications, entertainment, finance, semiconductors)
Most of the companies could not exist or compete on the market without HPC
HPC usage gives the respondents:
50% return on capital
500% return on investment
Development cycle reduced from 5 to 2 years (aerospace)
Tens of millions of dollars in profit
One-year payback after purchasing an HPC system

HPC IN INDUSTRY WORLDWIDE
Boeing corporation:
Computer-aided modeling reduced the number of physical prototypes 7 times (11 wing models for the 787 Dreamliner versus 77 models for the 767)
Fuel consumption and emissions are 20% lower
1 year of development cycle reduction = 2 billion dollars in savings
INCITE program grant for 200,000 CPU hours on the Oak Ridge laboratory Cray X1E supercomputer:
Project for CFD stress analysis of an airplane wing
Simulation of engine interaction with a broken turbine blade: to minimize the probability of a third-party vendor's engine failing during testing, preventing potential multimillion-dollar losses for the engine vendor and development delays for Boeing
Pratt & Whitney:
INCITE program grant for 750,000 CPU hours on the Argonne laboratory IBM BlueGene/L supercomputer
CFD modeling of gas-dynamic processes in the combustion chambers of jet engines

Boeing 787: the first airplane created using computer modeling

Pratt & Whitney jet engine modeling on a BlueGene/L supercomputer at Stanford University

VIRTUAL WIND TUNNEL FOR TESTING THE BMW SAUBER F1 CAR: the Albert2 supercomputer

HPC IN RUSSIAN INDUSTRY
According to NPO Saturn (StorageNews magazine, May 2007):
Only virtual modeling of a product makes it possible to reduce the cost of finalizing it in metal
Reducing the number of physical prototypes = shortening the jet engine development cycle by up to 3-4 times and cutting costs by up to 5-6 times
A 0.9TFlops cluster in 2005 made it possible to standardize the process of 3D aerodynamic and mechanical calculations
Next step: implementing multi-parameter optimization methods over strength, weight and aerodynamic parameters; requires more than 500 calculations for different design variants; 1 variant of fan and booster: 150M operations (40 hours on a 1TFlops cluster)
Analysis of engine components in transitional operating modes with changing rotor speed: about 10TFlops of compute power is required
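A rough back-of-the-envelope reading of those figures, assuming each variant takes the quoted 40 hours on the ~1TFlops cluster and that throughput scales roughly linearly with peak performance (the linear-scaling assumption is made here only for illustration, it is not a claim from the slide):

```c
#include <stdio.h>

int main(void)
{
    /* Figures quoted in the slide above; linear scaling is an assumption. */
    double hours_per_variant = 40.0;   /* one fan/booster variant on a ~1 TFlops cluster */
    double variants          = 500.0;  /* multi-parameter optimization campaign          */
    double scale_factor      = 10.0;   /* hypothetical 10 TFlops system vs 1 TFlops      */

    double total_1tflops  = hours_per_variant * variants;   /* total campaign hours */
    double total_10tflops = total_1tflops / scale_factor;

    printf("campaign on  1 TFlops: %.0f hours (~%.1f years of wall time)\n",
           total_1tflops, total_1tflops / (24.0 * 365.0));
    printf("campaign on 10 TFlops: %.0f hours (~%.0f days)\n",
           total_10tflops, total_10tflops / 24.0);
    return 0;
}
```

Under these assumptions the full optimization campaign would take roughly 20,000 hours on the 1TFlops machine but only about 2,000 hours on a 10TFlops system, which is consistent with the slide's call for roughly an order of magnitude more compute power.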

HPC IN RUSSIAN INDUSTRY
According to OAO KAMAZ (StorageNews magazine, 2007):
High-performance computing systems will provide cost reductions in the automotive industry of up to 10 times and will shorten the development cycle by up to 2-2.5 times
Only virtual modeling of automotive products makes it possible:
To reduce the cost of product finalization by reducing the number of prototypes and experiments
To significantly improve consumer qualities (service life, capacity, fuel consumption, service and repair costs, etc.)
To conform to legislative requirements (ecology, safety, ergonomics)
Most important tasks:
Strength and aerodynamics calculations, including air conditioning and heat exchange in the cabin
Transition to Euro-3 and Euro-4 requires numerical modeling of the diesel engine working process using not only gas dynamics but also phase-change theory

T-PLATFORMS
The leading Russian developer of integrated solutions for high-performance computing
Over 20% of the HPC solutions market according to IDC
46% of the systems in the Top50 list of the most powerful computers of Russia and the CIS
5 in-house developed systems in the Top500 list of the world's most powerful supercomputers
Patents in the supercomputing area, development of its own electronic components
Full product and services line in HPC, more than 50 HPC installations
Optimized turn-key hardware and software solutions for manufacturing, oil & gas, management

CUSTOMERS
Government and science organizations: SKIF government supercomputer program, more than 10 state universities in Russia (MSU, TSU, SPbSPU and others), ISTC, JSCC, ITMiVT, JINR, SINP MSU, NIIPA, etc.
Telecommunications: Comstar, Yandex, Rambler, Webalta, Headhunter.ru, Infotel, Batline, etc.
Oil & gas: Paradigm Geophysical, Schlumberger, Landmark, Mezhregiongaz, SibSAC, Gazpromtrans, etc.
CAE & manufacturing: RUSAL, LMZ, Sarov engineering center, CIAM, NPO Energomash, the CAE center of St. Petersburg State University, etc.
Banks & finance: Investsberbank, Bank of Housing Finances, Promenergobank, Rusfininvest, etc.

T-PLATFORMS SUPERCOMPUTING CENTER
A turn-key optimized solution with the best price/performance for customer applications, fine-tuned for maximum sustained performance on real-life tasks, with fast integration and optimal use:
HARDWARE: computing core (servers, mini-clusters, workstations), fault tolerance
STORAGE: parallel access, file backup
POWER SUPPLY AND COOLING INFRASTRUCTURE
SOFTWARE: management and application software
SERVICE: technical support, education, consulting

COMPUTING CORE
Individually designed from standard components for the best price/performance and the best efficiency on customer applications
Any modern processor architecture and interconnect technology
Proprietary patented design of the compute node barebone in a 1U or blade chassis provides optimal cooling and high reliability at ambient temperatures up to 30°C

STORAGE SYSTEMS
ReadyStorage ActiveScale Cluster: designed for Linux clusters; direct parallel data path between the storage and the compute nodes
ReadyStorage SAN Solutions: SAN solutions from entry-level to corporate storage; modular architecture
ReadyStorage NAS Solutions: efficient storage for the best price; ideal for file services and backup servers

SYSTEM SOFTWARE
Operating systems: Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Windows Compute Cluster Server 2003
Development tools: compilers (Absoft Fortran 95, Intel Fortran Compiler for Linux, Intel C++ Compiler for Linux, PathScale EKOPath Compiler Suite, PGI Server, PGI CDK), debuggers and analyzers (Allinea DDT, Etnus TotalView, Intel VTune Performance Analyzer, PathScale OptiPath)
MPI implementations: Intel MPI Library, Scali MPI Connect, Verari MPI/Pro, Verari ChaMPIon/Pro
Cluster and resource management tools: Intel Cluster Toolkit, Altair PBS Professional, Platform LSF HPC, Scali Manage, Sun N1 Grid Engine
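All of the MPI implementations listed above expose the same standard MPI C interface, so application code stays portable across them. A minimal sketch of such a program (the compiler wrapper and launcher names, such as mpicc and mpirun, vary between implementations and are assumptions here):

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal MPI program: each process reports its rank and host name. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size, name_len;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &name_len);

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

In a typical setup this would be built with an MPI compiler wrapper (e.g. mpicc hello.c -o hello) and launched across the cluster through the batch system, such as Altair PBS Professional or Platform LSF HPC.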

APPLICATION SOFTWARE
Virtual product development (engineering simulation): ANSYS, MSC/NASTRAN, MSC/MARC, ABAQUS, LS-DYNA and others
Gas and fluid dynamics: CFX, Fluent, STAR-CD, FlowVision, MAGMASOFT
Geophysical exploration and geoscience: Paradigm Geophysical, Schlumberger, Landmark
Molecular dynamics: AMBER, CHARMM, GROMACS, GROMOS, NAMD
Genomics and bioinformatics: BLAST, ASG, FDA, etc.
Climate research: WRF-Chem, MM5

CAE SOLUTIONS
INDIVIDUAL CHOICE: individual design to achieve the best price/performance for the customer's applications; any processor architecture, interconnect technology or parallel storage solution
OPTIMAL CHOICE: from personal supercomputers to ready-to-use supercomputer centers, based on solutions from T-Platforms, HP, IBM, SGI, Sun Microsystems and other vendors
INTEGRATED SOLUTION: tightly integrated computing hardware, system and application software, storage, infrastructure, security solutions, virtualization, etc.
MAINTENANCE: a complete suite of technical, consulting, educational and financial services to organize the life cycle of a CAE computing center
BUSINESS CONTINUITY: individual service contracts providing the necessary level of high availability and data integrity

CAE SOLUTIONS: T-Forge CAE and T-Edge CAE
Complete cluster solutions for computations in industry
Compute cluster optimized for the best efficiency of engineering applications such as Fluent, CFX, STAR-CD and LS-DYNA
From 8 to thousands of compute nodes in 1U or blade form factor
Based on quad-core AMD Opteron or Intel Xeon processors
Gigabit Ethernet or InfiniBand interconnect, QLogic InfiniPath support
T-Platforms ReadyStorage SAN storage system supporting SATA and FC HDDs with a total capacity of up to 67TB
Preinstalled application software
Linux or Microsoft WCCS 2003 operating system

PERSONAL SUPERCOMPUTERS
KEY FEATURES:
A high performance computing cluster in a box
Small size, low power consumption, low noise
Cost-effective, low total cost of ownership
Easy to use and maintain
Ideal for switching to high performance computing technologies
Easy to combine to scale performance
TASKS:
Individual and workgroup computational tasks
Development and debugging of parallel HPC applications
Preliminary calculations and data preparation for a larger supercomputing system

T-EDGE SMP
Shared-memory x86-64 system scalable up to 24 Intel Xeon 5100 or 5300 processors (up to 96 cores)
Up to 768GB RAM
Single Linux OS image
Optimized for mechanics, gas dynamics and molecular dynamics
Easy to use and manage: does not require knowledge of cluster systems administration
Based on standard, mass-produced components
Rackmount or tower form factor

REAL-LIFE TASKS PERFORMANCE
The architecture of T-Platforms solutions is based on benchmark results of several commercial applications such as Fluent, CFX, ANSYS and STAR-CD
For most applications a high-speed interconnect plays a key role in achieving calculation efficiency
InfiniBand typically gives a performance benefit of 30-40% even on a small number of nodes compared to standard Gigabit Ethernet
Scaling efficiency of e.g. Fluent on a small number of nodes with InfiniBand can reach 80% and more, depending on the exact problem being solved (see the sketch below)
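For clarity, the 80% figure is a parallel (scaling) efficiency: measured speedup divided by the number of nodes. A minimal sketch of that arithmetic with purely hypothetical timings (the numbers below are illustrative, not benchmark results):

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical elapsed times for the same CFD job (illustrative only). */
    double t_1node  = 1000.0;  /* seconds on 1 node  */
    double t_8nodes =  156.0;  /* seconds on 8 nodes */
    int    nodes    = 8;

    double speedup    = t_1node / t_8nodes;        /* ~6.4x              */
    double efficiency = speedup / nodes * 100.0;   /* ~80% efficiency    */

    printf("speedup    = %.2fx\n", speedup);
    printf("efficiency = %.1f%%\n", efficiency);
    return 0;
}
```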

REAL-LIFE TASKS PERFORMANCE
SKIF Cyberia supercomputer: the standard STAR-CD benchmark showed superlinear performance speed-up on 128 compute nodes (up to 290 times) with the InfiniPath interconnect
With Gigabit Ethernet, efficiency drops steeply already at 16-32 compute nodes
[Chart: performance speed-up (log scale, up to 1000x) versus number of nodes (1, 2, 4, 8, 16, 32, 64, 128) for InfiniPath, TCP (HP MPI) and TCP (MPICH)]

T-EDGE SMP: CFD & MECHANICS
[Charts: elapsed time in seconds (less is better) and relative performance (more is better) for T-Edge SMP configurations]

T-PLATFORMS ADVANTAGES
System / Peak performance, TFlops / Linpack performance, TFlops / Efficiency*, % / Normalized performance**, GFlops
SKIF Cyberia, T-Platforms (DC Intel Xeon 2.66GHz, InfiniBand): 12.00 / 9.019 / 75.1 / 5.97
BladeCenter HS21, IBM (DC Intel Xeon 3.0GHz, InfiniBand): 12.86 / 8.564 / 66.6 / 5.3
Endeavor Intel Cluster, Intel (DC Intel Xeon 3.0GHz, InfiniBand): 12.28 / 8.564 / 69.7 / 5.5
* Ratio of Linpack performance to peak performance
** Linpack performance per 1GHz of CPU frequency
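The starred efficiency column can be reproduced directly from the peak and Linpack figures in the table. A small sketch of that check (the normalized column additionally needs the aggregate CPU clock, which the slide does not list, so it is left out):

```c
#include <stdio.h>

int main(void)
{
    /* Peak and Linpack performance in TFlops, taken from the table above. */
    const char  *system_name[] = { "SKIF Cyberia", "BladeCenter HS21", "Endeavor" };
    const double peak[]        = { 12.00, 12.86, 12.28 };
    const double linpack[]     = {  9.019, 8.564, 8.564 };

    for (int i = 0; i < 3; ++i)
        printf("%-18s efficiency = %.1f%%\n",
               system_name[i], 100.0 * linpack[i] / peak[i]);
    return 0;
}
```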

T-PLATFORMS ADVANTAGES
STAR-CD performance according to CD-adapco (benchmarks STAR-CD V3240/V3260, A-Class dataset)
Application speed-up on 8 / 16 / 32 / 64 / 128 nodes:
Cray XD1 (DC AMD Opteron, RapidArray): 7.3 / 13.9 / 25 / 46.9 / -
HP XC (DC AMD Opteron, InfiniBand): 7.2 / 13.6 / 26.5 / - / -
IBM p5-575 (Power5): 5.8 / - / - / - / -
T-Platforms SKIF Cyberia (DC Intel Xeon, InfiniPath): 7.9 / 16.5 / 35.4 / 91.9 / 290

COMPLETE SOLUTION EXAMPLES
SKIF MSU SUPERCOMPUTER
The most powerful supercomputer in Russia and the CIS, #36 in the current Top500 list: the best result for a supercomputer developed in Russia
Built under the SKIF-GRID joint supercomputing program
60TFlops peak performance, 47.17TFlops Linpack performance
Uses Russian-developed blade solutions and software

COMPLETE SOLUTION EXAMPLES
Regional supercomputing center at Tomsk State University (2007): SKIF Cyberia cluster
At the moment of delivery one of the 100 most powerful computers in the world (corresponding to #72 in the Top500)
The most powerful computer in Russia, the CIS and Eastern Europe (March 2007): 12TFlops peak performance, 9.019TFlops Linpack performance (75% of peak)
Proprietary compute node design, used for the first time in the Russian HPC industry
The most recent interconnect, storage, infrastructure and system software technologies
A unique solution in Russia, the CIS and Eastern Europe, taking into account the innovations, completeness and complexity of the system

COMPLETE SOLUTION EXAMPLES
T-ForgeCAE 128 and T-ForgeCAE 24 for St. Petersburg State Polytechnic University (2006-2007):
156 dual-core AMD Opteron processors, InfiniBand
More than 1.5TFlops aggregate performance, 82% Linpack efficiency
T-Platforms ReadyStorage SAN and management software suite
Uninterruptible power infrastructure
System management software as well as application software: ANSYS, Fluent, CFX and Cadence

COMPLETE SOLUTION EXAMPLES
T-EdgeCAE 64 and T-ForgeCAE 16 for the Sarov engineering center (2005, 2006):
Complete cluster solutions based on Intel Xeon and AMD Opteron processors, optimized for the best STAR-CD performance
The clusters are used to optimize the heat output of nuclear reactors for US power plants using the STAR-CD software package

SERVICES FOR HPC CUSTOMERS
T-PLATFORMS HPC SERVICES for customers of high-performance computing centers and datacenters:
PROJECT PLANNING: design of an individual solution architecture based on customer application performance
HARDWARE SUPPORT: maintenance and technical support
SOFTWARE SUPPORT: individual tuning and support
INFRASTRUCTURE: planning, implementation, maintenance and support
TECHNOLOGY SERVICES: virtualization, GRID, security, etc.
CONSULTING SERVICES: management, budgets, etc.
EDUCATION: maintenance, administration, programming

COMPLETE SOLUTIONS FOR HIGH-PERFORMANCE COMPUTING IN INDUSTRY