COMPLETE SOLUTIONS FOR HIGH-PERFORMANCE COMPUTING IN INDUSTRY
- Tobias Garrett
1 COMPLETE SOLUTIONS FOR HIGH-PERFORMANCE COMPUTING IN INDUSTRY
2 HPC USAGE
- Faster time to market for new products in several industrial areas: oil & gas, automotive, aerospace, energy, pharmaceuticals and many others
- Strategic research: climate and weather forecasting, ecology, sociology, space and nuclear research, security, financial and economic analysis
- Business applications: database management, ERP for large and medium enterprises, financial databases, e-commerce systems
- Scientific research: nanotechnology, genetics, materials science, quantum chemistry, molecular dynamics, particle physics, astrophysics, etc.
3 HPC IN INDUSTRY WORLDWIDE
Two reports by the US Council on Competitiveness and IDC (a survey of industrial companies collaborating with supercomputer centers)
Conclusions:
- Companies' access to the most powerful supercomputer resources significantly increases United States competitiveness in the aerospace and automotive industries, energy, national security, pharmaceuticals, electronics, etc.
- A nation losing in computing power is not competitive enough
Results:
- February 2006: George Bush announced a decision to double the budgets of the Department of Energy and the National Science Foundation for programs developing and applying supercomputer technologies
4 HPC IN INDUSTRY WORLDWIDE
«We could not be in business without HPC»
Survey of companies collaborating with supercomputing centers:
- 52 companies (energy, automotive, aerospace, pharmaceutical, biomedicine, chemistry, semiconductors, software) with revenues > $1 billion; industry leaders
Reasons for collaborating with supercomputing centers:
- For 100% of the companies HPC is a fundamental business element; 75% of the companies could not exist without supercomputers
- Access to the most powerful supercomputers plus assistance from the highly qualified staff of the centers
5 HPC IN INDUSTRY WORLDWIDE
Collaboration results for the companies:
- Increased revenues, from $200K up to $57M
- Development cycle reduced (up to 2 times for aerospace)
- Expenses reduced (up to 40% for automotive) and profits increased
Access to more powerful supercomputers provided the next level of solutions:
- 55% of the companies made inventions
- 73% reduced expenses and increased profits
- 60% moved new products to market faster
- 38% increased revenues; 30% increased market share
- 38% purchased their own HPC solutions
- 100% want to continue the collaboration
6 HPC IN INDUSTRY WORLDWIDE
Report #3 of the US Council on Competitiveness (survey of companies using their own HPC resources):
- 33 companies (oil & gas, aerospace, automotive, telecommunications, entertainment, finance, semiconductors)
- Most companies cannot exist or compete on the market without HPC
HPC usage gives the respondents:
- 50% return on capital
- 500% return on investment
- Development cycle cut from 5 to 2 years (aerospace)
- Tens of millions of dollars in profit
- One-year payback after purchasing an HPC system
7 HPC IN INDUSTRY WORLDWIDE
Boeing:
- Computer-aided modeling reduced the number of physical samples 7 times (11 wing models for the 787 Dreamliner versus 77 models for the 767)
- 20% reduction in fuel consumption and emissions
- 1 year cut from the development cycle = $2 billion in savings
- INCITE program grant for CPU hours on the Oak Ridge laboratory Cray X1E supercomputer: CFD stress analysis of an airplane wing; testing the engine's interaction with a broken turbine blade, to minimize the probability of a third-party engine failure during testing and so prevent potential multimillion-dollar losses for the engine vendor and development delays for Boeing
Pratt & Whitney:
- INCITE program grant for CPU hours on the Argonne laboratory IBM BlueGene/L supercomputer: CFD modeling of gas-dynamic processes in jet engine combustion chambers
8 Boeing 787: the first airplane created using computer modeling
9 Pratt & Whitney jet engine modeling on the BlueGene/L supercomputer at Stanford University
10 VIRTUAL WIND TUNNEL FOR TESTING THE BMW-SAUBER F1 CAR: the Albert2 supercomputer
11 HPC IN RUSSIAN INDUSTRY
According to NPO Saturn (StorageNews magazine, May 2007):
- Only virtual modeling of a product makes it possible to reduce the cost of finishing it in metal
- Reducing the number of physical samples = cutting the development cycle of jet engine production 3-4 times and its cost 5-6 times
- A 0.9 TFlops cluster in 2005 made it possible to standardize the process of 3D aerodynamic and mechanical calculations
Next step: implement multi-parameter optimization over strength, weight and aerodynamic parameters:
- requires more than 500 calculations for different design variants
- one variant of the fan and booster: 150M operations (40 hours on a 1 TFlops cluster)
- engine component analysis for transitional operating modes with changing rotor speed: about 10 TFlops of compute power required
12 HPC IN RUSSIAN INDUSTRY
According to OAO KAMAZ (StorageNews magazine, 2007):
- High-performance computing systems will cut costs in the automotive industry up to 10 times and shorten the development cycle up to … times
Only virtual modeling of automotive products allows:
- reducing the cost of product finishing by cutting the number of samples and experiments
- significantly improving consumer qualities (service life, capacity, fuel consumption, service and repair costs, etc.)
- conforming to legislative requirements (ecology, safety, ergonomics)
Most important tasks:
- strength and aerodynamics calculations, including air conditioning and heat exchange in the cabin
- transition to Euro-3 and Euro-4 requires numerical modeling of the diesel engine's working process using not only gas dynamics but also phase-change theory
13 T-PLATFORMS
- leading Russia-based developer of integrated solutions for high-performance computing
- over 20% of the HPC solutions market according to IDC
- 46% of the systems in the Top50 list of the most powerful computers in Russia and the CIS
- 5 in-house developed systems in the Top500 list of the world's most powerful supercomputers
- patents in the supercomputer area; development of its own electronic components
- full product and services line in HPC; more than 50 HPC installations
- optimized turn-key hardware and software solutions for manufacturing, oil & gas, management
14 CUSTOMERS
- Government and science organizations: SKIF government supercomputer program, more than 10 state universities in Russia (MSU, TSU, SPbSPU, …), ISTC, JSCC, ITMiVT, JINR, SINP MSU, NIIPA, etc.
- Telecommunications: Comstar, Yandex, Rambler, Webalta, Headhunter.ru, Infotel, Batline, etc.
- Oil & Gas: Paradigm Geophysical, Schlumberger, Landmark, Mezhregiongaz, SibSAC, Gazpromtrans, etc.
- CAE & Manufacturing: RUSAL, LMZ, …, Sarov engineering center, CIAM, NPO Energomash, the CAE center of St. Petersburg State University, etc.
- Banks & Finance: Investsberbank, Bank of Housing Finance, Promenergobank, Rusfininvest, etc.
15 T-PLATFORMS SUPERCOMPUTING CENTER
A turn-key optimized solution with the best price/performance for customer applications:
- COMPUTING CORE: servers, mini-clusters, workstations
- STORAGE: parallel access, fault tolerance, file backup
- POWER SUPPLY AND COOLING INFRASTRUCTURE
- SOFTWARE: management; fine-tuning for maximum sustained performance on real-life tasks
- SERVICE: technical support, education, consulting, application support; fast integration, optimal use
16 COMPUTING CORE
- individually designed from standard components for the best price/performance and the best efficiency on customer applications
- any modern processor architecture and interconnect technology
- proprietary patented compute-node barebone design in a 1U or blade chassis provides optimal cooling and high reliability at ambient temperatures up to 30°C
17 STORAGE SYSTEMS
ReadyStorage ActiveScale Cluster:
- designed for Linux clusters
- direct parallel data path between the storage and compute nodes
ReadyStorage SAN Solutions:
- SAN solutions from entry level to corporate storage
- modular architecture
ReadyStorage NAS Solutions:
- efficient storage at the best price
- ideal for file services and backup servers
18 SYSTEM SOFTWARE
- Operating systems: Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Windows Compute Cluster Server 2003
- Development tools: compilers (Absoft Fortran 95, Intel Fortran Compiler for Linux, Intel C++ Compiler for Linux, PathScale EKOPath Compiler Suite, PGI Server, PGI CDK), debuggers (Allinea DDT, Etnus TotalView), Intel VTune performance analyzer, PathScale OptiPath
- MPI implementations: Intel MPI Library, Scali MPI Connect, Verari MPI/Pro, Verari ChaMPIon/Pro
- Cluster and resource management tools: Intel Cluster Toolkit, Altair PBS Professional, Platform LSF HPC, Scali Manage, Sun N1 Grid Engine
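On a stack like this, engineering runs are typically submitted through the batch scheduler rather than launched by hand. A minimal sketch of a PBS Professional job script for an MPI run (the job name, resource counts and solver binary are illustrative assumptions, not site defaults):

```shell
#!/bin/sh
# Illustrative PBS Professional job script; names and resource
# counts below are placeholders, not values from this document.
#PBS -N cfd_run              # job name
#PBS -l nodes=4:ppn=2        # 4 compute nodes, 2 processors per node
#PBS -l walltime=08:00:00    # wall-clock limit
#PBS -j oe                   # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"          # start in the submission directory
# Launch 8 MPI ranks; the exact mpirun flavor depends on which of the
# MPI implementations listed above is installed.
mpirun -np 8 ./my_cfd_solver input.cas
```

The script is submitted with `qsub`, and the scheduler places the ranks on the allocated nodes.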
19 APPLICATION SOFTWARE
- Virtual product development (engineering simulation): ANSYS, MSC.NASTRAN, MSC.MARC and others, ABAQUS, LS-DYNA
- Gas and fluid dynamics: CFX, Fluent, STAR-CD, FlowVision, MAGMASOFT
- Geophysical exploration and geoscience: Paradigm Geophysical, Schlumberger, Landmark
- Molecular dynamics: AMBER, CHARMM, GROMACS, GROMOS, NAMD
- Genomics and bioinformatics: BLAST, ASG, FDA and others
- Climate research: WRF-Chem, MM5
20 CAE SOLUTIONS
- INDIVIDUAL CHOICE: individual design to achieve the best price/performance for the customer's applications; any processor architectures, interconnect technologies, parallel storage solutions
- OPTIMAL CHOICE: from personal supercomputers to ready-to-use supercomputer centers based on solutions from T-Platforms, HP, IBM, SGI, Sun Microsystems and other vendors
- INTEGRATED SOLUTION: tightly integrated computing hardware, system and application software, storage, infrastructure, security solutions, virtualization, etc.
- MAINTENANCE: complete suite of technical, consulting, educational and financial services to support the life cycle of a CAE computing center
- BUSINESS CONTINUITY: individual service contracts providing the necessary level of high availability and data integrity
21 CAE SOLUTIONS
T-Forge CAE and T-Edge CAE: complete cluster solutions for industrial computations
- compute cluster optimized for the best efficiency on engineering applications such as Fluent, CFX, STAR-CD and LS-DYNA
- from 8 to thousands of compute nodes in 1U or blade form factor
- based on quad-core AMD Opteron or Intel Xeon processors
- Gigabit Ethernet or InfiniBand interconnect, QLogic InfiniPath support
- T-Platforms ReadyStorage SAN storage system supporting SATA and FC HDDs with a total volume up to 67TB
- preinstalled application software
- Linux or Microsoft WCCS 2003 operating system
22 PERSONAL SUPERCOMPUTERS
KEY FEATURES:
- high-performance computing cluster in a box
- small size, low power consumption, low noise
- cost-effective, low total cost of ownership
- easy to use and maintain
- ideal entry point to high-performance computing technologies
- easy to combine to scale performance
TASKS:
- individual and workgroup computational tasks
- development and debugging of parallel HPC applications
- preliminary calculations and data preparation for a larger supercomputing system
23 T-EDGE SMP
- shared-memory x86-64 system
- scalable up to 24 Intel Xeon 5100 or 5300 processors (up to 96 cores)
- up to 768GB RAM
- single Linux OS image
- optimized for mechanics, gas dynamics and molecular dynamics
- easy to use and manage: does not require cluster administration skills
- based on standard, mass-produced components
- rackmount or tower form factor
24 REAL-LIFE TASKS PERFORMANCE
- The architecture of T-Platforms solutions is based on benchmark results from several commercial applications such as Fluent, CFX, ANSYS and STAR-CD
- For most applications a high-speed interconnect plays the key role in achieving calculation efficiency
- InfiniBand typically gives a 30-40% performance benefit over standard Gigabit Ethernet even on a small number of nodes
- Parallel efficiency of e.g. Fluent on a small number of nodes over InfiniBand can reach 80% and more, depending on the exact problem being solved
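Figures like "80% scalability" above are parallel efficiency: the achieved speedup divided by the ideal (linear) speedup. A minimal sketch of that calculation; the elapsed times below are illustrative placeholders, not measured Fluent results:

```python
# Speedup and parallel efficiency from per-run wall-clock times.
# Timings are hypothetical examples, not benchmark data.

def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup relative to the single-node run."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, nodes: int) -> float:
    """Parallel efficiency: achieved speedup / ideal speedup (= node count)."""
    return speedup(t_serial, t_parallel) / nodes

# Hypothetical elapsed times in seconds for 1, 2, 4 and 8 nodes:
times = {1: 1000.0, 2: 520.0, 4: 280.0, 8: 150.0}
for n, t in times.items():
    print(f"{n} nodes: speedup {speedup(times[1], t):.2f}, "
          f"efficiency {efficiency(times[1], t, n):.0%}")
```

An efficiency above 100% is the "superlinear" case mentioned on the next slide, usually caused by the problem fitting into the combined caches of many nodes.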
25 REAL-LIFE TASKS PERFORMANCE
SKIF Cyberia supercomputer:
- the standard STAR-CD benchmark showed superlinear speedup on 128 compute nodes (up to 290 times) with the InfiniPath interconnect
- with Gigabit Ethernet, efficiency falls steeply already at 16-32 compute nodes
[Chart: performance speedup vs. number of nodes for InfiniPath, TCP (HP MPI) and TCP (MPICH)]
26 T-EDGE SMP: CFD & MECHANICS
[Charts: elapsed time in seconds (lower is better) and relative performance (higher is better) for T-Edge SMP configurations]
27 T-PLATFORMS ADVANTAGES

System                                                            Peak, Tflops  Linpack, Tflops  Efficiency*, %  Normalized performance**, Gflops
SKIF Cyberia (T-Platforms, DC Intel Xeon 2.66GHz, InfiniBand)     12.00         9.019            75.1            5.97
BladeCenter HS21 (IBM, DC Intel Xeon 3.0GHz, InfiniBand)          12.86         8.564            66.6            5.3
Endeavor Intel Cluster (Intel, DC Intel Xeon 3.0GHz, InfiniBand)  12.28         8.564            69.7            5.5

* Ratio of Linpack performance to peak performance
** Linpack performance per 1GHz of CPU frequency
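The efficiency column follows directly from the other two: it is the Linpack result divided by the peak performance. A quick check, using the peak and Linpack values from the table above, reproduces the stated percentages to within rounding:

```python
# Recompute the efficiency column (* Linpack / peak) from the table values.
systems = {
    "SKIF Cyberia": (12.00, 9.019),
    "BladeCenter HS21": (12.86, 8.564),
    "Endeavor Intel Cluster": (12.28, 8.564),
}
for name, (peak_tflops, linpack_tflops) in systems.items():
    print(f"{name}: {linpack_tflops / peak_tflops:.1%}")
```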
28 T-PLATFORMS ADVANTAGES
STAR-CD performance according to CD-adapco (benchmarks STAR-CD V3240/V3260, A-Class dataset): application speedup at 8, 16, 32, 64 and 128 nodes for:
- Cray XD1 (DC AMD Opteron, RapidArray)
- HP XC (DC AMD Opteron, InfiniBand)
- IBM p5-575 (Power5)
- T-Platforms SKIF Cyberia (DC Intel Xeon, InfiniPath)
[Table: speedup values not preserved in the transcription]
29 COMPLETE SOLUTION EXAMPLES
SKIF MSU SUPERCOMPUTER:
- the most powerful in Russia and the CIS, #36 in the current Top500 list
- the best result for a supercomputer developed in Russia
- built under the SKIF-GRID joint supercomputing program
- 60 Tflops peak performance, 47.17 Tflops Linpack performance
- uses Russian blade solutions and software
30 COMPLETE SOLUTION EXAMPLES
Regional supercomputing center at Tomsk State University (2007), SKIF Cyberia cluster:
- at the moment of delivery, one of the 100 most powerful computers in the world (corresponding to #72 in the Top500)
- the most powerful computer in Russia, the CIS and Eastern Europe (March 2007): 12 TFlops peak performance, 9.019 TFlops Linpack performance (75% of peak)
- proprietary compute node design, a first for the Russian HPC industry
- the most recent interconnect, storage, infrastructure and system software technologies
- a unique solution in Russia, the CIS and Eastern Europe, taking into account the innovation, completeness and complexity of the system
31 COMPLETE SOLUTION EXAMPLES
T-ForgeCAE 128 and T-ForgeCAE 24 for St. Petersburg State Polytechnical University:
- 156 dual-core AMD Opteron processors, InfiniBand
- more than 1.5 TFlops aggregate performance, 82% Linpack efficiency
- T-Platforms ReadyStorage SAN and management software suite
- uninterruptible power infrastructure
- system management software plus application software: ANSYS, Fluent, CFX and Cadence
32 COMPLETE SOLUTION EXAMPLES
T-EdgeCAE 64 and T-ForgeCAE 16 for the Sarov engineering center (2005, 2006):
- complete cluster solutions based on Intel Xeon and AMD Opteron processors, optimized for the best STAR-CD performance
- the clusters are used to optimize the heat output of nuclear reactors for US power plants using the STAR-CD software package
33 SERVICES FOR HPC CUSTOMERS
T-PLATFORMS HPC SERVICES for customers of high-performance computing and datacenters:
- PROJECT PLANNING: design of an individual solution architecture based on the performance of customer applications
- HARDWARE SUPPORT: maintenance and technical support
- SOFTWARE SUPPORT: individual tuning and support
- INFRASTRUCTURE: planning, implementation, maintenance and support
- TECHNOLOGY SERVICES: virtualization, GRID, security, etc.
- CONSULTING SERVICES: management, budgets, etc.
- EDUCATION: maintenance, administration, programming
34 COMPLETE SOLUTIONS FOR HIGH-PERFORMANCE COMPUTING IN INDUSTRY
Computational infrastructure for NGS data analysis José Carbonell Caballero Pablo Escobar Computational infrastructure for NGS Cluster definition: A computer cluster is a group of linked computers, working
SUN ORACLE EXADATA STORAGE SERVER
SUN ORACLE EXADATA STORAGE SERVER KEY FEATURES AND BENEFITS FEATURES 12 x 3.5 inch SAS or SATA disks 384 GB of Exadata Smart Flash Cache 2 Intel 2.53 Ghz quad-core processors 24 GB memory Dual InfiniBand
Thematic Unit of Excellence on Computational Materials Science Solid State and Structural Chemistry Unit, Indian Institute of Science
Thematic Unit of Excellence on Computational Materials Science Solid State and Structural Chemistry Unit, Indian Institute of Science Call for Expression of Interest (EOI) for the Supply, Installation
THE SUN STORAGE AND ARCHIVE SOLUTION FOR HPC
THE SUN STORAGE AND ARCHIVE SOLUTION FOR HPC The Right Data, in the Right Place, at the Right Time José Martins Storage Practice Sun Microsystems 1 Agenda Sun s strategy and commitment to the HPC or technical
Parallel Programming Survey
Christian Terboven 02.09.2014 / Aachen, Germany Stand: 26.08.2014 Version 2.3 IT Center der RWTH Aachen University Agenda Overview: Processor Microarchitecture Shared-Memory
HPC and Big Data. EPCC The University of Edinburgh. Adrian Jackson Technical Architect [email protected]
HPC and Big Data EPCC The University of Edinburgh Adrian Jackson Technical Architect [email protected] EPCC Facilities Technology Transfer European Projects HPC Research Visitor Programmes Training
IBM System x family brochure
IBM Systems and Technology System x IBM System x family brochure IBM System x rack and tower servers 2 IBM System x family brochure IBM System x servers Highlights IBM System x and BladeCenter servers
HPC Cluster Decisions and ANSYS Configuration Best Practices. Diana Collier Lead Systems Support Specialist Houston UGM May 2014
HPC Cluster Decisions and ANSYS Configuration Best Practices Diana Collier Lead Systems Support Specialist Houston UGM May 2014 1 Agenda Introduction Lead Systems Support Specialist Cluster Decisions Job
The CNMS Computer Cluster
The CNMS Computer Cluster This page describes the CNMS Computational Cluster, how to access it, and how to use it. Introduction (2014) The latest block of the CNMS Cluster (2010) Previous blocks of the
Recent Advances in HPC for Structural Mechanics Simulations
Recent Advances in HPC for Structural Mechanics Simulations 1 Trends in Engineering Driving Demand for HPC Increase product performance and integrity in less time Consider more design variants Find the
SAS Business Analytics. Base SAS for SAS 9.2
Performance & Scalability of SAS Business Analytics on an NEC Express5800/A1080a (Intel Xeon 7500 series-based Platform) using Red Hat Enterprise Linux 5 SAS Business Analytics Base SAS for SAS 9.2 Red
Scaling Study of LS-DYNA MPP on High Performance Servers
Scaling Study of LS-DYNA MPP on High Performance Servers Youn-Seo Roh Sun Microsystems, Inc. 901 San Antonio Rd, MS MPK24-201 Palo Alto, CA 94303 USA [email protected] 17-25 ABSTRACT With LS-DYNA MPP,
GPU System Architecture. Alan Gray EPCC The University of Edinburgh
GPU System Architecture EPCC The University of Edinburgh Outline Why do we want/need accelerators such as GPUs? GPU-CPU comparison Architectural reasons for GPU performance advantages GPU accelerated systems
ABAQUS High Performance Computing Environment at Nokia
ABAQUS High Performance Computing Environment at Nokia Juha M. Korpela Nokia Corporation Abstract: The new commodity high performance computing (HPC) hardware together with the recent ABAQUS performance
HP ProLiant SL270s Gen8 Server. Evaluation Report
HP ProLiant SL270s Gen8 Server Evaluation Report Thomas Schoenemeyer, Hussein Harake and Daniel Peter Swiss National Supercomputing Centre (CSCS), Lugano Institute of Geophysics, ETH Zürich [email protected]
Leveraging Windows HPC Server for Cluster Computing with Abaqus FEA
Leveraging Windows HPC Server for Cluster Computing with Abaqus FEA This white paper outlines the benefits of using Windows HPC Server as part of a cluster computing solution for performing realistic simulation.
Smarter Cluster Supercomputing from the Supercomputer Experts
Smarter Cluster Supercomputing from the Supercomputer Experts Maximize Your Productivity with Flexible, High-Performance Cray CS400 Cluster Supercomputers In science and business, as soon as one question
Designed for Maximum Accelerator Performance
Designed for Maximum Accelerator Performance A dense, GPU-accelerated cluster supercomputer that delivers up to 329 double-precision GPU teraflops in one rack. This power- and spaceefficient system can
Improving Grid Processing Efficiency through Compute-Data Confluence
Solution Brief GemFire* Symphony* Intel Xeon processor Improving Grid Processing Efficiency through Compute-Data Confluence A benchmark report featuring GemStone Systems, Intel Corporation and Platform
Kashif Iqbal - PhD [email protected]
HPC/HTC vs. Cloud Benchmarking An empirical evalua.on of the performance and cost implica.ons Kashif Iqbal - PhD [email protected] ICHEC, NUI Galway, Ireland With acknowledgment to Michele MicheloDo
