Wind-Tunnel Simulation using TAU on a PC-Cluster: Resources and Performance
Stefan Melber-Wilkending / DLR Braunschweig
1 Wind-Tunnel Simulation using TAU on a PC-Cluster: Resources and Performance
Stefan Melber-Wilkending / DLR Braunschweig
2 Wind-Tunnel Simulation using TAU on a PC-Cluster: Resources and Performance
Outline:
- New Linux PC-cluster at Braunschweig (DLR-AS)
- Performance measurements of TAU on PC-clusters: platforms, results
- Example of an application on a PC-cluster: wind-tunnel simulation
  - Wind-tunnel boundary condition
  - Example: simulation of the DLR ALVAST high-lift configuration in the low-speed wind-tunnel DNW-NWB
3 New Linux PC-Cluster at DLR-AS: Technical Data - General
New Linux PC-cluster at DLR-AS / Braunschweig:
- For mid-sized CFD problems
- Production usage for research and contract work
- Size: 276 Opteron 2.6 GHz CPUs
- Hardware installation and testing: 09/2005
- Open for user access: 10/2005
4 New Linux PC-Cluster at DLR-AS: Technical Data - Nodes
138 Dual-Opteron (AMD) nodes (V20z, SUN):
- CPU clock speed: 2.6 GHz
- 4 GByte DDR1/400 memory
- 2 x 73 GB Ultra320 SCSI hard disks
- Management processor (remote power reset, monitoring, error analysis, ...)
- Infiniband HPC interconnect
- 100 MBit Ethernet interconnect
- Size: 1 HU
- SuSE Linux 9.3 Professional
5 New Linux PC-Cluster at DLR-AS: Technical Data - Frontends
2 frontends (V40z, SUN):
- 4 x Opteron 2.2 GHz (AMD)
- 8 GByte DDR1/333 memory
- 2 x 73 GB Ultra320 SCSI hard disks
- 100 MBit Ethernet interconnect
- Size: 3 HU
- SuSE Linux 9.3 Professional
- RAID system, 10 TByte
- Infiniband switch, 144 ports (Voltaire)
- PBS Pro queuing system / MAUI scheduler
6 New Linux PC-Cluster at DLR-AS: Technical Data - Setup
7 New Linux PC-Cluster at DLR-AS: Performance - Compared Systems
- NEC cluster (DLR-AS): 32 nodes / 64 CPUs, Intel Xeon 3.06 GHz, 2 GByte RAM per node, Myrinet 2000 interconnect
- Cray cluster (HWW): 128 nodes / 256 CPUs, AMD Opteron 2.0 GHz, 4 GByte RAM per node, Myrinet 2000 interconnect
- SUN cluster (DLR-AT): 192 nodes / 384 CPUs, AMD Opteron 2.4 GHz, 4 GByte RAM per node, Infiniband (Voltaire) interconnect
- Cray XD1 cluster (Cray): 36 nodes / 72 CPUs, AMD Opteron 2.2 GHz, 4 GByte RAM per node, RapidArray interconnect (direct connection between the network and the HyperTransport channel on the CPU)
- Cray XD1 cluster (Cray): 72 nodes / 144 CPUs, AMD Opteron 2.4 GHz, 8 GByte RAM per node, RapidArray interconnect
8 New Linux PC-Cluster at DLR-AS: Performance - Setup
- All clusters running under the Linux operating system
- Compiler: GNU CC
- TAU-Code with typical settings for complex configurations:
  - Central discretization
  - Implicit time integration (LU-SGS)
  - CFL number: 5
  - Multigrid: 3v
  - Turbulence model: Menter k-ω SST
  - Low Mach number preconditioning
  - Cache optimization
- Case: glider with laminar-turbulent transition
  - Free-stream conditions: Ma = 0.078, Re = 1.1e6
  - Grid: 10 million points, 30 layers
9 New Linux PC-Cluster at DLR-AS: Performance - Test Results
CPU time for 50 cycles [s] for different numbers of CPUs.
[Table: CPU time vs. CPU count for NEC Xeon 3.06 GHz (AS), Cray Opteron 2.0 GHz, Cray Opteron 2.2 GHz, Cray Opteron 2.4 GHz, SUN Opteron 2.4 GHz (AT), and SUN Opteron 2.6 GHz (AS); the numbers did not survive transcription.]
10 New Linux PC-Cluster at DLR-AS: Performance - Test Results
Relative speedup compared to the Cray Opteron cluster at HWW.
[Table: relative speedup vs. CPU count for the same six platforms; the numbers did not survive transcription.]
11 New Linux PC-Cluster at DLR-AS: Performance - Test Results
- The speed of TAU on Opteron CPUs is a linear function of the CPU clock speed
- Compared to the Cray Opteron 2.0 GHz, the new cluster is about 1.5 times faster
- Compared to the NEC Xeon 3.06 GHz (standard cluster at AS-BS), the new cluster is about 2.1 times faster
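The linearity claim can be checked by comparing each platform's measured speedup with its clock ratio. A minimal sketch of that bookkeeping follows; the timing values are hypothetical placeholders (the measured table above did not survive transcription), only the method is illustrated:

```python
# Check the clock-speed scaling of TAU on the Opteron platforms:
# if speed were purely a linear function of clock speed, the measured
# speedup would match the clock ratio; deviations reflect other
# platform differences (interconnect, memory).
# CPU times for 50 cycles [s] are HYPOTHETICAL placeholders.
timings = {
    "Cray Opteron 2.0 GHz": (2.0, 1500.0),
    "Cray Opteron 2.2 GHz": (2.2, 1340.0),
    "Cray Opteron 2.4 GHz": (2.4, 1210.0),
    "SUN Opteron 2.6 GHz":  (2.6, 1000.0),
}

base_clock, base_time = timings["Cray Opteron 2.0 GHz"]
for name, (clock, time) in timings.items():
    measured = base_time / time       # speedup relative to the 2.0 GHz Cray
    clock_ratio = clock / base_clock  # prediction from clock speed alone
    print(f"{name}: measured {measured:.2f}x, clock ratio {clock_ratio:.2f}x")
```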
12 New Linux PC-Cluster at DLR-AS: Performance - Test Results
- Speedup compared to 8 CPUs (memory restrictions of the test case)
- Nearly linear scalability of the TAU-Code up to 60 CPUs
- The tested interconnects (Myrinet, Infiniband, RapidArray) have enough reserve for the TAU parallelisation
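For the scalability numbers, speedup and parallel efficiency are computed against the 8-CPU baseline run, since the 10-million-point case does not fit on fewer CPUs. A small sketch, again with hypothetical stand-in timings:

```python
# Speedup and parallel efficiency relative to the 8-CPU baseline.
# Wall times [s] for 50 cycles are HYPOTHETICAL placeholders.
base_cpus, base_time = 8, 5200.0
runs = {16: 2680.0, 32: 1400.0, 64: 760.0}   # CPUs -> time [s]

for cpus, time in sorted(runs.items()):
    speedup = base_time / time
    ideal = cpus / base_cpus                 # perfect linear scaling
    print(f"{cpus:3d} CPUs: speedup {speedup:5.2f} (ideal {ideal:4.1f}), "
          f"efficiency {speedup / ideal:6.1%}")
```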
13 Wind-Tunnel Simulation using TAU-Code: General
- Simulation of a wind-tunnel including test section and nozzle
- Background:
  - Avoid uncertainties of wind-tunnel corrections
  - Uncorrected measurements directly comparable to CFD
  - Validation of wind-tunnel corrections
  - Extrapolation of wind-tunnel results to free flight using CFD
- DLR project ForMEx (Fortschrittliche Methoden zur Extrapolation von Windkanalergebnissen auf den Freiflug: advanced methods for the extrapolation of wind-tunnel results to free flight)
- Problem: numerical simulation of the wind-tunnel including the model requires big grids (about 20 million points), so HPC resources are needed: the new PC-cluster at AS-BS
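A back-of-the-envelope memory estimate shows why such grids force the move to a cluster. The per-point cost below is an assumed rule of thumb for a hybrid RANS solver, not a measured TAU figure:

```python
# Rough memory estimate for a 20-million-point wind-tunnel grid.
# bytes_per_point is an ASSUMED rule of thumb, not a measured TAU figure.
points = 20e6
bytes_per_point = 2048          # assumed RANS footprint per grid point
node_ram = 4 * 2**30            # 4 GByte per dual-Opteron node

total = points * bytes_per_point
print(f"~{total / 2**30:.0f} GByte -> at least "
      f"{total / node_ram:.0f} nodes just to hold the flow solution")
```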
14 Wind-Tunnel Simulation using TAU-Code: Wind-Tunnel Boundary Condition
- Idea: usage and extension of the engine boundary condition
- Wind-tunnel inlet: total pressure and total temperature are given
- Regulation of the flow speed in the wind-tunnel:
  - Imaginary probe in the numerical test section (same position as in the experiment)
  - Comparison with the given Mach number
  - Input for the static pressure regulation on the tunnel outlet
- Applicable for 0 < Ma < 1
[Diagram: numerical wind-tunnel (TAU-Code) with boundary conditions, imaginary probe in the test section, and regulated static pressure on the outlet]
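The regulation described above amounts to a feedback loop: after a batch of solver cycles, the Mach number sampled at the imaginary probe is compared with the target, and the outlet static pressure is nudged accordingly. A minimal sketch of such a controller follows; the proportional update rule, the function names, and the toy flow model are all assumptions for illustration, not the actual TAU implementation:

```python
def regulate_tunnel_speed(advance_solution, probe_mach, target_mach,
                          p_out, gain=0.5, tol=1e-4, max_iter=200):
    """Adjust the outlet static pressure until the Mach number sampled
    at the imaginary probe matches the target (for 0 < Ma < 1)."""
    for _ in range(max_iter):
        advance_solution(p_out)     # run some solver cycles at this back pressure
        error = probe_mach() - target_mach
        if abs(error) < tol:
            break
        # Probe Mach too high -> raise the back pressure to slow the flow.
        p_out *= 1.0 + gain * error
    return p_out

# Toy stand-in for the flow solver: tunnel Mach number falls as the
# outlet pressure rises (NOT the TAU solver, just a demonstration).
state = {"ma": 0.0}
advance = lambda p: state.update(ma=0.3 * 101325.0 / p)
p_final = regulate_tunnel_speed(advance, lambda: state["ma"],
                                target_mach=0.18, p_out=101325.0)
print(f"outlet pressure {p_final:.0f} Pa gives Ma = {state['ma']:.4f}")
```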
15 Wind-Tunnel Simulation using TAU-Code: Validation
- Measurements in the empty low-speed wind-tunnel DNW-NWB
- Database for validation of numerical results
- Measurements: boundary layer profiles, static pressure on the tunnel outlet
16 Wind-Tunnel Simulation using TAU-Code: Preliminary Results DNW-NWB / ALVAST
- DLR-ALVAST half-model in high-lift configuration in DNW-NWB
- DLR-ALVAST: analogous to the Airbus A320
- Half-model mounted on a peniche
- Grids: hybrid unstructured, Centaur grid generator, 20 million points, full Navier-Stokes
- Chimera technique: rotation of the model without new grid generation
17 Wind-Tunnel Simulation using TAU-Code: Preliminary Results DNW-NWB / ALVAST
- Simulation of complete lift polars including maximum lift
- Geometry variations: wing-root geometry (e.g. slat horn; 16 configurations)
- Comparison of the wind-tunnel simulation against free flight: wind-tunnel corrections
- Influence of the peniche height
18 Wind-Tunnel Simulation using TAU-Code: Preliminary Results DNW-NWB / ALVAST
[Figures: ALVAST in TAU and in the F11 wind-tunnel; horse-shoe vortex around the peniche]
19 Wind-Tunnel Simulation using TAU-Code: Preliminary Results DNW-NWB / ALVAST
20 Conclusions
- TAU tested on PC Linux clusters: good scalability and performance
- New cluster at AS-BS available for production: 10/2005
- Implementation of a wind-tunnel boundary condition in TAU: validation with empty wind-tunnel measurements
- First results of the simulation of the ALVAST high-lift configuration in DNW-NWB compared to the experiment
- Further work: investigation of half-model influence, variation of geometry, ...
21 Special thanks for testing support and debugging of the TAU parallelisation:
- W. Hafemann, C. Simmendinger (T-Systems)
- N. Gal, Y. Shahar (Voltaire)
- J. Redmer, T. Warschko (Linux NetWorx)
- Axel Köhler (SUN)
- Institute of Propulsion Technology (DLR-AT)
- R. Dwight, T. Alrutz (DLR-AS)
- M. Wierse (Cray)