HPC enabling of OpenFOAM® for CFD applications




HPC enabling of OpenFOAM® for CFD applications
Towards the exascale: OpenFOAM perspective
Ivan Spisso, SuperComputing Applications and Innovation Department, CINECA
25-27 March 2015, Casalecchio di Reno, Bologna

1 Actual bottlenecks
2 The engine of OF and its bottleneck
3 Suggestions / Further work

Actual bottlenecks
Missing for a full enabling on Tier-0 architectures:
- Improve the current parallelism, so as to scale from the current ~1,000 procs by at least one order of magnitude (10,000 or 100,000 procs).
- Improve the I/O, which is a bottleneck for big simulations with big data: for example, LES/DNS runs on hundreds of procs that require very frequent saving to disk.
State of the art: a few million cells is now considered a relatively small test case. Cases of this size will not scale usefully beyond 1K cores, and there is not much to be done to improve this. What we are looking at is radical scalability: the real issues are in the scaling of cases of hundreds of millions of cells on 10K+ cores.
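As an illustration of where the I/O pressure comes from, the write frequency and format of a case are set in system/controlDict. The excerpt below is a minimal sketch (the keyword values are typical illustrative choices, not recommendations from the talk) of the entries that most directly determine how often and how much data a large LES/DNS run writes to disk.

    // system/controlDict (excerpt) -- sketch of the I/O-related entries
    writeControl      timeStep;   // write every N time steps (alternatives: runTime, adjustableRunTime)
    writeInterval     200;        // N: larger values reduce the I/O load of long LES/DNS runs
    purgeWrite        0;          // keep all time directories (a positive value keeps only the last N)
    writeFormat       binary;     // binary is far cheaper than ascii for large fields
    writePrecision    6;
    writeCompression  off;
    runTimeModifiable true;       // allows adjusting these entries while the job is running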


The engine of OF...
OpenFOAM is first and foremost a C++ library, used to solve systems of partial differential equations (PDEs) in discretized form.
The engine of OF is its numerical method. To solve the equations governing a continuum, OpenFOAM uses a numerical approach with the following features:
- Segregated, iterative solution: for the system of equations governing the problem of interest, a separate matrix equation is created for each equation, and the equations are solved within an iterative sequence (as opposed to creating one big matrix equation for the entire system).
- Finite volume method: the matrix equations are constructed using the finite volume method applied to arbitrarily shaped cells (any number of faces, any number of edges).
- Co-located variables: the solution variable of each matrix equation is defined at cell centres.
- Equation coupling: the coupling between equations, in particular between pressure and velocity, is performed using adapted versions of well-known algorithms such as PISO and SIMPLE.
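To make the segregated, iterative structure concrete, the following is a minimal sketch of an incompressible PISO-style time loop, loosely modeled on OpenFOAM's icoFoam solver. It is not the shipped source: field creation (createFields.H, assumed to define p, U, phi, nu, pRefCell and pRefValue) is omitted, as are non-orthogonal correctors and flux corrections, and exact calls differ between OpenFOAM versions.

    #include "fvCFD.H"
    #include "pisoControl.H"

    int main(int argc, char *argv[])
    {
        #include "setRootCase.H"
        #include "createTime.H"
        #include "createMesh.H"

        pisoControl piso(mesh);
        #include "createFields.H"   // assumed: defines p, U, phi, nu, pRefCell, pRefValue

        while (runTime.loop())
        {
            // Momentum predictor: one matrix equation for U alone (segregated approach)
            fvVectorMatrix UEqn
            (
                fvm::ddt(U) + fvm::div(phi, U) - fvm::laplacian(nu, U)
            );
            solve(UEqn == -fvc::grad(p));

            // PISO corrector loop: pressure equation coupled back to the velocity
            while (piso.correct())
            {
                volScalarField rAU(1.0/UEqn.A());
                volVectorField HbyA("HbyA", rAU*UEqn.H());
                surfaceScalarField phiHbyA("phiHbyA", fvc::flux(HbyA));

                fvScalarMatrix pEqn
                (
                    fvm::laplacian(rAU, p) == fvc::div(phiHbyA)
                );
                pEqn.setReference(pRefCell, pRefValue);
                pEqn.solve();   // the linear solve where GAMG/PCG and MPI reductions dominate

                phi = phiHbyA - pEqn.flux();
                U = HbyA - rAU*fvc::grad(p);
                U.correctBoundaryConditions();
            }

            runTime.write();
        }

        return 0;
    }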

... and its bottleneck: the scalability of the linear solvers
The linear algebra core libraries are the main communication bottleneck for scalability. The large number of MPI_Allreduce calls stems from an algorithmic constraint and is unavoidable; its cost grows with the number of cores, unless an algorithmic rewrite is proposed. Generally speaking, the fundamental difficulty is the inability to keep all the processors busy when operating on very coarse grids.
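To illustrate where those reductions come from (this generic MPI sketch is an illustration, not OpenFOAM code): every Krylov iteration needs global dot products and norms, each of which is a per-rank partial sum followed by a blocking MPI_Allreduce across all ranks, so the number of reductions scales with the iteration count and their latency with the core count.

    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    // Global dot product as used inside a CG/PCG iteration: each rank sums its
    // local cells, then MPI_Allreduce produces the global value that every rank
    // needs before the iteration can proceed.
    double globalDot(const std::vector<double>& a, const std::vector<double>& b)
    {
        double local = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
        {
            local += a[i]*b[i];
        }

        double global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        return global;   // synchronisation point: all ranks wait here
    }

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);
        std::vector<double> a(1000, 1.0), b(1000, 2.0);   // local partition of a field
        std::printf("global dot product: %g\n", globalDot(a, b));
        MPI_Finalize();
        return 0;
    }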


Suggestions
- Choose the linear system solvers carefully: use the geometric-algebraic multigrid solver (GAMG) for very large problems [1]. The GAMG solver is often the optimal choice, particularly for solving the pressure equation (see the sketch after this list).
- Extrapolate the optimal number of cells per core for your test case on the selected hardware.
[1] W. Briggs, V. Henson, and S. McCormick, A Multigrid Tutorial, Second Edition, Society for Industrial and Applied Mathematics, 2000.
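As a concrete starting point, the pressure solver is selected in system/fvSolution. The entry below is a minimal sketch with typical GAMG settings; the numerical values are illustrative defaults to be tuned per case, not recommendations from the talk.

    // system/fvSolution (excerpt) -- sketch of a GAMG setup for the pressure equation
    solvers
    {
        p
        {
            solver                GAMG;
            tolerance             1e-06;
            relTol                0.01;
            smoother              GaussSeidel;
            nPreSweeps            0;
            nPostSweeps           2;
            cacheAgglomeration    true;
            agglomerator          faceAreaPair;
            nCellsInCoarsestLevel 100;   // key tuning knob: too small leaves processors idle on coarse levels
            mergeLevels           1;
        }
    }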

(Suggested) Further work
Strategies for improving the scalability of the code:
- Algorithmic rewrite to reduce communication: the GAMG solver needs to be changed to agglomerate processors as well as cells as the mesh is coarsened, and to reduce local transfer costs by matching the communication structure of the machine.
- Use the accelerators available today on HPC clusters: GPU enabling, or Intel Xeon Phi (hybridization with OpenMP; a generic sketch is given below).
- Co-design, that is, design closely integrated with systemware development: PGAS (Partitioned Global Address Space), UPC (Unified Parallel C), and the Chapel, X10 and Fortress language developments.
All these strategies require a significant effort and very good OpenFOAM (C++) programmers.
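For the hybridization point, the sketch below is a generic hybrid MPI+OpenMP cell-loop kernel (an illustration, not OpenFOAM code): MPI distributes the mesh across nodes as today, while OpenMP threads share the cells owned by each rank, which is the usual strategy for many-core hardware such as the Xeon Phi.

    #include <mpi.h>
    #include <omp.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        // Request a threading level compatible with OpenMP regions between MPI calls
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // Local partition of the mesh owned by this MPI rank (size is illustrative)
        std::vector<double> cellValue(1000000, 1.0), cellResidual(1000000, 0.0);

        // Threads share the rank-local cell loop; MPI still handles inter-node exchanges
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(cellValue.size()); ++i)
        {
            cellResidual[i] = 0.5*cellValue[i];   // stand-in for a per-cell kernel
        }

        // Global reduction across ranks, as in the residual norms of the linear solvers
        double local = 0.0, global = 0.0;
        for (double r : cellResidual) local += r;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0) std::printf("global residual sum: %g\n", global);

        MPI_Finalize();
        return 0;
    }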