walberla: A software framework for CFD applications on Compute Cores
J. Götz (LSS Erlangen, [email protected]), K. Iglberger, S. Donath, C. Feichtinger, U. Rüde
Lehrstuhl für Informatik 10 (Systemsimulation), www10.informatik.uni-erlangen.de
Multiscale Fluid Dynamics with the Lattice Boltzmann Method, 15 February 2011, Lorentz Center Leiden
Overview
- Motivation: Why another CFD package? Why parallel programming?
- The Software: walberla, a software framework for CFD
- Fluid-Structure Interaction with Moving Rigid Objects
- Rigid Body Dynamics for Granular Media
- Free Surface Flow Simulations
- GPU Computing
- Conclusions
Motivation

Why do we need another CFD program?
In recent years many PhD students at our chair wrote nice programs for different CFD applications, but:
- Programming and testing basic functionality takes a lot of time
- Parallelizing takes even more time
- When a PhD student leaves the chair, the program usually is not used any more, since nobody else knows how to use it
Why Parallel Programming?
- The latest standard processors are multicore processors: the free lunch is over
- To exploit multicore performance, parallel algorithms are essential
- CPUs will have 2, 4, 8, 16, ..., 128, ... cores
- We want to simulate problems that are not possible on standard computers
The Software

walberla
- Created for desktop PCs and supercomputers
- Supports multi-core PCs and GPUs
- Modular software concept
- Supports various applications: blood flow in aneurysms; moving particles and agglomerates; free surfaces to simulate foams, fuel cells, and much more; charged colloids
Fluid-Structure Interaction with Moving Rigid Objects
Why Simulate Fluid-Structure Interaction?
- Transport of solid particles is crucial for understanding physical phenomena and for industrial processes
- But: a fully resolved simulation of the obstacles is computationally expensive
- Up to now only a moderate number of obstacles could be simulated
Fluid-Structure Interaction
Rigid Body Dynamics
- Newton's laws of motion, including rotations
- Contact detection in each time step
- Collisions modelled by a coefficient of restitution: forces in the normal direction
- Friction laws: forces in the tangential direction
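As a rough illustration of the steps listed above, the following C++ sketch performs one such time step for spheres: brute-force contact detection, an impulse response built from a coefficient of restitution and Coulomb friction, and explicit integration of Newton's equations. All names and the simple impulse model are assumptions for illustration; this is not the walberla/pe rigid body engine, and rotations are omitted for brevity.

    // One rigid-body time step for spheres: contact detection, restitution and
    // friction impulses, explicit integration. Illustrative sketch only; not the
    // walberla/pe engine, and rotations are omitted for brevity.
    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    using Vec3 = std::array<double, 3>;

    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0] - b[0], a[1] - b[1], a[2] - b[2]}; }
    static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    struct Sphere {
        Vec3 x;     // position
        Vec3 v;     // linear velocity
        double r;   // radius
        double m;   // mass
    };

    void timeStep(std::vector<Sphere>& bodies, double dt,
                  double restitution, double friction, const Vec3& gravity)
    {
        // 1) Contact detection: brute force over all sphere pairs.
        for (std::size_t i = 0; i < bodies.size(); ++i) {
            for (std::size_t j = i + 1; j < bodies.size(); ++j) {
                Sphere& a = bodies[i];
                Sphere& b = bodies[j];
                Vec3 d = sub(b.x, a.x);
                double dist = std::sqrt(dot(d, d));
                if (dist >= a.r + b.r || dist == 0.0) continue;  // no contact

                Vec3 n = {d[0]/dist, d[1]/dist, d[2]/dist};      // contact normal a -> b
                Vec3 vrel = sub(b.v, a.v);
                double vn = dot(vrel, n);
                if (vn >= 0.0) continue;                         // already separating

                // 2) Normal impulse from the coefficient of restitution.
                double meff = 1.0 / (1.0/a.m + 1.0/b.m);
                double jn = -(1.0 + restitution) * vn * meff;

                // 3) Tangential impulse, capped by the Coulomb friction law.
                Vec3 vt = {vrel[0] - vn*n[0], vrel[1] - vn*n[1], vrel[2] - vn*n[2]};
                double vtNorm = std::sqrt(dot(vt, vt));
                double jt = std::min(friction * jn, meff * vtNorm);

                for (int k = 0; k < 3; ++k) {
                    double tk = (vtNorm > 0.0) ? vt[k] / vtNorm : 0.0;
                    a.v[k] += (-jn * n[k] + jt * tk) / a.m;
                    b.v[k] += ( jn * n[k] - jt * tk) / b.m;
                }
            }
        }
        // 4) Integrate Newton's equations of motion.
        for (Sphere& s : bodies) {
            for (int k = 0; k < 3; ++k) {
                s.v[k] += gravity[k] * dt;
                s.x[k] += s.v[k] * dt;
            }
        }
    }

A production engine would replace the O(N^2) pair loop with a spatial data structure for contact detection and add the angular momentum update.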
Hourglass Simulation
spherical particles, 256 CPUs, time steps, runtime: 48 h (including data output)
Mapping Moving Obstacles into the LBM Fluid Grid: An Example
Mapping Moving Obstacles into the LBM Fluid Grid: An Example (2)
- Cells whose state changes from particle to fluid
- Cells whose state changes from fluid to particle
- PDFs acting as force: momentum calculation
The Algorithm
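To make the mapping and momentum-exchange idea above concrete, here is a small C++ sketch under simplifying assumptions (a single sphere, a cell-center inside test, a generic lattice direction): cells newly covered by the particle switch from fluid to particle, uncovered cells switch back to fluid, and each bounced-back PDF pair transfers momentum to the particle. Data layout and names are illustrative, not walberla internals.

    // Mapping a moving sphere into the LBM flag field and transferring momentum
    // back to it. Data layout, names and the cell-center inside test are
    // assumptions for illustration, not walberla internals.
    #include <array>
    #include <vector>

    enum class Flag { Fluid, Particle };

    struct Grid {
        int nx, ny, nz;
        std::vector<Flag> flag;  // one flag per lattice cell
        int idx(int x, int y, int z) const { return (z * ny + y) * nx + x; }
    };

    struct Sphere { double cx, cy, cz, r; };

    bool covers(const Sphere& s, int x, int y, int z) {
        // The cell is treated as solid if its center lies inside the sphere.
        double dx = x + 0.5 - s.cx, dy = y + 0.5 - s.cy, dz = z + 0.5 - s.cz;
        return dx*dx + dy*dy + dz*dz <= s.r * s.r;
    }

    // Remap the sphere after it has moved: newly covered cells change their state
    // from fluid to particle; uncovered cells change back from particle to fluid
    // (their PDFs would be reinitialized, e.g. to equilibrium at the particle velocity).
    void remap(Grid& g, const Sphere& s) {
        for (int z = 0; z < g.nz; ++z)
            for (int y = 0; y < g.ny; ++y)
                for (int x = 0; x < g.nx; ++x) {
                    Flag& f = g.flag[g.idx(x, y, z)];
                    bool in = covers(s, x, y, z);
                    if (in && f == Flag::Fluid)           f = Flag::Particle;  // fluid -> particle
                    else if (!in && f == Flag::Particle)  f = Flag::Fluid;     // particle -> fluid
                }
    }

    // Simplified momentum exchange for one fluid/particle boundary link with
    // lattice direction c: the bounced-back PDF pair acts as a force on the
    // particle, transferring (f_out + f_in) * c per time step.
    std::array<double, 3> linkForce(double f_out, double f_in, const std::array<int, 3>& c) {
        double p = f_out + f_in;
        return {p * c[0], p * c[1], p * c[2]};
    }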
Virtual Fluidized Bed
- 512 processors
- Simulation domain size: 180x198x360 LBM cells
- 900 capsules and 1008 spheres = 1908 objects
- Number of time steps: 252,000
- Run time: 7 h 12 min
Simulation of a Segregation Process
Segregation simulation of objects. Density values of 0.8 kg/dm³ and 1.2 kg/dm³ are used for the objects in water.
Weak Scaling
- Jugene Blue Gene/P, Jülich Supercomputer Center
- Plot: parallel efficiency over the number of cores, for 40x40x40 and 80x80x80 lattice cells per core
- Scaling from 64 to cores
- Densely packed particles: lattice cells, rigid spherical objects
- Largest simulation to date: 8 trillion (10^12) variables per time step (LBM alone), 50 TByte
Free Surface Flow Simulation for foams, fuel cells, food processing, etc.
Free Surface Flow Simulation
- Example applications: engineering (metal foam simulations), food processing, fuel cells
- Based on LBM: free surfaces, surface tension and wetting model, parallelization with MPI
The interface between liquid and gas
- Volume-of-Fluid-like approach
- Flag field: compute only in fluid cells
- Special free surface conditions in interface cells
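A rough sketch of the Volume-of-Fluid-like bookkeeping described above: gas, interface and liquid flags plus a fill level per interface cell. The flag names and thresholds are assumptions for illustration, not walberla's free-surface module.

    // Free-surface bookkeeping: a flag field restricts the LBM to liquid and
    // interface cells, and interface cells track a Volume-of-Fluid-like fill
    // level. Flag names and thresholds are assumptions, not walberla's module.
    #include <cstddef>
    #include <vector>

    enum class Cell { Gas, Interface, Liquid };

    struct FreeSurfaceField {
        std::vector<Cell> flag;    // cell state
        std::vector<double> fill;  // fill level in [0,1]; meaningful for interface cells

        // LBM collide/stream is skipped in gas cells ("compute only in fluid");
        // interface cells additionally get the special free-surface conditions
        // (PDF reconstruction towards the gas side, surface tension, wetting).
        bool compute(std::size_t i) const { return flag[i] != Cell::Gas; }

        // Apply the mass exchanged with neighbours during one LBM step and
        // convert cells whose fill level leaves [0,1].
        void update(std::size_t i, double massDelta) {
            if (flag[i] != Cell::Interface) return;  // only interface cells change state
            fill[i] += massDelta;
            if (fill[i] >= 1.0)      flag[i] = Cell::Liquid;  // cell has filled up
            else if (fill[i] <= 0.0) flag[i] = Cell::Gas;     // cell has emptied
            // A full implementation also converts neighbours so that a closed
            // interface layer always separates liquid cells from gas cells.
        }
    };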
LBM on Clusters with GPUs
Motivation
- Why should we use GPUs for LBM simulations? GPUs currently offer a very high peak performance; basic LBM performs well on GPUs; programming GPUs has become simpler than it was years ago
- Why should we use heterogeneous simulations? CPUs are available anyway on GPU nodes; do not waste these resources
- Comparison table: NVIDIA Fermi vs. Intel Xeon node, listing GFLOPS (single precision), GFLOPS (double precision), and peak memory bandwidth (GB/s)
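One simple way to realize the heterogeneous idea, sketched here purely as an assumption (this is not walberla's scheduler): statically assign the lattice blocks on a node to the GPU and the CPU cores in proportion to their measured LBM performance, so the CPU resources are not left idle.

    // Static heterogeneous load split on one node: lattice blocks are assigned to
    // the GPU and the CPU cores in proportion to their measured LBM performance.
    // The performance numbers and the proportional rule are assumptions.
    #include <cstddef>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    std::vector<int> splitBlocks(int totalBlocks, const std::vector<double>& perf) {
        double sum = std::accumulate(perf.begin(), perf.end(), 0.0);
        std::vector<int> blocks(perf.size(), 0);
        int assigned = 0;
        for (std::size_t i = 0; i < perf.size(); ++i) {
            blocks[i] = static_cast<int>(totalBlocks * perf[i] / sum);
            assigned += blocks[i];
        }
        blocks[0] += totalBlocks - assigned;  // hand the integer remainder to the first resource
        return blocks;
    }

    int main() {
        // Hypothetical node: one GPU and one multicore CPU, performance in MLUP/s.
        std::vector<double> perf = {500.0, 80.0};
        std::vector<int> blocks = splitBlocks(64, perf);
        std::printf("GPU blocks: %d, CPU blocks: %d\n", blocks[0], blocks[1]);
        return 0;
    }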
Conclusions
- Desktop PCs and notebooks will get 2, 4, 8, 16, ..., 128, ... cores
- Parallel programming is necessary for multicore usage
- walberla supports: multicore systems, supercomputers, accelerators (GPU, Cell), and a variety of different applications
Future Work
OK, the framework is working fine for many applications, but:
- Test cases for validation of particulate flows and free surfaces; any suggestions for moving particles (especially ensembles) are welcome ;-)
- Grid refinement + load balancing
- How to deal with massive parallelization: node crashes, postprocessing, restart mechanisms
- How to maintain the software?
47 Thank you for your attention!
