MANUFACTURING WEATHER FORECASTING SIMULATIONS ON HPC INFRASTRUCTURES
1 MANUFACTURING WEATHER FORECASTING SIMULATIONS ON HPC INFRASTRUCTURES
Ladislav Hluchý, V. Šipková, M. Dobrucký, J. Bartok, B.M. Nguyen
Institute of Informatics, Slovak Academy of Sciences
ECW Environmental Computing Workshop, eScience 2016
2 PARTNERS
- IISAS: Institute of Informatics, Slovak Academy of Sciences (academic sector)
- MicroStep-MIS: Monitoring and Information Systems (commercial sector)
  - IMS Model Suite: a complex software system for meteorology and crisis management
- This paper presents a part of the work on manufacturing WRF simulations on HPC infrastructures for the IMS Model Suite
3 WRF - WEATHER RESEARCH AND FORECASTING
- Designed for research and operational purposes: numerical weather prediction, atmospheric simulation
- Two dynamic solvers: ARW (Advanced Research WRF) and NMM (Non-hydrostatic Mesoscale Model)
- Flexible and portable code: sequential, or parallel (MPI) with or without multi-threading
- Supports a two-level domain decomposition: the domain is first split into patches for distributed memory; within each patch, multi-threading is then applied for shared memory (sketched below)
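A minimal sketch of the two-level decomposition idea, not taken from the slides: MPI ranks own patches of the horizontal grid, and OpenMP threads work on strips within a patch. The near-square patch-grid heuristic and all names here are illustrative assumptions, not WRF's actual internals.

```python
# Illustrative two-level domain decomposition: patches for MPI ranks,
# tiles (strips) within a patch for OpenMP threads.
import math

def patch_grid(n_ranks):
    """Pick the most square nproc_x x nproc_y factorization of the rank count."""
    best = (1, n_ranks)
    for nx in range(1, math.isqrt(n_ranks) + 1):
        if n_ranks % nx == 0:
            best = (nx, n_ranks // nx)
    return best

def decompose(grid_we, grid_sn, n_ranks, n_threads):
    """Report patch size per MPI rank and tile size per OpenMP thread."""
    nx, ny = patch_grid(n_ranks)
    patch_we = math.ceil(grid_we / nx)           # patch width (west-east)
    patch_sn = math.ceil(grid_sn / ny)           # patch height (south-north)
    tile_sn = math.ceil(patch_sn / n_threads)    # one strip per thread
    print(f"{n_ranks} ranks -> {nx} x {ny} patches, ~{patch_we} x {patch_sn} points each")
    print(f"{n_threads} threads/rank -> tiles of ~{patch_we} x {tile_sn} points")

decompose(grid_we=400, grid_sn=400, n_ranks=32, n_threads=6)
```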
4 OBJECTIVES
- Development of management tools to facilitate the execution of the WRF simulation process on HPC infrastructures: a local HPC cluster and the grid infrastructure (EGI)
- Performance investigation of parallel WRF models, to find the most suitable configuration for a given input scenario for 3D meteorological modelling:
  - MPI versus MPI + OpenMP
  - the number of compute nodes, cores, MPI processes, and OpenMP threads
- The management tools are also used for parameter tuning of the models (for IMS by MicroStep-MIS), which requires tens of evaluations of the parameterized model accuracy; each evaluation of the model parameters requires re-running hundreds of meteorological situations collected over the years and comparing the model output with the observed data (see the sketch below)
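A hedged sketch of that tuning loop, under the assumption that accuracy is scored as a mean error over the archived situations; run_wrf() is a placeholder for a full WRF run, and all names are illustrative.

```python
# Each candidate parameter set is scored by re-running archived
# meteorological situations and comparing forecasts with observations.
from statistics import mean

def run_wrf(params, situation):
    """Placeholder for one WRF run over one archived situation."""
    raise NotImplementedError  # the real tools launch the WRF workflow here

def rmse(forecast, observed):
    return mean((f - o) ** 2 for f, o in zip(forecast, observed)) ** 0.5

def evaluate(params, archive):
    """Mean RMSE of one parameter set over all archived situations."""
    return mean(rmse(run_wrf(params, s), obs) for s, obs in archive.items())

def tune(candidates, archive):
    """Pick the parameter set with the lowest mean error."""
    return min(candidates, key=lambda p: evaluate(p, archive))
```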
5 3D METEOROLOGICAL MODELLING
- Domains - weather modelling: horizontal, vertical and time resolution chosen so that the model can capture local conditions
- Meteorological initial and boundary conditions taken from the global model GFS (Global Forecast System) of the US National Weather Service
- This setting enabled modelling of the Arabian Peninsula weather:
  - the uppermost domain with a resolution of 50 x 50 km
  - the final domain with a resolution of 1.8 km, around Dubai and Abu Dhabi
6 WRF SIMULATION
(Diagram legend: Pi = MPI process, Tj = OpenMP thread)
- A WRF simulation consists of many executable programs of various types and complexity, sequential and parallel, each taking a different number of processor cores for execution
- The WRF workflow is a DAG: (Job 1) WPS preprocessing, (Job 2) WRF modelling, (Job 3) UPP post-processing (a runnable sketch follows)
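A minimal sketch of the three-job chain as a dependency-driven runner; the run-script names are placeholders (assumptions), and a real deployment would submit each job to the cluster scheduler instead of running it locally.

```python
# The WPS -> WRF -> UPP workflow expressed as a tiny DAG and executed in
# dependency order; the commands stand in for the actual run-scripts.
import subprocess

JOBS = {
    "wps": {"cmd": ["./run_wps.sh"], "deps": []},        # Job 1: preprocessing
    "wrf": {"cmd": ["./run_wrf.sh"], "deps": ["wps"]},   # Job 2: modelling
    "upp": {"cmd": ["./run_upp.sh"], "deps": ["wrf"]},   # Job 3: post-processing
}

def run_workflow(jobs):
    done = set()
    while len(done) < len(jobs):
        for name, job in jobs.items():
            if name not in done and all(d in done for d in job["deps"]):
                subprocess.run(job["cmd"], check=True)   # fail fast on error
                done.add(name)

run_workflow(JOBS)
```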
7 WRF WORKFLOW - MORE DETAILS
- Job 1 - WPS preprocessing: conversion of inputs from GRIB to NetCDF format using
  - geogrid.exe (serial/MPI)
  - ungrib.exe (serial)
  - metgrid.exe (serial/MPI)
- Job 2 - WRF modelling: numerical modelling using
  - real.exe: initialization, real-data preprocessor (MPI/MPI+OpenMP)
  - wrf.exe: numerical integration, ARW solver (MPI/MPI+OpenMP)
- Job 3 - UPP post-processing: conversion of outputs from NetCDF to GRIB format using unipost.exe (serial/MPI), in a nested cycle over all hours of the predicted time period
- There is no dependency between processing the data of individual hours, so the job can be structured as a parametric study (PS) where each sub-job handles a section of the time period, as in the sketch below
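A small sketch of that parametric-study split, assuming contiguous hour sections; split_hours() is an illustrative helper, not part of the actual tools.

```python
# Forecast hours are independent in UPP post-processing, so they can be
# divided among independent sub-jobs.
def split_hours(total_hours, n_subjobs):
    """Divide hours 0..total_hours-1 into contiguous sections."""
    size = -(-total_hours // n_subjobs)  # ceiling division
    return [range(i, min(i + size, total_hours))
            for i in range(0, total_hours, size)]

# A 48-hour forecast post-processed by 4 independent sub-jobs:
for section in split_hours(48, 4):
    # each sub-job would invoke unipost.exe once per hour in its section
    print(f"sub-job handles hours {section.start}..{section.stop - 1}")
```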
8 WRF WORKFLOW EXECUTION
- Starts on the UI machine through the invocation of the WRF workflow-manager, supplied with the needed input parameters
- Is realized within the running-environment located in the shared address space, which has the following directory structure:
  - GEOG: geographical data, several geo-tables
  - CFG: configuration files for the input scenario and simulation options
  - PARM: UPP post-processing parameters
  - BIN: run-scripts and executables
  - INPUT_ARCH: input data files
  - OUTPUT_ARCH: output data files
  - WPS_RUN: WPS preprocessing
  - MODEL_RUN: WRF modelling
  - POSTPR_RUN: UPP post-processing
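For illustration, a few lines that materialize this layout; the root path is an assumption.

```python
# Create the running-environment directory structure in shared space.
from pathlib import Path

RUN_ENV = Path("/shared/wrf_run_env")   # assumed location in shared space
SUBDIRS = ["GEOG", "CFG", "PARM", "BIN", "INPUT_ARCH",
           "OUTPUT_ARCH", "WPS_RUN", "MODEL_RUN", "POSTPR_RUN"]

for name in SUBDIRS:
    (RUN_ENV / name).mkdir(parents=True, exist_ok=True)
```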
9 IISAS HPC CLUSTER
- Hardware configuration:
  - 52x IBM dx360 M3 (2x Intel CPU, 48 GB RAM, 2x 500 GB scratch disk)
  - 2x IBM dx360 M3 (2x Intel CPU, 48 GB RAM, 2x 500 GB scratch disk, NVIDIA Tesla M2070 with 6 GB RAM and 448 CUDA cores)
  - 2x x3650 M3 managing servers (2x Intel CPU, 48 GB RAM, 6x 500 GB disks)
  - 4x x3650 M3 data-managing servers (2x Intel CPU, 48 GB RAM, 2x 500 GB disks, 2x 8 Gbps FC)
  - 1x x3550 M4 server (1x Intel CPU, 8 GB RAM, 2x 500 GB disks)
  - InfiniBand 2x 40 Gbps (in nodes), 2x DS3512 with 72 TB of disks
- Software installation:
  - WRF package (WRF, WPS, terrestrial datasets), UPP version 3.0
  - libraries: NetCDF 4, JasPer 1.7
  - GNU compilers (gfortran, gcc, OpenMP library), Open MPI
10 PERFORMANCE RESULTS - WRF MODEL: SEQUENTIAL ON THE LOCAL CLUSTER
- Prediction time period:
  - 3 hours in this paper, for scaling WRF simulations for testing purposes with the given HW/SW configurations
  - 48 hours in real simulations (MicroStep-MIS) modelling the Arabian Peninsula weather
- The sequential times below show the need for HPC to accelerate the simulations

  Step                          Nodes   Cores/node   Execution time (hh:mm:ss)
  WPS                           1       1            00:39:54
  WRF                           1       1            15:57:53
  UPP (2 jobs)                  1       1            00:03:48
  Complete simulation process                        16:41:35
11 PERFORMANCE RESULTS - WRF MODEL: MPI ON THE LOCAL CLUSTER, FIXED NUMBER OF CORES PER NODE

  Step                                 Nodes   Cores/node   MPI processes   Execution time (hh:mm:ss)
  WPS                                                                       00:04:22
  WRF                                                                         :36:33
  WRF                                                                         :27:01
  WRF                                                                         :49:03
  WRF                                                                         :30:13
  WRF                                                                         :20:47
  WRF                                                                       00:13:57
  UPP (2 jobs)                                                              00:01:44
  Complete simulation process (best)                                        00:20:03
12 PERFORMANCE RESULTS - WRF MODEL: MPI + OPENMP ON THE LOCAL CLUSTER, FIXED NUMBER OF MPI PROCESSES

  Step   Nodes x cores   MPI processes (per node)   OpenMP threads   Execution time (hh:mm:ss)
  WRF    8x12            32 (4)                     2                00:31:31
  WRF    16x12           32 (2)                     4                00:20:52
  WRF    16x12           32 (2)                     6                00:17:21
  WRF    32x12           32 (1)                     8                00:15:47
  WRF    32x12           32 (1)                     10               00:15:15
  WRF    32x12           32 (1)                     12               00:15:20
13 PERFORMANCE RESULTS - WRF MODEL: MPI + OPENMP ON THE LOCAL CLUSTER, FIXED NUMBER OF OPENMP THREADS

  Step   Nodes x cores   MPI processes (per node)   OpenMP threads   Execution time (hh:mm:ss)
  WRF    8x12            32 (4)                     3                00:24:44
  WRF    12x12           48 (4)                     3                00:19:28
  WRF    16x12           64 (4)                     3                00:17:27
  WRF    24x12           96 (4)                     3                00:13:49
  WRF    32x12           128 (4)                    3                00:12:24
  WRF    16x12           32 (2)                     6                00:17:21
  WRF    24x12           48 (2)                     6                00:14:31
  WRF    32x12           64 (2)                     6                00:12:09
  WRF    40x12           80 (2)                     6                00:12:01
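As a usage illustration, a hedged sketch of launching one of these hybrid rows (32 MPI processes, 2 per node, 6 OpenMP threads each); the ppr mapping flag is standard Open MPI usage, but the actual launcher options in the site's run-scripts are an assumption.

```python
# Launch wrf.exe as a hybrid MPI + OpenMP job mirroring one table row:
# 32 MPI ranks placed 2 per node, each rank running 6 OpenMP threads.
import os
import subprocess

env = dict(os.environ, OMP_NUM_THREADS="6")          # threads per MPI rank
subprocess.run(
    ["mpirun", "-np", "32", "--map-by", "ppr:2:node", "./wrf.exe"],
    env=env,
    check=True,
)
```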
14 WRF MODEL - MPI ON THE GRID INFRASTRUCTURE EGI
- WRF running-environment: in its initial state, all executables and input files are stored in a grid Storage Element (SE), from which they are downloaded
- The geographical datasets (174 GB) are located in the cluster shared address space; they do not participate in the data transfer
- The grid WRF workflow is designed as one grid job encapsulating all tasks: WPS + WRF + UPP
- MPI programs are executed using MPI-START
- The output of the simulation is uploaded to the Storage Element (SE)
- Time overhead caused by data transfers between the CE and the SE: 2 minutes
(Diagram legend: Grid UI = Grid User Interface, WMS = Workload Management System, VO = Virtual Organization, CE = Computing Element, GG = Grid Gate, LRMS = Local Resource Management System, WN = Working Node, SE = Storage Element, PBS = Portable Batch System)
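A hedged sketch of that single-grid-job structure: stage in from the SE, run the whole chain, stage the outputs back. The lcg-cp transfer command, the SE URL, and the archive names are assumptions standing in for the site's data-movement tooling, and the MPI steps would in reality be launched through MPI-START.

```python
# One grid job: stage in, run WPS + WRF + UPP, stage out.
import subprocess

SE = "srm://se.example.org/dpm/example.org/home/vo/wrf"   # assumed SE path

def transfer(src, dst):
    subprocess.run(["lcg-cp", src, dst], check=True)       # assumed tool

transfer(f"{SE}/input_arch.tar.gz", "file:input_arch.tar.gz")    # stage in
for step in (["./run_wps.sh"], ["./run_wrf.sh"], ["./run_upp.sh"]):
    subprocess.run(step, check=True)                             # full chain
transfer("file:output_arch.tar.gz", f"{SE}/output_arch.tar.gz")  # stage out
```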
15 CONCLUSION
- The management tools are built and fulfill their designed purposes: locating the optimal configuration for a given scenario, and IMS model parameter tuning (MicroStep-MIS)
- The hybrid programming model (MPI + OpenMP) seems a natural fit for the way most clusters are built today
- The grid overhead is caused mainly by the transfer of big files between the SE and the CE
16 FUTURE DIRECTIONS
- Grid: at the moment, in the European Grid Infrastructure (EGI), only a few grid sites and Virtual Organizations (VOs) support MPI and OpenMP applications
- Cloud: there is a performance overhead associated with virtualization of the interconnection network; WRF is reported to run on a virtualized InfiniBand interconnect with only 15% overhead, which makes fully virtualized HPC clusters a viable solution
- Accelerators: parts of WRF have been ported to NVIDIA GPUs and Intel Xeon Phi with promising results
17 THANK YOU FOR YOUR ATTENTION
MANUFACTURING WEATHER FORECASTING SIMULATIONS ON HPC INFRASTRUCTURES
Institute of Informatics, Slovak Academy of Sciences
ECW Environmental Computing Workshop, eScience 2016