The PRISM software framework and the OASIS coupler
Sophie Valcke 1, Reinhard Budich 2, Mick Carter 3, Eric Guilyardi 4, Marie-Alice Foujols 5, Michael Lautenschlager 6, René Redler 7, Lois Steenman-Clark 8, Nils Wedi 9

1 European Center for Research and Advanced Training in Scientific Computing (CERFACS), Toulouse, France
2 Max Planck Institute for Meteorology (MPI-M), Hamburg, Germany
3 Met Office, Exeter, United Kingdom
4 Centre national de la recherche scientifique (CNRS), Paris, France
5 Institut Pierre-Simon Laplace (IPSL), Paris, France
6 Model & Data Group (MPI-M&D), Hamburg, Germany
7 C&C Research Laboratories, NEC Europe Ltd. (NEC-CCRLE), Sankt Augustin, Germany
8 Centre for Global Atmospheric Modelling (CGAM), Reading, United Kingdom
9 European Centre for Medium-Range Weather Forecasts (ECMWF), Reading, United Kingdom

Abstract

The increasing complexity of Earth system models (ESMs) and computing facilities puts a heavy technical burden on the research teams active in climate modelling. PRISM provides the Earth System Modelling community with a forum to promote the shared development, maintenance and support of standards and software tools used to assemble, run, and analyse ESMs based on state-of-the-art component models (ocean, atmosphere, land surface, etc.) developed in the different climate research centres in Europe and elsewhere. PRISM is organised as a distributed network of experts who contribute to five "PRISM Areas of Expertise" (PAEs): 1) Code coupling and I/O, 2) Integration and modelling environments, 3) Data processing, visualisation and management, 4) Metadata, and 5) Computing. For example, the PAE Code coupling and I/O develops and supports the OASIS coupler, software that allows synchronized exchanges of coupling information between the numerical codes representing different components of the climate system.
OASIS successfully demonstrates shared software, representing about 25 person-years of mutual development and fulfilling the coupling needs of about 15 climate research groups around the world.

The PRISM concept, goals, and organization

PRISM was initially started as a project under the European Union's Framework Programme 5 (FP5), and its long-term support is now ensured by the multi-institute funding of 7 partners (CERFACS, France; NEC-CCRLE, Germany; CGAM, UK; CNRS, France; MPI-M&D, Germany; Met Office, UK; and ECMWF) and 9 associate partners (CSC, Finland; IPSL, France; Météo-France, France; MPI-M, Germany; SMHI, Sweden; and the computer manufacturers CRAY, NEC-HPCE, SGI, and SUN) contributing IT and Earth science experts to the PRISM Support Initiative (PSI), currently for a total of about 8 person-years per year. The PRISM concept, initially a Euroclivar recommendation, is to increase what Earth system modellers have in common today (compilers, message passing libraries, algebra libraries, etc.) and to share also the development, maintenance and support of a wider set of Earth System Modelling software tools and standards. This should reduce the technical development efforts of each individual
research team, facilitate the assembling, running, monitoring, and post-processing of ESMs based on state-of-the-art component models developed in the different climate research centres in Europe and elsewhere, and therefore promote the key scientific diversity of the climate modelling community. As demonstrated in other fields, sharing software tools is also a powerful incentive for increased scientific collaboration. It also stimulates computer manufacturers to contribute, thereby increasing tool portability and the optimization of the next generation of platforms for ESM needs, and also facilitating computer manufacturers' procurement and benchmarking activities.

The extensive use of the OASIS coupler illustrates the benefits of a successful shared software infrastructure. In 1991, CERFACS was commissioned to develop software for coupling different geophysical component models developed independently by different research groups. The OASIS development team strongly focussed on efficient user support and on the constant integration of the developments fed back by the users. This interaction snowballed and resulted in a constantly growing community. Today, OASIS represents about 25 person-years (py) of mutual development and fulfils the coupling needs of about 15 climate research groups around the world. The effort invested therefore represents, to first order, 25 py / 15 groups = 1.7 py/group, which is certainly much less than the effort that would have been required by each group to develop its own coupler. PRISM represents the first major collective effort, at the European level, to develop ESM supporting software in a shared and coherent way, as was recognised by the Joint Scientific Committee (JSC) and the Modelling Panel of the World Climate Research Programme (WCRP), which endorsed it as a "key European infrastructure project".
It is analogous to the ESMF project in the United States. PRISM is led by the PRISM Steering Board (one member per partner), which reviews each year a work plan proposed by the PRISM Core Group, composed of the PSI Coordinator(s), the leaders of the PRISM Areas of Expertise (see the next paragraph), and the chair of the PRISM User Group. The PRISM User Group is composed of all climate modelling groups using the PRISM software tools; given the dissemination of the OASIS coupler, the PRISM User Group is already a large international group.

PRISM Areas of Expertise

PRISM is organised around five PRISM Areas of Expertise (PAEs) with the following remits:
- Promote and, if needed, develop software tools for Earth System Modelling. A PRISM tool must be portable, usable independently and interoperable with the other PRISM tools, and freely available for research. There should be documented interest from the community to use the tool, and the tool developers must be ready to provide user support.
- Encourage and organise a related network of experts, including technology watch.
- Promote and participate in the definition of community standards where needed.
- Coordinate with the other PRISM Areas of Expertise and related international activities.

1. PAE Code coupling and I/O

The scope of the PAE "Code coupling and I/O" is to: develop, maintain, and support tools for coupling climate modelling component codes; ensure a constant technology watch on coupling tools developed outside PRISM; and keep strong relations with the different projects involving code coupling in climate modelling internationally. The current objectives are to maintain and support the OASIS3 coupler and to continue the development of the OASIS4 coupler (see below), but also, through the organisation of workshops, to help the community understand the different technical approaches used in code coupling, for example in the PALM coupler (Buis et al.
2006), in the UK Met Office FLUME project (Ford and Riley 2003), in the US ESMF project (Killeen et al. 2006), and in the Bespoke Framework Generator (BFG) from the University of Manchester (Ford et al. 2006) used in the GENIE project.
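All the coupling approaches mentioned above share the same basic pattern: independently developed component models exchange fields through an intermediary that regrids them between grids. A minimal, hypothetical sketch of this pattern (the ToyCoupler class and all names below are invented for illustration and belong to no real coupler API):

```python
# Toy illustration of the coupling pattern: one component "puts" a field,
# the coupler regrids it, another component "gets" it on its own grid.
# Everything here is hypothetical; no real PRISM/OASIS interface is shown.

class ToyCoupler:
    def __init__(self):
        self.fields = {}

    def put(self, name, values):
        # source component deposits its field with the coupler
        self.fields[name] = values

    def get(self, name, target_size):
        # target component receives the field on its own (here 1D) grid;
        # "regridding" is a trivial nearest-index mapping for brevity
        src = self.fields[name]
        ratio = len(src) / target_size
        return [src[int(i * ratio)] for i in range(target_size)]

coupler = ToyCoupler()
sst = [20.0, 21.0, 22.0, 23.0]        # ocean surface temperature, 4 points
coupler.put("sst", sst)               # ocean side: Put action
sst_on_atm = coupler.get("sst", 8)    # atmosphere side: Get on a finer grid
```

Real couplers differ precisely in how the regridding, parallelism, and configuration of such exchanges are handled, which is what the workshops mentioned above compare.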
2. PAE Integration & modelling environments

This PAE targets the following environments:
- source version control for software development (including model development);
- code extraction and compilation;
- job configuration (how to set up and define a coupled integration);
- job running (how to control the execution of a coupled integration);
- integration with archive systems.

For source version control, PRISM promotes the use of Subversion (Pilato et al. 2004) and recently moved from CVS to Subversion for its own software distribution server, now located at DKRZ in Hamburg. Controlling the creation of executables and providing a suitable run-time environment were seen as key integration activities within the EU FP5 PRISM project, resulting in the development of the PRISM Standard Compiling and Running Environments (SCE & SRE). These tools are built on the concept that software from multiple sources can use a single framework as long as the models conform to a set of simple standards. Different groups (ECMWF, IPSL, CERFACS) have also shown strong interest in the UK Met Office Flexible Configuration Management (FCM) tool for version control and/or compilation management. A further review will be conducted in those groups, and this tool, together with Subversion, may be considered as a replacement for CVS and as a way to extend the simplicity of 'make' in the current SCE. PrepIFS is a flexible user interface framework provided by ECMWF that allows tailored graphical user interfaces to be built for the configuration of models and other software. It is integrated with SMS (Supervisor Monitor Scheduler) for the management of networks of jobs across a number of platforms, and both products are developed using Web Services technology. SMS and PrepIFS have recently been packaged for use in the Chinese climate community.
The power of these tools is recognised by PRISM, even if they are not widely used in the European climate community because of the level of commitment and human resources required to run such sophisticated services. ECMWF is currently developing PrepOASIS, a graphical user interface based on PrepIFS but suitable to be run stand-alone, to configure a coupled model using the OASIS4 coupler.

3. PAE Data processing, visualisation and management

The overall objective of this PAE is the development of standards and infrastructure for data processing, data archiving and data exchange in Earth system modelling and, more generally, in Earth system research (ESR). The huge amounts of data in ESR do not allow for centralised data archiving; networking between geographically distributed archives is required. Ideally, the geographical distribution of federated data archives is hidden from the user by an integrative graphical WWW-based interface. Standards are required in order to establish a data federation and to work in a network. For data processing, this PAE currently analyses CDAT (Climate Data Analysis Tools) and CDO (Climate Data Operators), maintained and developed by PCMDI and MPI-M respectively. The M&D Group also develops the CERA-2 data model for the World Data Center for Climate, proposing a description of geo-referenced climate data (model output) and containing information for the detection, browsing and use of data. An important collaboration is under way with the PAE Metadata and other international initiatives on the development and implementation of metadata standards for the description of model configurations and numerical grids. Collaboration with PRISM-related data archives on the development of data networking and federated archive architectures is also ongoing. The ECMWF MARS software may be another candidate tool for meteorological data access and manipulation, even if it is likely to be of interest only to major NWP sites due to its complexity.
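Tools like CDO provide simple, per-field reductions applied over large model archives. Conceptually, a typical operation such as a time mean can be sketched as follows (a hypothetical pure-Python illustration, not the CDO implementation, which operates on netCDF/GRIB files):

```python
# Conceptual sketch of a "time mean" operation of the kind CDO provides,
# e.g. averaging monthly 2D fields into a climatology. Lists stand in for
# gridded fields; the names are illustrative only.

def time_mean(timesteps):
    """Average a list of equally-shaped 2D fields over time."""
    nt = len(timesteps)
    ny, nx = len(timesteps[0]), len(timesteps[0][0])
    return [[sum(t[j][i] for t in timesteps) / nt for i in range(nx)]
            for j in range(ny)]

jan = [[0.0, 2.0], [4.0, 6.0]]
feb = [[2.0, 4.0], [6.0, 8.0]]
clim = time_mean([jan, feb])   # mean of the two monthly fields
```

The value of a shared operator suite is that such reductions are implemented once, file-format-aware and tested, rather than re-coded in every group.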
4. PAE Metadata

Metadata has in the last few years become a hot topic, with new schemas and ideas to promote the interchangeability of Earth system models or modelling components as well as of data. The PRISM Metadata PAE provides a forum to discuss, develop, and coordinate metadata issues with other national and international projects. The fundamental objective is to develop, document, and disseminate Earth System Modelling metadata schemas as well as tools for producing, checking and displaying metadata. Currently, this PAE offers an opportunity to ensure coherence between the following metadata definition efforts:
- Numerical Model Metadata (NMM), developed at the University of Reading, an evolving international metadata standard intended for the exchange of information about numerical code bases and the models/simulations done using them.
- The CURATOR project (Earth System Curator), a project similar to NMM in the US.
- Numerical grid metadata, developed by Balaji at GFDL, USA, for numerical grid description.
- The Climate and Forecast (CF) metadata convention (Eaton et al. 2003), designed to promote the processing and sharing of files created with the netCDF Application Programmer Interface, developed by an international team and now managed by PCMDI and BADC.
- The metadata defined by the OASIS4 developers for the description and configuration of the coupling and I/O interfaces in a coupled model.
- The metadata currently defined in the UK Met Office FLUME project to manage and define the process of model configuration.

The goal of this PAE is therefore to integrate these emerging standards, ensure that they meet the requirements and needs of the Earth System Modelling community, and disseminate them as "good practice".

5. PAE Computing

Experience has shown that a large variety of technical aspects related to computing are highly important for Earth system modelling. These techniques are in constant flux and evolve as new hardware becomes available.
While computer vendors have to be kept informed about requirements emerging from the climate modelling community, Earth system modellers still have to be informed about computing issues in order to anticipate difficulties and evolutions. PRISM can play a role in this respect through the new PAE Computing devoted to these technology trends. Possible technical topics are file I/O and data storage, algorithmic development, and portable software fitting the needs of parallel and vector systems. It is first proposed to establish a group of people willing to contribute their expertise via a mailing list and a sharing of relevant information on the PRISM web site. In particular, the activities will cover the sharing of knowledge from the work on the Earth Simulator, the establishment of links with the DEISA project, and the provision of information about important conferences and workshops. Depending on the number of volunteers and the input from this group, the list of tasks will be revised or extended in the coming years.

The OASIS coupler

The OASIS3 and OASIS4 couplers are software tools allowing synchronized exchanges of coupling information between numerical codes representing different components of the climate system. OASIS3 is the direct evolution of the OASIS coupler developed for more than 10 years at CERFACS, Toulouse, France (Valcke 2006). A new, fully parallel coupler, OASIS4, is also being developed within PRISM (Valcke and Redler 2006). Other MPI-based parallel couplers performing field transformations exist, such as the Mesh-based parallel Code Coupling Interface (MpCCI)
or the NCAR CCSM Coupler 6. The originality of OASIS lies in its great flexibility (as the coupling and I/O configuration is externally defined by the user), in its parallel neighbourhood search based on the geographical description of the process-local domains (for OASIS4), and in its common treatment of coupling and I/O exchanges.

OASIS3

OASIS3 is the latest version of the OASIS coupler developed since 1991 at CERFACS. It is a portable set of F77, F90 and C routines. At run time, OASIS3 acts as a separate interpolation process and as a parallel model interface library, the OASIS3 PSMILe, that needs to be linked to and used by the component models. Modularity, flexibility, and portability are OASIS3's key design concepts.

Coupling configuration

At run time, the OASIS3 main process first reads the coupled-run configuration, written by the user before the run in the namcouple text file following a specific format, and distributes it to the different component model PSMILes. The namcouple contains all coupling options for a particular coupled run, e.g. the duration, the component models, and, for each exchange, a symbolic description of the source and target (another component, or a file for I/O actions), the exchange period, regridding and other transformations. During the run, the OASIS3 main process and the component model PSMILes automatically perform the appropriate exchanges based on the configuration information contained in the namcouple.

Process management

In a coupled run using OASIS3, the component models remain separate executables. If only MPI1 is available (Snir et al. 1998), the OASIS3 main process and the component models must all be started at once in the job script. If the MPI library supports the MPI2 standard (Gropp et al. 1998), the user has the option to start only the OASIS3 main process, which then launches the different component models using the MPI_Comm_Spawn functionality.
In both cases, all component models are necessarily integrated from the beginning to the end of the run, and each coupling field is exchanged at a fixed frequency defined in the namcouple for the whole run; in that sense, OASIS3 supports static coupling only.

Coupling field transformation and regridding

For each coupling exchange, the OASIS3 main interpolation process receives the coupling field from the source model, performs the transformations and 2D regridding needed to express the source field on the grid of the target model, and sends the transformed field to the target model. Different 2D regriddings (nearest-neighbour, Gaussian-weighted, bilinear, bicubic, conservative, etc.) in the Earth spherical coordinate system are available for different types of grids (regular in longitude and latitude, Gaussian, stretched, rotated, reduced, unstructured). OASIS3 can also be used in interpolator-only mode to interpolate fields contained in files without running any model.

Communication: the OASIS3 PSMILe software layer

To communicate via the OASIS3 main interpolation process or to perform I/O actions, a component model needs to call a few specific PSMILe routines for initialisation, grid and partition definition, field declaration, field Get and Put actions (to receive or send a field by calling a prism_get or a prism_put routine, respectively), and termination. Beneath the prism_get and prism_put calls, the PSMILe library automatically performs coupling exchanges (i.e. between two component models) using the Message Passing Interface (MPI), and I/O actions from/to disk files using the GFDL mpp_io library,
following the configuration externally defined by the user in the namcouple (see above). The PSMILe supports parallel communication in the sense that each process of a parallel model can send its local part of the field. For coupling exchanges, the different parts of the field are sent either directly to the other parallel model (if there is no need for redistribution or interpolation) or to the OASIS3 main interpolation process, which gathers the whole coupling field, transforms or regrids it, and redistributes it to the target component model processes. As in OASIS4, the communication follows the end-point principle, i.e. there is no reference in the component model code to the origin of a Get action or to the destination of a Put action; the source and target component models (coupling exchange) or the input or output file (I/O) are set externally by the user. This ensures an easy transition from the coupled mode (Get or Put action corresponding to a coupling exchange) to the forced mode (Get or Put action corresponding to I/O from/to a file), totally transparent to the component model itself. Furthermore, the Get/Put routines can be called at each timestep in the component model code, but the receiving and sending actions will effectively be performed only at the appropriate times, and from/to the appropriate source/target, following the configuration externally defined by the user in the namcouple file.

The OASIS3 community

The OASIS3 community has steadily grown since the first OASIS release about 15 years ago. The OASIS3 coupler is currently used by about 15 modelling groups in Europe, Australia, Asia and North America, on many different platforms such as the Fujitsu VPP5000, NEC SX, SGI Octane and O3000, IBM Power4, Compaq Alpha clusters and Linux PC clusters. Full user support, including bug fixes and the release of new versions, is currently provided for OASIS3, even if most of the development effort is now devoted to OASIS4.
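The Get/Put behaviour described above, called every timestep but acted upon only at the coupling period and routed to an externally configured destination, can be sketched as follows (EndpointPut and all names below are hypothetical illustrations, not the real OASIS3 PSMILe API):

```python
# Sketch of the end-point Put principle: the component calls put() every
# timestep, but the send is performed only at the coupling period, and the
# destination ("coupled" exchange or "output_file") comes from external
# configuration, never from the component code. All names are invented.

class EndpointPut:
    def __init__(self, coupling_period, destination):
        self.coupling_period = coupling_period   # seconds, from external config
        self.destination = destination           # "coupled" or "output_file"
        self.sent = []                           # record of performed sends

    def put(self, field_name, time, values):
        # called at every model timestep; acts only at coupling times
        if time % self.coupling_period == 0:
            self.sent.append((field_name, time, self.destination))

put = EndpointPut(coupling_period=3600, destination="coupled")
for t in range(0, 7200 + 1, 900):      # 900 s model timestep, 2 h run
    put.put("sst", t, values=None)
# sends are actually performed at t = 0, 3600 and 7200 only
```

Switching `destination` to `"output_file"` would turn the same component code into a forced (file I/O) run, which is the transparency the end-point principle provides.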
OASIS4

As the climate modelling community is progressively targeting higher-resolution climate simulations on massively parallel platforms, with coupling exchanges involving a higher number of (possibly 3D) coupling fields at higher coupling frequencies, a completely new, fully parallel coupler, OASIS4, is also being developed within PRISM. OASIS4 is a portable set of Fortran 90 and C routines. At run time, OASIS4 acts as a separate parallel executable, the OASIS4 Driver-Transformer, and as a fully parallel model interface library, the OASIS4 PSMILe. The concepts of parallelism and efficiency drove the OASIS4 developments, while keeping in its design the concepts of portability and flexibility that made the success of OASIS3.

Coupling configuration

Each component model to be coupled via OASIS4 should be released with an Extensible Markup Language (XML) file describing all its potential input and output fields, i.e. the fields that can be received or sent by the component through PSMILe Get and Put actions in the code. Based on those description files, the user produces, either manually or via a graphical user interface, the XML configuration files. As with OASIS3, the OASIS4 Driver extracts the configuration information at the beginning of the run and sends it to the different model PSMILes, which then perform the appropriate coupling or I/O actions automatically during the run. OASIS4 is also highly flexible in the sense that any duration of run, any number of component models, any number of coupling and I/O fields, and particular coupling or I/O parameters for each field can be specified.

Process management

The process management in OASIS3 and OASIS4 is quite similar: the component models remain
separate executables, and the MPI1 and MPI2 approaches are both available (see above). The process management of OASIS4 is, however, slightly more complex, as the OASIS4 Driver can spawn the different component models on different machines. Although this functionality is currently not fully implemented, OASIS4 will also allow the component models to redefine their grids during the run. Apart from this dynamic grid aspect, OASIS4, like OASIS3, supports static coupling only.

Coupling field transformation and regridding

During the run, the OASIS4 parallel Transformer manages the transformation and regridding of 2D or 3D coupling fields. The Transformer performs only the weight calculation and the regridding per se; the neighbourhood search, i.e. the determination, for each target point, of the source points that contribute to the calculation of its regridded value, is performed in parallel in the source PSMILe. During the simulation timestepping, the OASIS4 parallel Transformer can be regarded as an automaton reacting to the requests of the different component model PSMILes: receive data for transformation (source component process) or send transformed data (target component process). The OASIS4 Transformer therefore acts as a parallel buffer in which the transformations take place. Currently, only 2D and 3D nearest-neighbour, 2D and 3D linear, and bicubic regriddings are implemented, but the plan is to also implement 3D cubic interpolation and 2D and 3D conservative remapping.

Communication: the OASIS4 PSMILe software layer

To be coupled via OASIS4, the component models have to include specific calls to the OASIS4 PSMILe software layer. The OASIS4 PSMILe Application Programming Interface (API) was kept as close as possible to the OASIS3 PSMILe API; this ensures a smooth and progressive transition between OASIS3 and OASIS4.
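The division of labour described above, a neighbourhood search that determines which source points contribute to each target point, followed by a pure weight-application step, can be sketched as follows (1D points and nearest-neighbour search are used for brevity; all names are illustrative, not the OASIS4 implementation):

```python
# Sketch of the regridding split: the search produces (source index, weight)
# links per target point; the "transformer" step then only applies weights.

def nearest_neighbour_search(src_points, tgt_points):
    """For each target point, return (index of nearest source point, weight)."""
    links = []
    for tp in tgt_points:
        best = min(range(len(src_points)),
                   key=lambda i: abs(src_points[i] - tp))
        links.append((best, 1.0))    # nearest-neighbour: single link, weight 1
    return links

def apply_weights(src_values, links):
    """Transformer step: pure weight application, no search."""
    return [src_values[i] * w for i, w in links]

src_lon = [0.0, 10.0, 20.0, 30.0]    # source grid longitudes
tgt_lon = [4.0, 14.0, 29.0]          # target grid longitudes
links = nearest_neighbour_search(src_lon, tgt_lon)
field = [1.0, 2.0, 3.0, 4.0]
regridded = apply_weights(field, links)
```

Performing the search once on the source side, as OASIS4 does in parallel in the source PSMILe, means only the source points actually referenced by the links need to be transferred at each exchange.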
The OASIS4 PSMILe supports fully parallel MPI-based communication, either directly between the models (including automatic repartitioning if needed) or via the parallel Transformer, and file I/O using the GFDL mpp_io library, which has been extended to work optionally with the parallel NetCDF library. The detailed communication pattern among the different component model processes is established by the PSMILe, using the results of the regridding or repartitioning neighbourhood search. This search is based on the source and target identified for each coupling exchange by the user in the XML configuration files, and on the local domain covered by each component process. The search uses an efficient multigrid algorithm and is done in parallel in the source PSMILe, which ensures that only the useful part of the coupling field is extracted and transferred. Besides these new parallel aspects, the OASIS4 PSMILe follows the same end-point communication and user-defined external configuration principles as the OASIS3 PSMILe.

The OASIS4 users

OASIS4 portability and scalability were demonstrated with different "toy" models during the EU FP5 PRISM project. OASIS4 was also used to realize a coupling between the MOM4 ocean model and a pseudo atmosphere model at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton (USA), and with pseudo models to interpolate data onto high-resolution grids at IFM-GEOMAR in Kiel, Germany. Currently, work is going on with OASIS4 at:
- the Swedish Meteorological and Hydrological Institute (SMHI) in Sweden, for coupling regional ocean and atmosphere models (first physical case studies have already been realized);
- the European Centre for Medium-Range Weather Forecasts (ECMWF), KNMI in the Netherlands, and Météo-France in France, in the framework of the EU GEMS project, for 3D coupling between atmosphere and atmospheric chemistry models;
- the UK Met Office, for global ocean-atmosphere coupling.

After the current beta-testing phase, the first official OASIS4 version should be available to the public beginning of

Conclusions

The strength of PRISM is to provide a framework promoting the use of common software tools for Earth system modelling and a network allowing ESM developers to share their expertise and ideas. The benefit of the current decentralized PRISM organisation is to allow best-of-breed software tools to naturally emerge, although this means that PRISM relies on the developments done in the different partner groups to propose technical software solutions to Earth system modellers. In the areas where standards have to be pre-defined, for example for metadata definition, the key contribution of PRISM is to provide a visible entry point to the European ESM software community for international coordination, for example with the US-led ESMF project within the WCRP framework. Given its institutional long-term support, PRISM is now well placed to seek additional funding to support more networking and coordination activities or to help specific technical developments within the partner groups. Additional contributors (European or not) are most welcome to join, bring in additional expertise, and ensure a wider distribution of the PAE tools and standards.

References

Buis, S., Piacentini, A. and Déclat, D. 2006: PALM: A computational framework for assembling high performance computing applications. Concurrency and Computation: Practice and Experience, 18(2).
Eaton, B., Gregory, J., Drach, B., Taylor, K. and Hankin, S. 2003: NetCDF Climate and Forecast (CF) Metadata Conventions.
Ford, R. W. and Riley, G. D. 2003: The Met Office FLUME Project High Level Design.
Manchester Informatics Ltd., The University of Manchester, 15 pp.
Ford, R. W., Riley, G. D., Bane, M. K., Armstrong, C. W. and Freeman, T. L. 2006: GCF: A General Coupling Framework. Concurrency and Computation: Practice and Experience, 18(2).
Gropp, W., Huss-Lederman, S., Lumsdaine, A., Lusk, E., Nitzberg, B., Saphir, W. and Snir, M. 1998: MPI - The Complete Reference, Vol. 2, The MPI Extensions. MIT Press.
Killeen, T., DeLuca, C., Gombosi, T., Toth, G., Stout, Q., Goodrich, C., Sussman, A. and Hesse, M. 2006: Integrated Frameworks for Earth and Space Weather Simulation. American Meteorological Society Meeting, Atlanta, GA.
Pilato, C. M., Collins-Sussman, B. and Fitzpatrick, B. W. 2004: Version Control with Subversion. O'Reilly Media Inc., 320 pp.
Snir, M., Otto, S., Huss-Lederman, S., Walker, D. and Dongarra, J. 1998: MPI - The Complete Reference, Vol. 1, The MPI Core. MIT Press.
Valcke, S. 2006: OASIS3 User Guide (prism_2-5). PRISM Support Initiative Technical Report No 3, 64 pp.
Valcke, S. and Redler, R. 2006: OASIS4 User Guide. PRISM Support Initiative Technical Report No 4, 72 pp.
Zhenping Liu *, Yao Liang * Virginia Polytechnic Institute and State University. Xu Liang ** University of California, Berkeley
P1.1 AN INTEGRATED DATA MANAGEMENT, RETRIEVAL AND VISUALIZATION SYSTEM FOR EARTH SCIENCE DATASETS Zhenping Liu *, Yao Liang * Virginia Polytechnic Institute and State University Xu Liang ** University
EC-Earth: new global earth system model
EC-Earth: new global earth system model Wilco Hazeleger Vincent v Gogh Global Climate Division/EC-Earth program KNMI, The Netherlands Amsterdam, December 2008 1 Amsterdam, December 2008 2 Observed climate
VisIVO, a VO-Enabled tool for Scientific Visualization and Data Analysis: Overview and Demo
Claudio Gheller (CINECA), Marco Comparato (OACt), Ugo Becciani (OACt) VisIVO, a VO-Enabled tool for Scientific Visualization and Data Analysis: Overview and Demo VisIVO: Visualization Interface for the
Mariana Vertenstein. NCAR Earth System Laboratory CESM Software Engineering Group (CSEG) NCAR is sponsored by the National Science Foundation
Leveraging the New CESM1 CPL7 Architecture - Current and Future Challenges Mariana Vertenstein NCAR Earth System Laboratory CESM Software Engineering Group (CSEG) NCAR is sponsored by the National Science
Software and Performance Engineering of the Community Atmosphere Model
Software and Performance Engineering of the Community Atmosphere Model Patrick H. Worley Oak Ridge National Laboratory Arthur A. Mirin Lawrence Livermore National Laboratory Science Team Meeting Climate
Performance Engineering of the Community Atmosphere Model
Performance Engineering of the Community Atmosphere Model Patrick H. Worley Oak Ridge National Laboratory Arthur A. Mirin Lawrence Livermore National Laboratory 11th Annual CCSM Workshop June 20-22, 2006
Data Centric Systems (DCS)
Data Centric Systems (DCS) Architecture and Solutions for High Performance Computing, Big Data and High Performance Analytics High Performance Computing with Data Centric Systems 1 Data Centric Systems
ESPRIT 29938 ProCure. ICT at Work for the LSE ProCurement Chain:
ESPRIT 29938 ProCure ICT at Work for the LSE ProCurement Chain: Putting Information Technology & Communications Technology to work in the Construction Industry The EU-funded ProCure project was designed
CHAPTER 1: OPERATING SYSTEM FUNDAMENTALS
CHAPTER 1: OPERATING SYSTEM FUNDAMENTALS What is an operating? A collection of software modules to assist programmers in enhancing efficiency, flexibility, and robustness An Extended Machine from the users
Advanced discretisation techniques (a collection of first and second order schemes); Innovative algorithms and robust solvers for fast convergence.
New generation CFD Software APUS-CFD APUS-CFD is a fully interactive Arbitrary Polyhedral Unstructured Solver. APUS-CFD is a new generation of CFD software for modelling fluid flow and heat transfer in
Big Data Research at DKRZ
Big Data Research at DKRZ Michael Lautenschlager and Colleagues from DKRZ and Scien:fic Compu:ng Research Group Symposium Big Data in Science Karlsruhe October 7th, 2014 Big Data in Climate Research Big
BLM 413E - Parallel Programming Lecture 3
BLM 413E - Parallel Programming Lecture 3 FSMVU Bilgisayar Mühendisliği Öğr. Gör. Musa AYDIN 14.10.2015 2015-2016 M.A. 1 Parallel Programming Models Parallel Programming Models Overview There are several
ANALYSIS OF THE STAKEHOLDER CONSULTATION ON
ANALYSIS OF THE STAKEHOLDER CONSULTATION ON Science and Technology, the key to Europe s future: guidelines for future European policy to support research COM(353)2004 DG Research, European Commission,
Data-Intensive Science and Scientific Data Infrastructure
Data-Intensive Science and Scientific Data Infrastructure Russ Rew, UCAR Unidata ICTP Advanced School on High Performance and Grid Computing 13 April 2011 Overview Data-intensive science Publishing scientific
THE BRITISH LIBRARY. Unlocking The Value. The British Library s Collection Metadata Strategy 2015-2018. Page 1 of 8
THE BRITISH LIBRARY Unlocking The Value The British Library s Collection Metadata Strategy 2015-2018 Page 1 of 8 Summary Our vision is that by 2020 the Library s collection metadata assets will be comprehensive,
Cluster, Grid, Cloud Concepts
Cluster, Grid, Cloud Concepts Kalaiselvan.K Contents Section 1: Cluster Section 2: Grid Section 3: Cloud Cluster An Overview Need for a Cluster Cluster categorizations A computer cluster is a group of
Addendum to the CMIP5 Experiment Design Document: A compendium of relevant emails sent to the modeling groups
Addendum to the CMIP5 Experiment Design Document: A compendium of relevant emails sent to the modeling groups CMIP5 Update 13 November 2010: Dear all, Here are some items that should be of interest to
Parallel Analysis and Visualization on Cray Compute Node Linux
Parallel Analysis and Visualization on Cray Compute Node Linux David Pugmire, Oak Ridge National Laboratory and Hank Childs, Lawrence Livermore National Laboratory and Sean Ahern, Oak Ridge National Laboratory
BSC vision on Big Data and extreme scale computing
BSC vision on Big Data and extreme scale computing Jesus Labarta, Eduard Ayguade,, Fabrizio Gagliardi, Rosa M. Badia, Toni Cortes, Jordi Torres, Adrian Cristal, Osman Unsal, David Carrera, Yolanda Becerra,
Primary author: Kaspar, Frank (DWD - Deutscher Wetterdienst), [email protected]
Primary author: Kaspar, Frank (DWD - Deutscher Wetterdienst), [email protected] Co-authors: Johannes Behrendt (DWD - Deutscher Wetterdienst), Klaus-Jürgen Schreiber (DWD - Deutscher Wetterdienst) Abstract
Overlapping Data Transfer With Application Execution on Clusters
Overlapping Data Transfer With Application Execution on Clusters Karen L. Reid and Michael Stumm [email protected] [email protected] Department of Computer Science Department of Electrical and Computer
TSINGHUA UNIVERSITY. Li Liu, Guangwen Yang, Bin Wang Center for Earth System Science, Tsinghua University
CoR: a MliDi Multi-Dimensional i l Common Remapping Software for Earth System Model Li Liu, Guangwen Yang, Bin Wang Center for Earth System Science, Tsinghua University [email protected] i Outline
Performance Monitoring of Parallel Scientific Applications
Performance Monitoring of Parallel Scientific Applications Abstract. David Skinner National Energy Research Scientific Computing Center Lawrence Berkeley National Laboratory This paper introduces an infrastructure
GIS Initiative: Developing an atmospheric data model for GIS. Olga Wilhelmi (ESIG), Jennifer Boehnert (RAP/ESIG) and Terri Betancourt (RAP)
GIS Initiative: Developing an atmospheric data model for GIS Olga Wilhelmi (ESIG), Jennifer Boehnert (RAP/ESIG) and Terri Betancourt (RAP) Unidata seminar August 30, 2004 Presentation Outline Overview
StreamStorage: High-throughput and Scalable Storage Technology for Streaming Data
: High-throughput and Scalable Storage Technology for Streaming Data Munenori Maeda Toshihiro Ozawa Real-time analytical processing (RTAP) of vast amounts of time-series data from sensors, server logs,
Hadoop. http://hadoop.apache.org/ Sunday, November 25, 12
Hadoop http://hadoop.apache.org/ What Is Apache Hadoop? The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using
African-European Radio Astronomy Platform. 2013 Africa-EU Cooperation Forum on ICT. Addis Ababa, Ethiopia 3 December 2013
African-European Radio Astronomy Platform 2013 Africa-EU Cooperation Forum on ICT Addis Ababa, Ethiopia 3 December 2013 Context The African European Radio Astronomy Platform European Parliament s Written
Baudouin Raoult, Iryna Rozum, Dick Dee
ECMWF contribution to the EU funded CHARME Project: A Significant Event Viewer tool Matthew Manoussakis Baudouin Raoult, Iryna Rozum, Dick Dee 5th Workshop on the use of GIS/OGC standards in meteorology
1 About This Proposal
1 About This Proposal 1. This proposal describes a six-month pilot data-management project, entitled Sustainable Management of Digital Music Research Data, to run at the Centre for Digital Music (C4DM)
How To Build A Supermicro Computer With A 32 Core Power Core (Powerpc) And A 32-Core (Powerpc) (Powerpowerpter) (I386) (Amd) (Microcore) (Supermicro) (
TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 7 th CALL (Tier-0) Contributing sites and the corresponding computer systems for this call are: GCS@Jülich, Germany IBM Blue Gene/Q GENCI@CEA, France Bull Bullx
Integrated Open-Source Geophysical Processing and Visualization
Integrated Open-Source Geophysical Processing and Visualization Glenn Chubak* University of Saskatchewan, Saskatoon, Saskatchewan, Canada [email protected] and Igor Morozov University of Saskatchewan,
Selection Requirements for Business Activity Monitoring Tools
Research Publication Date: 13 May 2005 ID Number: G00126563 Selection Requirements for Business Activity Monitoring Tools Bill Gassman When evaluating business activity monitoring product alternatives,
NERC Data Policy Guidance Notes
NERC Data Policy Guidance Notes Author: Mark Thorley NERC Data Management Coordinator Contents 1. Data covered by the NERC Data Policy 2. Definition of terms a. Environmental data b. Information products
BIRT Document Transform
BIRT Document Transform BIRT Document Transform is the industry leader in enterprise-class, high-volume document transformation. It transforms and repurposes high-volume documents and print streams such
2. Research and Development on the Autonomic Operation. Control Infrastructure Technologies in the Cloud Computing Environment
R&D supporting future cloud computing infrastructure technologies Research and Development on Autonomic Operation Control Infrastructure Technologies in the Cloud Computing Environment DEMPO Hiroshi, KAMI
OECD SERIES ON PRINCIPLES OF GOOD LABORATORY PRACTICE AND COMPLIANCE MONITORING NUMBER 10 GLP CONSENSUS DOCUMENT
GENERAL DISTRIBUTION OCDE/GD(95)115 OECD SERIES ON PRINCIPLES OF GOOD LABORATORY PRACTICE AND COMPLIANCE MONITORING NUMBER 10 GLP CONSENSUS DOCUMENT THE APPLICATION OF THE PRINCIPLES OF GLP TO COMPUTERISED
JMulTi/JStatCom - A Data Analysis Toolkit for End-users and Developers
JMulTi/JStatCom - A Data Analysis Toolkit for End-users and Developers Technology White Paper JStatCom Engineering, www.jstatcom.com by Markus Krätzig, June 4, 2007 Abstract JStatCom is a software framework
Building Platform as a Service for Scientific Applications
Building Platform as a Service for Scientific Applications Moustafa AbdelBaky [email protected] Rutgers Discovery Informa=cs Ins=tute (RDI 2 ) The NSF Cloud and Autonomic Compu=ng Center Department
17-11-05 ANNEX IV. Scientific programmes and initiatives
17-11-05 ANNEX IV Scientific programmes and initiatives DG TAXUD Pre-feasibility study for an observation system of the External Border of the European Union In this study, the external border will be
Functions of NOS Overview of NOS Characteristics Differences Between PC and a NOS Multiuser, Multitasking, and Multiprocessor Systems NOS Server
Functions of NOS Overview of NOS Characteristics Differences Between PC and a NOS Multiuser, Multitasking, and Multiprocessor Systems NOS Server Hardware Windows Windows NT 4.0 Linux Server Software and
Building Reliable, Scalable AR System Solutions. High-Availability. White Paper
Building Reliable, Scalable Solutions High-Availability White Paper Introduction This paper will discuss the products, tools and strategies available for building reliable and scalable Action Request System
Operating Systems: Basic Concepts and History
Introduction to Operating Systems Operating Systems: Basic Concepts and History An operating system is the interface between the user and the architecture. User Applications Operating System Hardware Virtual
Chapter 7. Using Hadoop Cluster and MapReduce
Chapter 7 Using Hadoop Cluster and MapReduce Modeling and Prototyping of RMS for QoS Oriented Grid Page 152 7. Using Hadoop Cluster and MapReduce for Big Data Problems The size of the databases used in
Cloud Computing: Computing as a Service. Prof. Daivashala Deshmukh Maharashtra Institute of Technology, Aurangabad
Cloud Computing: Computing as a Service Prof. Daivashala Deshmukh Maharashtra Institute of Technology, Aurangabad Abstract: Computing as a utility. is a dream that dates from the beginning from the computer
Weighted Total Mark. Weighted Exam Mark
CMP2204 Operating System Technologies Period per Week Contact Hour per Semester Total Mark Exam Mark Continuous Assessment Mark Credit Units LH PH TH CH WTM WEM WCM CU 45 30 00 60 100 40 100 4 Rationale
Operating System Multilevel Load Balancing
Operating System Multilevel Load Balancing M. Corrêa, A. Zorzo Faculty of Informatics - PUCRS Porto Alegre, Brazil {mcorrea, zorzo}@inf.pucrs.br R. Scheer HP Brazil R&D Porto Alegre, Brazil [email protected]
The Copernicus Marine Enviroment Monitoring Service
The Copernicus Marine Enviroment Monitoring Service P.Y. Le Traon 7&8 Sep 2015 - CMEMS SE & UU Workshop, Brussels 1 CMEMS : The Copernicus Marine Environment Monitoring Service SATELLITES (S1, S3, Jason-CS)
Big Data and the Earth Observation and Climate Modelling Communities: JASMIN and CEMS
Big Data and the Earth Observation and Climate Modelling Communities: JASMIN and CEMS Workshop on the Future of Big Data Management 27-28 June 2013 Philip Kershaw Centre for Environmental Data Archival
Computer Virtualization in Practice
Computer Virtualization in Practice [ life between virtual and physical ] A. Németh University of Applied Sciences, Oulu, Finland [email protected] ABSTRACT This paper provides an overview
Collaborative modelling and concurrent scientific data analysis:
Collaborative modelling and concurrent scientific data analysis: Application case in space plasma environment with the Keridwen/SPIS- GEO Integrated Modelling Environment B. Thiebault 1, J. Forest 2, B.
The PHI solution. Fujitsu Industry Ready Intel XEON-PHI based solution. SC2013 - Denver
1 The PHI solution Fujitsu Industry Ready Intel XEON-PHI based solution SC2013 - Denver Industrial Application Challenges Most of existing scientific and technical applications Are written for legacy execution
Final Report - HydrometDB Belize s Climatic Database Management System. Executive Summary
Executive Summary Belize s HydrometDB is a Climatic Database Management System (CDMS) that allows easy integration of multiple sources of automatic and manual stations, data quality control procedures,
WORLD METEOROLOGICAL ORGANIZATION. Introduction to. GRIB Edition1 and GRIB Edition 2
WORLD METEOROLOGICAL ORGANIZATION Introduction to GRIB Edition1 and GRIB Edition 2 JUNE 2003 GRIB EDITION 1 and GRIB EDITION 2 HISTORY 1. The GRIB code has great advantages in comparison with the traditional
THE CCLRC DATA PORTAL
THE CCLRC DATA PORTAL Glen Drinkwater, Shoaib Sufi CCLRC Daresbury Laboratory, Daresbury, Warrington, Cheshire, WA4 4AD, UK. E-mail: [email protected], [email protected] Abstract: The project aims
Universal PMML Plug-in for EMC Greenplum Database
Universal PMML Plug-in for EMC Greenplum Database Delivering Massively Parallel Predictions Zementis, Inc. [email protected] USA: 6125 Cornerstone Court East, Suite #250, San Diego, CA 92121 T +1(619)
Virtualization s Evolution
Virtualization s Evolution Expect more from your IT solutions. Virtualization s Evolution In 2009, most Quebec businesses no longer question the relevancy of virtualizing their infrastructure. Rather,
Zero Downtime In Multi tenant Software as a Service Systems
Zero Downtime In Multi tenant Software as a Service Systems Toine Hurkmans Principal, Research Engineering Exact Software About Exact Software Founded 25 years ago Business Solutions for SMB space 100.000
Recommendations for Performance Benchmarking
Recommendations for Performance Benchmarking Shikhar Puri Abstract Performance benchmarking of applications is increasingly becoming essential before deployment. This paper covers recommendations and best
Implementing Network Attached Storage. Ken Fallon Bill Bullers Impactdata
Implementing Network Attached Storage Ken Fallon Bill Bullers Impactdata Abstract The Network Peripheral Adapter (NPA) is an intelligent controller and optimized file server that enables network-attached
Digital libraries of the future and the role of libraries
Digital libraries of the future and the role of libraries Donatella Castelli ISTI-CNR, Pisa, Italy Abstract Purpose: To introduce the digital libraries of the future, their enabling technologies and their
