Annual Report 2002
High Performance Computing Center Stuttgart (HLRS)
University of Stuttgart


Stuttgart, Germany, March 14, 2003

Contents

1 Foreword
2 Organization
  2.1 Structure
  2.2 Head Count
  2.3 Guest Scientists
  2.4 Key Staff Hired
  2.5 Key Staff Leaving
3 User Support
  3.1 User Projects
    3.1.1 Distribution by fields
  3.2 Usage of the systems
    3.2.1 System usage by field
    3.2.2 System usage by state
  3.3 Workshops
  3.4 Trouble Ticket System
4 Systems
  4.1 IA64 Early Release Platform
  4.2 IA64 Platform
  4.3 IA32 Platform
  4.4 HP System Change
5 Teaching
  5.1 Lectures
  5.2 100-online
    5.2.1 Parallel Programming Workshop
    5.2.2 Introduction to Computer Science
    5.2.3 Simulation on High Performance Computer
  5.3 PhD Thesis
6 Research
  6.1 Projects
    6.1.1 National
    6.1.2 International (European and Others)
    6.1.3 Industry
  6.2 Scientific Co-operations
    6.2.1 Existing Co-operations That Were Continued in 2002
    6.2.2 New Co-operations Established in 2002
  6.3 Scientific Workshops
  6.4 Conference Shows
  6.5 Publications
    6.5.1 Refereed Papers
    6.5.2 Other Papers
    6.5.3 Talks
    6.5.4 Professional Activities

1 Foreword

The year 2002 was a year of change for HLRS. Two outstanding events shaped the flow of events and the work of its people. On the one hand, HLRS was separated from the computing center (RUS) at the end of the year. This was accompanied by a change in leadership: in autumn 2002, Michael Resch from the computer science department of the University of Houston, TX, replaced Prof. Rühle, who will retire in April 2003, as director of HLRS. On the other hand, HLRS was able to kick off the process of bringing in its next-generation supercomputer system. Already in 2001 the steering committee of HLRS had supported such a next step in supercomputing and had collected the requirements of the various user communities. In 2002 the University of Stuttgart started a European procurement that will be finished in 2003 and will make sure that in 2004/2005 HLRS will be able to provide a competitive system to its users.

In a changing environment the challenge for HLRS is to keep its international competitiveness, and not only at the level of compute power. More and more, supercomputing centers have to focus on applied research and close cooperation with users. One field of importance will be GRID computing. The potential of integrating infrastructure components will have to be exploited. At the same time, exaggerated hopes will have to be pointed out and set against the requirements of reality.

2 Organization

2.1 Structure

The new structure of HLRS has two departments with a number of working groups.

Figure 1: Structure of HLRS. Head Office HLRS: Michael Resch. Departments: Systems & Software and Applications & Visualization. Working groups: Parallel and Distributed Systems, Technical & Scientific Simulation, Visualisation, Software Technology, Parallel Computing, Numerical Methods and Libraries, Applications, HPCN Production.

2.2 Head Count

The head count of HLRS has changed with its separation from RUS. The head count on January 1st, 2003 was:

Permanent staff: 24.5
Non-permanent staff: 9.5
Third-party funded research: 16.5
Research assistants (students): ~

2.3 Guest Scientists

October 1st, 2002 - September 30th, 2002: Dr. Graham Fagg, University of Tennessee, Knoxville, USA. Dr. Fagg did work and lecturing in the field of communication in distributed environments and GRID computing.

November 1st, 2001 - October 31st, 2002: Dr. Toshiyuki Imamura, Japan Atomic Energy Research Institute, Tokyo, Japan. Dr. Imamura did work and lecturing in the field of communication in distributed computing environments and in the field of GRID computing.

October 1st, 2002 - September 30th, 2005: Dr. Nina Shokina, Institute of Computational Technologies, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia. Dr. Shokina works in the field of algorithms in computational fluid dynamics.

February 17th, 2002 - March 8th, 2002: Joe Michael Kniss, University of Utah, School of Computing, Scientific Computing and Imaging Institute, USA. In the framework of Sonderforschungsbereich 382, Joe Kniss cooperated with Jürgen Schulze-Döbold on the integration of his volume rendering code into the virtual-reality-based volume rendering framework developed at HLRS. A special focus was the use of his improved transfer function editor in a virtual reality environment.

November 1st, 2002 - August 15th, 2003: Chung-Hsien Chang, National Center for High Performance Computing (NCHC), Hsinchu, Taiwan. Chung-Hsien Chang's field of work is parallel ray tracing. In this cooperation, the potential of integrating his work with the virtual-reality-oriented visualization in the HLRS CUBE will be evaluated.

2.4 Key Staff Hired

New director: Ass. Prof. Dr. Michael M. Resch from the Computer Science Department of the University of Houston, TX, USA accepted the position of director of HLRS and full professor in autumn 2002.

2.5 Key Staff Leaving

Dr. Edgar Gabriel, head of the Parallel and Distributed Systems group at HLRS, moved to the Innovative Computing Laboratory at the University of Tennessee, Knoxville, USA, as a postdoctoral scientific researcher at the end of December 2002.

3 User Support

User support at HLRS is based on:
- Direct project support
- Workshops
- Trouble ticket system
- Online help

3.1 User Projects

33 projects were approved by the steering committee of the HLRS during the year 2002. Of these, 12 were new projects while 21 were continuations of existing projects.

3.1.1 Distribution by fields

Figure 2: Number of projects in various research fields (Bioinformatics, CFD, Chemistry, Climate Research, Computer Science, Others, Physics, Reacting Flows, Solid State Physics, Structural Mechanics)

The distribution of projects by research field shows a focus on Computational Fluid Dynamics (CFD). Including Reacting Flows, projects in this field make up 45% of all projects. The second biggest share is in Physics and Solid State Physics, with about 30%.

3.2 Usage of the systems

The main production systems are still the NEC SX-4 with 32 CPUs, the NEC SX-5 with 2*16 CPUs, and the Cray T3E with 512 CPUs. For these production systems we provide information about system usage in 2002.

3.2.1 System usage by field

Figure 3: System usage by research field (percentage of SX4, SX5, and T3E usage)

System usage by field of research again shows the dominance of computational fluid dynamics, coming mainly from aerospace and automotive research. The focus is on the vector-based systems; this is due to the high memory bandwidth requirements of these kinds of applications. Physics, on the other hand, is a heavy user of the T3E: these applications can make good use of the much larger number of CPUs and are not as bandwidth-bound.

3.2.2 System usage by state

Usage of the main production systems by state shows the dominant role of Baden-Württemberg in the last year. This is due to the fact that the HLRS systems are outdated and will have to be replaced; a number of users from other states have moved to more recent architectures such as those available at Jülich and Munich. This will change with the installation of new systems in Stuttgart in 2004/2005.

Figure 4: System usage by state (percentage of SX4, SX5, and T3E usage; Baden-Württemberg, Bayern, Berlin, Brandenburg, European Union, Federal Research Centers, Hamburg, Hessen, Niedersachsen, Nordrhein-Westfalen, Rheinland-Pfalz, Saarland, Sachsen, Schleswig-Holstein, Thüringen)

3.3 Workshops

Workshops serve to train the users of HLRS. Five such workshops were organized by HLRS in 2002. In addition, HLRS supported four workshops organized by the center of competence in Baden-Württemberg (WiR), the Center for High Performance Computing at Dresden (ZHR), the University of Heidelberg, and the University of Lübeck.

- Feb 25 - Mar 1, HLRS, at HLRS: MPI, OpenMP, and advanced topics in parallel programming
- Mar 4-5, WiR (Scientific Computing in BW), Univ. of Stuttgart: Schnelle Löser von großen Gleichungssystemen (fast solvers for large systems of equations) (HLRS: 2 hours)
- Mar, HLRS, at HLRS: Fortran for Scientific Computing
- Mar, ZHR (Center for High Performance Computing, Dresden), Dresden: MPI and OpenMP
- Sep, HLRS, at HLRS: C++ in scientific computing
- Sep, HLRS, at HLRS: MPI, OpenMP, and advanced topics in parallel programming
- Sep 30 - Oct 1, HLRS, at HLRS: 5th HLRS Results and Review Workshop
- Oct 7-11, Medizinische Universität zu Lübeck, Lübeck: Iterative Gleichungssystemlöser und parallele Algorithmen (iterative solvers and parallel algorithms) (HLRS: 3 days)
- Nov 5-7, Univ. of Heidelberg, Heidelberg: Introduction to Parallel Computing Workshop (HLRS: 2.5 days)

3.4 Trouble Ticket System

To improve user support, HLRS has started a pilot phase of a trouble ticket system together with the computing center. Users can submit questions and search a database of questions and answers related to supercomputing. First experience and user feedback are very positive.

Figure 5: Web-based trouble ticket system

4 Systems

This section describes changes in the configuration of the systems that are available to the users of HLRS. HLRS is currently in a procurement process for a new supercomputer. During 2002 only small changes took place, which affect only development platforms.

4.1 IA64 Early Release Platform

For early testing of IA64 systems, HLRS entered into a partnership with Intel. Under the terms of this co-operation, Intel provided HLRS with one of five early evaluation systems for the Itanium 2 processor. The system was installed in May 2002 and testing went on for several months. During this time the system was also made available to users of HLRS. The specifications of the system are:

CPU: Itanium 2, 1 GHz
Number of CPUs: 4
Peak performance: 16 GFLOP/s
Memory: 16 GB
Disk space: 40 GB

4.2 IA64 Platform

In order to provide users with a new development platform that delivers performance beyond standard clusters of PCs, HLRS decided to install a small system based on the Itanium 2. This decision was based on the good experience HLRS had with Intel's early release system. The system was installed in October 2002. The specifications of the system are:

CPU: Itanium 2, 900 MHz
Number of CPUs: 16
Peak performance: 57.6 GFLOP/s
Memory: 64 GB
Interconnect: Myrinet & GE
Disk space: 300 GB

4.3 IA32 Platform

Together with its partner Intel, HLRS has set up a small cluster of Intel Xeon processors for applications that do not require 64-bit architectures and run well on standard PC processors. The specifications of the system are:

CPU: Intel Xeon, 2.4 GHz
Number of CPUs: 48
Peak performance: GFLOP/s
Memory: 48 GB
Interconnect: Myrinet & GE
Disk space: 960 GB

4.4 HP System Change

The existing HP N-class system of T-Systems was replaced by a more modern system: in July 2002 an HP RP 8400 was installed, which is also available to the users of HLRS. The specifications of the system are:

CPU: PA-8700, 750 MHz
Number of CPUs: 8
Peak performance: 24 GFLOP/s
Memory: 8 GB
Disk space: 200 GB
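The peak-performance figures quoted above can be cross-checked from the clock rate, the CPU count, and the number of floating-point operations a core can retire per cycle. The following is a plausibility sketch only; the flops-per-cycle values are assumptions typical for these processor generations, not numbers stated in this report (Itanium 2 and PA-8700: two fused multiply-add units, i.e. 4 flop/cycle; a Xeon of that era with SSE2: 2 flop/cycle).

```python
def peak_gflops(cpus, clock_ghz, flops_per_cycle):
    """Theoretical peak in GFLOP/s: CPUs x clock (GHz) x flop per cycle."""
    return cpus * clock_ghz * flops_per_cycle

# Assumed flops-per-cycle values (not taken from the report):
# Itanium 2 / PA-8700 -> 4, Xeon (SSE2) -> 2.
print(peak_gflops(4, 1.0, 4))    # IA64 early release platform
print(peak_gflops(16, 0.9, 4))   # IA64 platform
print(peak_gflops(48, 2.4, 2))   # IA32 platform
print(peak_gflops(8, 0.75, 4))   # HP RP 8400
```

Under these assumptions the computed values (16, 57.6, and 24 GFLOP/s) agree with the figures given for the Itanium and HP systems; for the IA32 cluster, whose peak value did not survive in this copy, the same assumption would suggest roughly 230 GFLOP/s.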

5 Teaching

The director of HLRS is responsible for CSE in mechanical engineering.

5.1 Lectures

A number of staff members of HLRS have given lectures at the University of Stuttgart and at the University of Applied Sciences Stuttgart. The focus is on simulation and high performance computing.

- Michael Resch: Simulation on Supercomputers (SS2002, Univ. of Stuttgart)
- Thomas Bönisch: Parallel Computing (SS2002, Univ. of Applied Sciences Stuttgart)
- Ulrich Lang: Visualization (WS2002, Univ. of Stuttgart)
- Ulrich Lang: Introduction to Computer Science in Automotive Engineering (WS2002, Univ. of Stuttgart)
- Uwe Küster: Numerical Methods for Supercomputers (WS2002, Univ. of Stuttgart)
- Peter Haas: Microprocessors (WS2002, Univ. of Stuttgart)
- Stefan Wesner: Software Development for Scientific and Technical Applications (WS2002, Univ. of Stuttgart)
- Stefan Wesner: Introduction to Computer Science (SS2002, Univ. of Stuttgart)

5.2 100-online

The project 100-online was established by the University of Stuttgart to provide lectures to students interactively via the web. More than 200 courses were made available through this project. More details can be found at:

HLRS contributed three lectures:

5.2.1 Parallel Programming Workshop

The one-week Parallel Programming Workshop was made available online. All slides are now accessible on the web, together with the full audio in English and German. Each lesson is available as a block, and each single slide can also be accessed individually. The project was presented at the exhibition "Universität Stuttgart multimedial" in July 2002.
URL:
Lecturer: Rolf Rabenseifner

5.2.2 Introduction to Computer Science

The materials used within the lectures are published on the web and are supported with interactive elements that mirror the tools used within the practical part of the lecture.
URL:
Lecturer: Stefan Wesner

5.2.3 Simulation on High Performance Computer

The material used within the lecture is published via the web. All material was integrated into animated PowerPoint slides, and multimedia content was added to allow for interactive demonstration of the contents.
URL:
Lecturer: Michael Resch

5.3 PhD Thesis

- Clemens Helf: Eine Finite-Volumen-Methode in allgemeinen Zellen für die Euler-Gleichungen mit integrierter, selbst-adaptiver Gittergenerierung (a finite-volume method on general cells for the Euler equations with integrated, self-adaptive grid generation), February 2002.
- Edgar Gabriel: Optimierung und Einsatz einer Kommunikationsbibliothek für Metacomputing (optimization and deployment of a communication library for metacomputing), May 2002.
- Thomas Schall: Einsatz komponentenbasierter Softwarebausteine in der wissenschaftlich-technischen Simulation (use of component-based software building blocks in scientific and technical simulation), November 2002.

6 Research

6.1 Projects

This section covers in more detail the projects that were started in 2002. A complete list of currently running projects includes:

- DAMIEN (European Commission)
- GRIDSTART (European Commission)
- CROSSGRID (European Commission)
- LeGE-WG (European Commission)
- GRASP (European Commission)
- GeneSyS (European Commission)
- SFB 259 (DFG)
- SFB 374 (DFG)
- SFB 382 (DFG)
- GRIDWELTEN (DFN)
- GIMOLUS (BMBF)
- UNICORE+ (BMBF)
- NUSS (BMBF)

- CINDA-SV (ESA/ESTEC)
- NEC co-operation (NEC & HLRS)
- Intel co-operation (Intel & HLRS)

6.1.1 National

GRIDWELTEN

Abstract: The project is funded by the German Research Network organization DFN. Its principal purpose is to collect the requirements of German users with respect to GRID software, with a focus on networking requirements. In addition, existing GRID software will be evaluated and set against the requirements of the users.

Partners: The partners in the project are HLRS and the John von Neumann Institute for Computing (NIC) at Jülich. HLRS is the project leader and coordinator.

NUSS

Abstract: Within the project Notebook University Stuttgart, financed by the German Ministry for Research and Education (BMBF), multiple university institutes cooperate in the evaluation of essential elements of notebook-based student education and lecturing. HLRS contributes its competences in multimedia, collaborative working, and application sharing.

Partners: Institute for Systems Theory in Engineering (IST), Institute for Industrial Automation and Software Engineering (IAS), Institute of Aerodynamics and Gasdynamics (IAG), Institut für Erziehungswissenschaft und Psychologie (PAE), Institute for Planning Fundamentals (IGP), Rechenzentrum (RUS), and HLRS.

6.1.2 International (European and Others)

GRIDSTART

Abstract: The objective of the project is to maximize the impact of EU-funded Grid projects and other related activities through clustering. This will primarily be done by driving forward Grid developments, by identifying and amplifying synergies between application areas, and by encouraging interaction amongst similar activities in Europe.

Partners: The list of partners consists of EPCC (project coordinator), HLRS, CERN, CYFRONET, ESO, FZJ, University of Southampton, Poznan SNC, and University College London.

CROSSGRID

Abstract: CrossGrid is a European R&D project which aims to develop, implement, and exploit new Grid components for interactive compute- and data-intensive applications such as simulation and visualization for surgical procedures, flooding crisis support, team decision support systems, distributed data analysis in high-energy physics, and air pollution modelling combined with weather forecasting.

Partners: The consortium consists of 19 partners, with CYFRONET in Poland acting as coordinator. The technical partners are HLRS, ICM, INP, INS, UvA, II SAS, University of Linz, FZK, TUM, PSNC, UCY, Datamat, TCD, CSIC, UAB, U.S.C., Demo, LIP, and Alog.

LeGE-WG

Abstract: The Learning Grid of Excellence Working Group (LeGE-WG) aims to facilitate the establishment of a European Learning Grid Infrastructure by supporting the systematic exchange of information and by creating opportunities for close collaboration between the different actors in the formative process. The Working Group operates on a 24-month basis and brings together actors with complementary interests in Grid computing and e-learning from technology-oriented disciplines, pedagogy, government and regulating bodies, and of course students. It will therefore provide an interdisciplinary consortium of experts and will promote close interaction between the communities associated with them, so as:
- To achieve an in-depth understanding of the fundamental issues underpinning the application of GRID computing for e-learning,
- To cultivate the necessary common background for addressing the challenges associated with the establishment of a European Learning Grid Infrastructure,
- To establish a solid baseline for full exploitation of the EU-US Cooperation initiative on Science and Technology for e-learning.

Partners: The consortium consists of 25 partners and is continuously extended. This large number of partners is organized on a national-node basis. The national nodes are the Central Laboratory of the Research Councils (CLRC), Communication & Systemes - Systemes D'information (CS-SI), Dipartimento di Ingegneria dell'Informazione e Matematica Applicata (DIIMA), ZEUS Consulting S.A., SchlumbergerSema, HLRS, University of Graz, Kaunas University of Technology, and EDAW (principal contractor).

GRASP

Abstract: The aim of the GRASP project is to use GRID technology to realize current and future ASP business models that integrate distributed and heterogeneous resources. The main project objectives are:
- To design and implement a layered architecture for service provision using GRID technologies.
- To overcome weaknesses of current ASP solutions concerning resource management, security, definition of service level agreements, and pricing mechanisms. For the realization of the services supplied by the GRID middleware, the consortium will use existing research results but will also study and evaluate the impact of COTS components (such as the Microsoft .NET platform).
- To explore and evaluate three different business models that fully exploit GRID technologies: a classical ASP model (one-to-many, with one provider and many clients); a many-to-many model (where resources are heterogeneous and distributed, and clients can also make their resources available in order to receive an income); and a federated model (where the provider is constituted by a federation of ASPs).
- To design three GRID-aware applications, developed using the GRASP architecture, in order to validate the effectiveness of the project results.
- To define methodologies and techniques for making existing applications GRID-aware.

Partners: The GRASP consortium was created with the aim of reaching two objectives, both fundamental for the project to be successful. The first is the implementation of an efficient infrastructure based on grid technologies. The second is the integration of this infrastructure with an application level for the execution of business applications and their provision to customers. The partners are LogicDIS (GR), CRMPA (I), CCLRC (UK), SchlumbergerSema (ES), HLRS, and CS-SI (FR).

GeneSyS

Abstract: The GeneSyS (Generic Systems Supervision) project's mission is to enhance distributed systems and applications with a generic and standardized supervision solution and to nurture its practical implementation and multi-sector exploitation as a key enabler for the competitiveness of European research and industry.

The top-level objectives of the GeneSyS project are:
- To specify and develop an open, generic, modular, and comprehensive supervision concept,
- To integrate and validate this supervision structure within various industrial contexts,
- To achieve the adoption of the GeneSyS concepts by all stakeholders (internal and external to the consortium) and to ensure that the vision of the proposed generic structure will become a new emerging standard.

Partners: The partners are EADS LV (France), NAVUS (Germany), D3 Group (Germany), HLRS, and MTA SZTAKI DSD (Hungary).

6.1.3 Industry

Intel Co-operation

Abstract: The purpose of the project is to bring together the expertise of Intel in hardware and of HLRS in software to test IA32 and IA64 clusters in real application and production environments.

Partners: The partners in the project are Intel GmbH and HLRS.

CINDA-SV

Abstract: ESA/ESTEC contracted HLRS to develop a roadmap/blueprint for Computational Integrated Design and Analysis for Space Vehicles as a step towards virtual space vehicles. The result is intended to identify the essential elements of a software architecture for conducting the design and analysis of space vehicles, together with the preparatory work and an estimate of the activities and effort required to implement such an environment.

Partners: HLRS, Institut für Flugzeugbau und Leichtbau, Universität Braunschweig, ESA/ESTEC.

6.2 Scientific Co-operations

This section gives a list of existing co-operations with partners in Germany, Europe, and worldwide.

6.2.1 Existing Co-operations That Were Continued in 2002

- Japan Atomic Energy Research Institute (JAERI), Tokyo, Japan
- National Center for High Performance Computing (NCHC), Hsinchu, Taiwan
- Pittsburgh Supercomputing Center (PSC), Pittsburgh, PA, USA

- Sandia National Laboratories (SNL), Albuquerque, NM, USA

6.2.2 New Co-operations Established in 2002

- Russian Academy of Sciences, Novosibirsk, Russia: co-operation to organize a common Russian-German workshop in the field of simulation.
- Commissariat à l'Energie Atomique (CEA), Paris, France: co-operation in the fields of management of large systems, visualization, and evaluation of new architectural concepts.
- Supercomputing Center of the Korea Institute of Science and Technology Information (KISTI), Taejon, Korea: co-operation based on the exchange of scientists and on common GRID test-beds. Fields of co-operation will be GRID computing and computational fluid dynamics.
- High Performance Computing Cluster Center (HPCC), St. Petersburg State Polytechnical University, St. Petersburg, Russia: an official memorandum of agreement was signed in September 2002 for five years. Cooperation will be in the fields of cluster computing, visualization, and computational fluid dynamics, as well as education and workshops.

6.3 Scientific Workshops

- April 8-9, 2002, HLRS/hww, Stuttgart: Data Management
- May 14-15, 2002, HLRS, IAG, VIS (Univ. of Stuttgart), Stuttgart: Numerical Flow Visualization Forum 2002
- May 27-29, 2002, HLRS, Stuttgart: 5th HLRS Metacomputing Workshop
- Sep 2002, Institute of Physics, Academy of Sciences of the Czech Republic, Trest, Czech Republic: Summer school on computing techniques in physics
- Oct 2002, CEA, Bruyères, France: CEA/HLRS Data Management Workshop
- Dec 5, 2002, HLRS, Stuttgart: HLRS/CEA Visualization Workshop

6.4 Conference Shows

- International Supercomputer Conference 2002, Heidelberg, Germany, June 20-23, 2002: HLRS booth together with the other national supercomputing centers (LRZ, NIC).
- iGrid 2002 Exhibition, Amsterdam, The Netherlands, September 23-26, 2002: HLRS together with its partners from Sandia National Laboratories

and Pittsburgh Supercomputing Center had a booth and gave demonstrations.
- IST 2002, Copenhagen, Denmark, November 4-6, 2002: HLRS participated as part of the GRIDSTART booth, representing the DAMIEN project.
- Supercomputing Conference 2002, Baltimore, USA, November 17-22, 2002: HLRS booth in cooperation with Sandia National Laboratories.

6.5 Publications

6.5.1 Refereed Papers

1. M. Müller, E. Gabriel, M. M. Resch: A Software Development Environment for GRID-Computing. Concurrency and Computation: Practice and Experience, 14, 2002.
2. M. Müller: An OpenMP Compiler Benchmark. Scientific Programming, accepted for publication, 2002.
3. R. Rabenseifner, G. Wellein: Communication and Optimization Aspects of Parallel Programming Models on Hybrid Architectures. International Journal of High Performance Computing Applications, accepted for publication, 2002.
4. R. Rabenseifner, A. E. Koniges, J.-P. Prost, R. Hedges: The Parallel Effective I/O Bandwidth Benchmark: b_eff_io. To be published in a special issue of Calculateurs Parallèles on Parallel I/O for Cluster Computing.
5. G. E. Fagg, J. Dongarra: HARNESS and fault tolerant MPI design, usage and performance issues. Future Generation Computer Systems, 18, 2002.
6. N. Barberou, M. Garbey, M. Hess, T. Rossi, M. Resch, J. Toivanen, D. Tromeur-Dervout: Scalable numerical algorithms for efficient metacomputing of elliptic equations. In P. Wilders, A. Ecer, J. Periaux, N. Satofuka, P. Fox (Eds.): Parallel Computational Fluid Dynamics: Practice and Theory. Elsevier, North-Holland, 2002.
7. M. M. Resch: Clusters in Grids: Power plants for CFD. In P. Wilders, A. Ecer, J. Periaux, N. Satofuka, P. Fox (Eds.): Parallel Computational Fluid Dynamics: Practice and Theory. Elsevier, North-Holland, 2002.
8. P. Lindner, N. Currle-Linde, M. M. Resch, E. Gabriel: Distributed Application Management in Heterogeneous Grids. Euroweb 2002 Conference, Oxford, UK, December 17-18, 2002.
9. A. Sadovykh, S. Wesner, J.-E. Bohdanowicz: GeneSyS: A Generic Architecture for Supervision of Distributed Applications. In proceedings of the Euroweb 2002 Conference, Oxford, UK, December 17-18, 2002.
10. T. Dimitrakos, M. Gaeta, P. Ritrovato, B. Serhan, S. Wesner, K. Wulf: Grid Based Application Service Provision. In proceedings of the Euroweb 2002 Conference, Oxford, UK, December 17-18, 2002.
11. T. Imamura, Y. Hasegawa, H. Yamagishi, H. Takemiya: TME - a Distributed Resource Handling Tool. Accepted for publication at the International Conference on Scientific and Engineering Computation (IC-SEC2002), Singapore, December 3-5, 2002.
12. T. Imamura: A redistribution function for a distributed array data type on distributed computing environments. 13th IASTED Conference on Parallel and Distributed Computing and Systems (PDCS2002), Boston, USA, November 4-6, 2002.
13. Y. Tsujita, T. Imamura, H. Takemiya, H. Yamagishi: Stampi-I/O: A Flexible Parallel-I/O Library for Heterogeneous Computing Environments. 9th EuroPVM/MPI Conference, Linz, Austria, September 29 - October 2, 2002.
14. D. A. L. Piriyakumar, P. Levi, R. Rabenseifner: Enhanced File Interoperability with Parallel MPI File-I/O in Image Processing. In J. Dongarra, D. Kranzlmüller (Eds.): Recent Advances in Parallel Virtual Machine and Message Passing Interface, Proceedings of the 9th European PVM/MPI Users' Group Meeting (EuroPVM/MPI 2002), Sep 29 - Oct 2, Linz, Austria, LNCS 2474, Springer, 2002.
15. R. Rabenseifner: Communication and Optimization Aspects on Hybrid Architectures. In J. Dongarra, D. Kranzlmüller (Eds.): Recent Advances in Parallel Virtual Machine and Message Passing Interface, Proceedings of the 9th European PVM/MPI Users' Group Meeting (EuroPVM/MPI 2002), Sep 29 - Oct 2, Linz, Austria, LNCS 2474, Springer, 2002.
16. S. Wesner, K. Wulf, M. Müller: How GRID could improve E-Learning in the environmental science domain. To be published in the proceedings of the 1st International Workshop on Educational Models for Grid Based Services, Lausanne, Switzerland, September 16, 2002.
17. J. P. Schulze, U. Lang: The Parallelization of the Perspective Shear-Warp Volume Rendering Algorithm. Proceedings of the Fourth Eurographics Workshop on Parallel Graphics and Visualization, September 2002.
18. K. Naono, T. Imamura: Developing an Automatic Tuning Numerical Library for the Eigenvalue Solver. IPSJ SIG-HPC (SWoPP2002), Yufuin, Japan, August 2002.


More information

Das TOP500-Projekt der Universitäten Mannheim und Tennessee zur Evaluierung des Supercomputer Marktes. Hans-Werner Meuer Universität Mannheim

Das TOP500-Projekt der Universitäten Mannheim und Tennessee zur Evaluierung des Supercomputer Marktes. Hans-Werner Meuer Universität Mannheim Das TOP500-Projekt der Universitäten Mannheim und Tennessee zur Evaluierung des Supercomputer Marktes Hans-Werner Meuer Universität Mannheim Informatik - Kolloquium der Universität Passau 20. Juli 1999

More information

Altix Usage and Application Programming. Welcome and Introduction

Altix Usage and Application Programming. Welcome and Introduction Zentrum für Informationsdienste und Hochleistungsrechnen Altix Usage and Application Programming Welcome and Introduction Zellescher Weg 12 Tel. +49 351-463 - 35450 Dresden, November 30th 2005 Wolfgang

More information

Stream Processing on GPUs Using Distributed Multimedia Middleware

Stream Processing on GPUs Using Distributed Multimedia Middleware Stream Processing on GPUs Using Distributed Multimedia Middleware Michael Repplinger 1,2, and Philipp Slusallek 1,2 1 Computer Graphics Lab, Saarland University, Saarbrücken, Germany 2 German Research

More information

QUADRICS IN LINUX CLUSTERS

QUADRICS IN LINUX CLUSTERS QUADRICS IN LINUX CLUSTERS John Taylor Motivation QLC 21/11/00 Quadrics Cluster Products Performance Case Studies Development Activities Super-Cluster Performance Landscape CPLANT ~600 GF? 128 64 32 16

More information

Parallel Computing. Introduction

Parallel Computing. Introduction Parallel Computing Introduction Thorsten Grahs, 14. April 2014 Administration Lecturer Dr. Thorsten Grahs (that s me) t.grahs@tu-bs.de Institute of Scientific Computing Room RZ 120 Lecture Monday 11:30-13:00

More information

A Data Structure Oriented Monitoring Environment for Fortran OpenMP Programs

A Data Structure Oriented Monitoring Environment for Fortran OpenMP Programs A Data Structure Oriented Monitoring Environment for Fortran OpenMP Programs Edmond Kereku, Tianchao Li, Michael Gerndt, and Josef Weidendorfer Institut für Informatik, Technische Universität München,

More information

CERN s Scientific Programme and the need for computing resources

CERN s Scientific Programme and the need for computing resources This document produced by Members of the Helix Nebula consortium is licensed under a Creative Commons Attribution 3.0 Unported License. Permissions beyond the scope of this license may be available at

More information

Performance Monitoring of Parallel Scientific Applications

Performance Monitoring of Parallel Scientific Applications Performance Monitoring of Parallel Scientific Applications Abstract. David Skinner National Energy Research Scientific Computing Center Lawrence Berkeley National Laboratory This paper introduces an infrastructure

More information

EUFORIA: Grid and High Performance Computing at the Service of Fusion Modelling

EUFORIA: Grid and High Performance Computing at the Service of Fusion Modelling EUFORIA: Grid and High Performance Computing at the Service of Fusion Modelling Miguel Cárdenas-Montes on behalf of Euforia collaboration Ibergrid 2008 May 12 th 2008 Porto Outline Project Objectives Members

More information

Resource Management and Scheduling. Mechanisms in Grid Computing

Resource Management and Scheduling. Mechanisms in Grid Computing Resource Management and Scheduling Mechanisms in Grid Computing Edgar Magaña Perdomo Universitat Politècnica de Catalunya Network Management Group Barcelona, Spain emagana@nmg.upc.edu http://nmg.upc.es/~emagana/

More information

Keywords: Cloudsim, MIPS, Gridlet, Virtual machine, Data center, Simulation, SaaS, PaaS, IaaS, VM. Introduction

Keywords: Cloudsim, MIPS, Gridlet, Virtual machine, Data center, Simulation, SaaS, PaaS, IaaS, VM. Introduction Vol. 3 Issue 1, January-2014, pp: (1-5), Impact Factor: 1.252, Available online at: www.erpublications.com Performance evaluation of cloud application with constant data center configuration and variable

More information

Steinbuch Centre for Computing (SCC) The Information Technology Centre of KIT

Steinbuch Centre for Computing (SCC) The Information Technology Centre of KIT Steinbuch Centre for Computing (SCC) The Information Technology Centre of KIT SCIENTIFIC COMPUTING, HPC AND GRIDS KIT the cooperation of Forschungszentrum Karlsruhe GmbH and Universität Karlsruhe (TH)

More information

Ph.D., Particle Physics Theory Thesis title: FCNC Processes of B and K Mesons from Lattice QCD University of Edinburgh October 1995 July1999

Ph.D., Particle Physics Theory Thesis title: FCNC Processes of B and K Mesons from Lattice QCD University of Edinburgh October 1995 July1999 Curriculum Vitae Date of Birth: 8 th April 1971 Nationality: Place of Birth: Work Address: Taiwanese Taipei City Institute for Physics National Chiao-Tung University Hsinchu 300 Taiwan Work Telephone:

More information

Integrated Communication Systems

Integrated Communication Systems Integrated Communication Systems Courses, Research, and Thesis Topics Prof. Paul Müller University of Kaiserslautern Department of Computer Science Integrated Communication Systems ICSY http://www.icsy.de

More information

Maximize Performance and Scalability of RADIOSS* Structural Analysis Software on Intel Xeon Processor E7 v2 Family-Based Platforms

Maximize Performance and Scalability of RADIOSS* Structural Analysis Software on Intel Xeon Processor E7 v2 Family-Based Platforms Maximize Performance and Scalability of RADIOSS* Structural Analysis Software on Family-Based Platforms Executive Summary Complex simulations of structural and systems performance, such as car crash simulations,

More information

A Flexible Cluster Infrastructure for Systems Research and Software Development

A Flexible Cluster Infrastructure for Systems Research and Software Development Award Number: CNS-551555 Title: CRI: Acquisition of an InfiniBand Cluster with SMP Nodes Institution: Florida State University PIs: Xin Yuan, Robert van Engelen, Kartik Gopalan A Flexible Cluster Infrastructure

More information

PRESS RELEASE FRAUNHOFER INSTITUTE FOR INTEGRATED CIRCUITS IIS DESIGN AUTOMATION DIVISION EAS. PRESSE RELEASE June 2, 2014 Page 1 5

PRESS RELEASE FRAUNHOFER INSTITUTE FOR INTEGRATED CIRCUITS IIS DESIGN AUTOMATION DIVISION EAS. PRESSE RELEASE June 2, 2014 Page 1 5 PRESS RELEASE June 2, 2014 Page 1 5 European Project VERDI provides Universal Verification Methodology (UVM) in SystemC to Accellera Systems Initiative as new industry standard proposal UVM-SystemC language

More information

A Chromium Based Viewer for CUMULVS

A Chromium Based Viewer for CUMULVS A Chromium Based Viewer for CUMULVS Submitted to PDPTA 06 Dan Bennett Corresponding Author Department of Mathematics and Computer Science Edinboro University of PA Edinboro, Pennsylvania 16444 Phone: (814)

More information

Application Frameworks for High Performance and Grid Computing

Application Frameworks for High Performance and Grid Computing Application Frameworks for High Performance and Grid Computing Gabrielle Allen Assistant Director for Computing Applications, Center for Computation & Technology Associate Professor, Department of Computer

More information

PRIMERGY server-based High Performance Computing solutions

PRIMERGY server-based High Performance Computing solutions PRIMERGY server-based High Performance Computing solutions PreSales - May 2010 - HPC Revenue OS & Processor Type Increasing standardization with shift in HPC to x86 with 70% in 2008.. HPC revenue by operating

More information

Towards a Comprehensive Accounting Solution in the Multi-Middleware Environment of the D-Grid Initiative

Towards a Comprehensive Accounting Solution in the Multi-Middleware Environment of the D-Grid Initiative Towards a Comprehensive Accounting Solution in the Multi-Middleware Environment of the D-Grid Initiative Jan Wiebelitz Wolfgang Müller, Michael Brenner, Gabriele von Voigt Cracow Grid Workshop 2008, Cracow,

More information

Interconnect Efficiency of Tyan PSC T-630 with Microsoft Compute Cluster Server 2003

Interconnect Efficiency of Tyan PSC T-630 with Microsoft Compute Cluster Server 2003 Interconnect Efficiency of Tyan PSC T-630 with Microsoft Compute Cluster Server 2003 Josef Pelikán Charles University in Prague, KSVI Department, Josef.Pelikan@mff.cuni.cz Abstract 1 Interconnect quality

More information

HPC Wales Skills Academy Course Catalogue 2015

HPC Wales Skills Academy Course Catalogue 2015 HPC Wales Skills Academy Course Catalogue 2015 Overview The HPC Wales Skills Academy provides a variety of courses and workshops aimed at building skills in High Performance Computing (HPC). Our courses

More information

Using an MPI Cluster in the Control of a Mobile Robots System

Using an MPI Cluster in the Control of a Mobile Robots System Using an MPI Cluster in the Control of a Mobile Robots System Mohamed Salim LMIMOUNI, Saïd BENAISSA, Hicham MEDROMI, Adil SAYOUTI Equipe Architectures des Systèmes (EAS), Laboratoire d Informatique, Systèmes

More information

PRACE hardware, software and services. David Henty, EPCC, d.henty@epcc.ed.ac.uk

PRACE hardware, software and services. David Henty, EPCC, d.henty@epcc.ed.ac.uk PRACE hardware, software and services David Henty, EPCC, d.henty@epcc.ed.ac.uk Why? Weather, Climatology, Earth Science degree of warming, scenarios for our future climate. understand and predict ocean

More information

Scientific Computing Programming with Parallel Objects

Scientific Computing Programming with Parallel Objects Scientific Computing Programming with Parallel Objects Esteban Meneses, PhD School of Computing, Costa Rica Institute of Technology Parallel Architectures Galore Personal Computing Embedded Computing Moore

More information

Bulletin. Introduction. Dates and Venue. History. Important Dates. Registration

Bulletin. Introduction. Dates and Venue. History. Important Dates. Registration Bulletin Introduction The International Conference on Computing in High Energy and Nuclear Physics (CHEP) is a major series of international conferences for physicists and computing professionals from

More information

A short introduction to RWTH

A short introduction to RWTH A short introduction to RWTH Location of RWTH Aachen University RWTH Mainbuilding Development of RWTH Aachen University until 2004 Students Foreign Students Polytechnical School Faculty of Architecture

More information

Design and Optimization of OpenFOAM-based CFD Applications for Hybrid and Heterogeneous HPC Platforms

Design and Optimization of OpenFOAM-based CFD Applications for Hybrid and Heterogeneous HPC Platforms Design and Optimization of OpenFOAM-based CFD Applications for Hybrid and Heterogeneous HPC Platforms Amani AlOnazi, David E. Keyes, Alexey Lastovetsky, Vladimir Rychkov Extreme Computing Research Center,

More information

INTEL Software Development Conference - LONDON 2015. High Performance Computing - BIG DATA ANALYTICS - FINANCE. Final version

INTEL Software Development Conference - LONDON 2015. High Performance Computing - BIG DATA ANALYTICS - FINANCE. Final version INTEL Software Development Conference - LONDON 2015 High Performance Computing - BIG DATA ANALYTICS - FINANCE Final version London, Canary Wharf December 10 th & 11 th 2015 Level39, One Canada Square The

More information

Performance Engineering of the Community Atmosphere Model

Performance Engineering of the Community Atmosphere Model Performance Engineering of the Community Atmosphere Model Patrick H. Worley Oak Ridge National Laboratory Arthur A. Mirin Lawrence Livermore National Laboratory 11th Annual CCSM Workshop June 20-22, 2006

More information

Statistical Computing / Computational Statistics

Statistical Computing / Computational Statistics Interactive Graphics for Statistics: Principles and Examples Augsburg, May 31., 2006 Department of Computational Statistics and Data Analysis, Augsburg University, Germany Statistical Computing / Computational

More information

Scalability and Classifications

Scalability and Classifications Scalability and Classifications 1 Types of Parallel Computers MIMD and SIMD classifications shared and distributed memory multicomputers distributed shared memory computers 2 Network Topologies static

More information

Multicore Parallel Computing with OpenMP

Multicore Parallel Computing with OpenMP Multicore Parallel Computing with OpenMP Tan Chee Chiang (SVU/Academic Computing, Computer Centre) 1. OpenMP Programming The death of OpenMP was anticipated when cluster systems rapidly replaced large

More information

Fire Simulations in Civil Engineering

Fire Simulations in Civil Engineering Contact: Lukas Arnold l.arnold@fz- juelich.de Nb: I had to remove slides containing input from our industrial partners. Sorry. Fire Simulations in Civil Engineering 07.05.2013 Lukas Arnold Fire Simulations

More information

Lehrstuhl für Rechnertechnik und Rechnerorganisation (LRR-TUM) Annual Report 1998/1999

Lehrstuhl für Rechnertechnik und Rechnerorganisation (LRR-TUM) Annual Report 1998/1999 Research Report Series Lehrstuhl für Rechnertechnik und Rechnerorganisation (LRR-TUM) Technische Universität München http://wwwbode.informatik.tu-muenchen.de/ Editor: Prof. Dr. A. Bode Vol. 18 Lehrstuhl

More information

An Open MPI-based Cloud Computing Service Architecture

An Open MPI-based Cloud Computing Service Architecture An Open MPI-based Cloud Computing Service Architecture WEI-MIN JENG and HSIEH-CHE TSAI Department of Computer Science Information Management Soochow University Taipei, Taiwan {wjeng, 00356001}@csim.scu.edu.tw

More information

Turbomachinery CFD on many-core platforms experiences and strategies

Turbomachinery CFD on many-core platforms experiences and strategies Turbomachinery CFD on many-core platforms experiences and strategies Graham Pullan Whittle Laboratory, Department of Engineering, University of Cambridge MUSAF Colloquium, CERFACS, Toulouse September 27-29

More information

A Performance Study of Load Balancing Strategies for Approximate String Matching on an MPI Heterogeneous System Environment

A Performance Study of Load Balancing Strategies for Approximate String Matching on an MPI Heterogeneous System Environment A Performance Study of Load Balancing Strategies for Approximate String Matching on an MPI Heterogeneous System Environment Panagiotis D. Michailidis and Konstantinos G. Margaritis Parallel and Distributed

More information

The PHI solution. Fujitsu Industry Ready Intel XEON-PHI based solution. SC2013 - Denver

The PHI solution. Fujitsu Industry Ready Intel XEON-PHI based solution. SC2013 - Denver 1 The PHI solution Fujitsu Industry Ready Intel XEON-PHI based solution SC2013 - Denver Industrial Application Challenges Most of existing scientific and technical applications Are written for legacy execution

More information

Accelerating CFD using OpenFOAM with GPUs

Accelerating CFD using OpenFOAM with GPUs Accelerating CFD using OpenFOAM with GPUs Authors: Saeed Iqbal and Kevin Tubbs The OpenFOAM CFD Toolbox is a free, open source CFD software package produced by OpenCFD Ltd. Its user base represents a wide

More information

Monitoring Message Passing Applications in the Grid

Monitoring Message Passing Applications in the Grid Monitoring Message Passing Applications in the Grid with GRM and R-GMA Norbert Podhorszki and Peter Kacsuk MTA SZTAKI, Budapest, H-1528 P.O.Box 63, Hungary pnorbert@sztaki.hu, kacsuk@sztaki.hu Abstract.

More information

Introduction to High Performance Cluster Computing. Cluster Training for UCL Part 1

Introduction to High Performance Cluster Computing. Cluster Training for UCL Part 1 Introduction to High Performance Cluster Computing Cluster Training for UCL Part 1 What is HPC HPC = High Performance Computing Includes Supercomputing HPCC = High Performance Cluster Computing Note: these

More information

C u r r i c u l u m V i t a e György Vaszil

C u r r i c u l u m V i t a e György Vaszil C u r r i c u l u m V i t a e György Vaszil May, 2011 Personal Family status: Languages: Married, father of three children (seven and five years, 21 months) English, German, Hungarian (mother tongue) Education

More information

Microsoft Research Worldwide Presence

Microsoft Research Worldwide Presence Microsoft Research Worldwide Presence MSR India MSR New England Redmond Redmond, Washington Sept, 1991 San Francisco, California Jun, 1995 Cambridge, United Kingdom July, 1997 Beijing, China Nov, 1998

More information

Numerical Calculation of Laminar Flame Propagation with Parallelism Assignment ZERO, CS 267, UC Berkeley, Spring 2015

Numerical Calculation of Laminar Flame Propagation with Parallelism Assignment ZERO, CS 267, UC Berkeley, Spring 2015 Numerical Calculation of Laminar Flame Propagation with Parallelism Assignment ZERO, CS 267, UC Berkeley, Spring 2015 Xian Shi 1 bio I am a second-year Ph.D. student from Combustion Analysis/Modeling Lab,

More information

and RISC Optimization Techniques for the Hitachi SR8000 Architecture

and RISC Optimization Techniques for the Hitachi SR8000 Architecture 1 KONWIHR Project: Centre of Excellence for High Performance Computing Pseudo-Vectorization and RISC Optimization Techniques for the Hitachi SR8000 Architecture F. Deserno, G. Hager, F. Brechtefeld, G.

More information

The Bucharest Academy of Economic Studies, Romania E-mail: ppaul@ase.ro E-mail: catalin.boja@ie.ase.ro

The Bucharest Academy of Economic Studies, Romania E-mail: ppaul@ase.ro E-mail: catalin.boja@ie.ase.ro Paul Pocatilu 1 and Ctlin Boja 2 1) 2) The Bucharest Academy of Economic Studies, Romania E-mail: ppaul@ase.ro E-mail: catalin.boja@ie.ase.ro Abstract The educational process is a complex service which

More information

Software Distributed Shared Memory Scalability and New Applications

Software Distributed Shared Memory Scalability and New Applications Software Distributed Shared Memory Scalability and New Applications Mats Brorsson Department of Information Technology, Lund University P.O. Box 118, S-221 00 LUND, Sweden email: Mats.Brorsson@it.lth.se

More information

Software services competence in research and development activities at PSNC. Cezary Mazurek PSNC, Poland

Software services competence in research and development activities at PSNC. Cezary Mazurek PSNC, Poland Software services competence in research and development activities at PSNC Cezary Mazurek PSNC, Poland Workshop on Actions for Better Participation of New Member States to FP7-ICT Timişoara, 18/19-03-2010

More information

Dell High-Performance Computing Clusters and Reservoir Simulation Research at UT Austin. http://www.dell.com/clustering

Dell High-Performance Computing Clusters and Reservoir Simulation Research at UT Austin. http://www.dell.com/clustering Dell High-Performance Computing Clusters and Reservoir Simulation Research at UT Austin Reza Rooholamini, Ph.D. Director Enterprise Solutions Dell Computer Corp. Reza_Rooholamini@dell.com http://www.dell.com/clustering

More information

High Performance Computing

High Performance Computing High Performance Computing Trey Breckenridge Computing Systems Manager Engineering Research Center Mississippi State University What is High Performance Computing? HPC is ill defined and context dependent.

More information

Multi-core Curriculum Development at Georgia Tech: Experience and Future Steps

Multi-core Curriculum Development at Georgia Tech: Experience and Future Steps Multi-core Curriculum Development at Georgia Tech: Experience and Future Steps Ada Gavrilovska, Hsien-Hsin-Lee, Karsten Schwan, Sudha Yalamanchili, Matt Wolf CERCS Georgia Institute of Technology Background

More information

MEGWARE HPC Cluster am LRZ eine mehr als 12-jährige Zusammenarbeit. Prof. Dieter Kranzlmüller (LRZ)

MEGWARE HPC Cluster am LRZ eine mehr als 12-jährige Zusammenarbeit. Prof. Dieter Kranzlmüller (LRZ) MEGWARE HPC Cluster am LRZ eine mehr als 12-jährige Zusammenarbeit Prof. Dieter Kranzlmüller (LRZ) LRZ HPC-Systems at the End of the UNIX-Era (Years 2000-2002) German national supercomputer Hitachi SR800

More information

Supercomputing 2004 - Status und Trends (Conference Report) Peter Wegner

Supercomputing 2004 - Status und Trends (Conference Report) Peter Wegner (Conference Report) Peter Wegner SC2004 conference Top500 List BG/L Moors Law, problems of recent architectures Solutions Interconnects Software Lattice QCD machines DESY @SC2004 QCDOC Conclusions Technical

More information

DR AYŞE KÜÇÜKYILMAZ. Imperial College London Personal Robotics Laboratory Department of Electrical and Electronic Engineering SW7 2BT London UK

DR AYŞE KÜÇÜKYILMAZ. Imperial College London Personal Robotics Laboratory Department of Electrical and Electronic Engineering SW7 2BT London UK DR AYŞE KÜÇÜKYILMAZ Imperial College London Personal Robotics Laboratory Department of Electrical and Electronic Engineering SW7 2BT London UK http://home.ku.edu.tr/~akucukyilmaz a.kucukyilmaz@imperial.ac.uk

More information

Collaborative and Interactive CFD Simulation using High Performance Computers

Collaborative and Interactive CFD Simulation using High Performance Computers Collaborative and Interactive CFD Simulation using High Performance Computers Petra Wenisch, Andre Borrmann, Ernst Rank, Christoph van Treeck Technische Universität München {wenisch, borrmann, rank, treeck}@bv.tum.de

More information

Cluster Computing at HRI

Cluster Computing at HRI Cluster Computing at HRI J.S.Bagla Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019. E-mail: jasjeet@mri.ernet.in 1 Introduction and some local history High performance computing

More information

Computational Engineering Programs at the University of Erlangen-Nuremberg

Computational Engineering Programs at the University of Erlangen-Nuremberg Computational Engineering Programs at the University of Erlangen-Nuremberg Ulrich Ruede Lehrstuhl für Simulation, Institut für Informatik Universität Erlangen http://www10.informatik.uni-erlangen.de/ ruede

More information

Building an Inexpensive Parallel Computer

Building an Inexpensive Parallel Computer Res. Lett. Inf. Math. Sci., (2000) 1, 113-118 Available online at http://www.massey.ac.nz/~wwiims/rlims/ Building an Inexpensive Parallel Computer Lutz Grosz and Andre Barczak I.I.M.S., Massey University

More information

Scheduling and Load Balancing in the Parallel ROOT Facility (PROOF)

Scheduling and Load Balancing in the Parallel ROOT Facility (PROOF) Scheduling and Load Balancing in the Parallel ROOT Facility (PROOF) Gerardo Ganis CERN E-mail: Gerardo.Ganis@cern.ch CERN Institute of Informatics, University of Warsaw E-mail: Jan.Iwaszkiewicz@cern.ch

More information

walberla: A software framework for CFD applications on 300.000 Compute Cores

walberla: A software framework for CFD applications on 300.000 Compute Cores walberla: A software framework for CFD applications on 300.000 Compute Cores J. Götz (LSS Erlangen, jan.goetz@cs.fau.de), K. Iglberger, S. Donath, C. Feichtinger, U. Rüde Lehrstuhl für Informatik 10 (Systemsimulation)

More information

Agenda. HPC Software Stack. HPC Post-Processing Visualization. Case Study National Scientific Center. European HPC Benchmark Center Montpellier PSSC

Agenda. HPC Software Stack. HPC Post-Processing Visualization. Case Study National Scientific Center. European HPC Benchmark Center Montpellier PSSC HPC Architecture End to End Alexandre Chauvin Agenda HPC Software Stack Visualization National Scientific Center 2 Agenda HPC Software Stack Alexandre Chauvin Typical HPC Software Stack Externes LAN Typical

More information

COMPUTER SCIENCE. FACULTY: Jennifer Bowen, Chair Denise Byrnes, Associate Chair Sofia Visa

COMPUTER SCIENCE. FACULTY: Jennifer Bowen, Chair Denise Byrnes, Associate Chair Sofia Visa FACULTY: Jennifer Bowen, Chair Denise Byrnes, Associate Chair Sofia Visa COMPUTER SCIENCE Computer Science is the study of computer programs, abstract models of computers, and applications of computing.

More information

EPA Data Center Efficiency Workshop SPEC Benchmarks. March 27, 2006 Walter Bays, President, SPEC

EPA Data Center Efficiency Workshop SPEC Benchmarks. March 27, 2006 Walter Bays, President, SPEC EPA Data Center Efficiency Workshop SPEC Benchmarks March 27, 2006 Walter Bays, President, SPEC SPEC Background Benchmark wars of the 80's RISC vs. CISC Vendors & EE Times created SPEC for better benchmarks

More information

GPGPU accelerated Computational Fluid Dynamics

GPGPU accelerated Computational Fluid Dynamics t e c h n i s c h e u n i v e r s i t ä t b r a u n s c h w e i g Carl-Friedrich Gauß Faculty GPGPU accelerated Computational Fluid Dynamics 5th GACM Colloquium on Computational Mechanics Hamburg Institute

More information

Large-Data Software Defined Visualization on CPUs

Large-Data Software Defined Visualization on CPUs Large-Data Software Defined Visualization on CPUs Greg P. Johnson, Bruce Cherniak 2015 Rice Oil & Gas HPC Workshop Trend: Increasing Data Size Measuring / modeling increasingly complex phenomena Rendering

More information

Workshop on Parallel and Distributed Scientific and Engineering Computing, Shanghai, 25 May 2012

Workshop on Parallel and Distributed Scientific and Engineering Computing, Shanghai, 25 May 2012 Scientific Application Performance on HPC, Private and Public Cloud Resources: A Case Study Using Climate, Cardiac Model Codes and the NPB Benchmark Suite Peter Strazdins (Research School of Computer Science),

More information

AT A GLANCE UNIVERSITY OF STUTTGART AN EXCELLENT CHOICE!

AT A GLANCE UNIVERSITY OF STUTTGART AN EXCELLENT CHOICE! CONNECTING BRAINS AT A GLANCE An interdisciplinary profile with key competences in the fields of engineering, natural sciences, humanities, economics, and social sciences Among the top institutions in

More information

Four Keys to Successful Multicore Optimization for Machine Vision. White Paper

Four Keys to Successful Multicore Optimization for Machine Vision. White Paper Four Keys to Successful Multicore Optimization for Machine Vision White Paper Optimizing a machine vision application for multicore PCs can be a complex process with unpredictable results. Developers need

More information

A Pattern-Based Approach to. Automated Application Performance Analysis

A Pattern-Based Approach to. Automated Application Performance Analysis A Pattern-Based Approach to Automated Application Performance Analysis Nikhil Bhatia, Shirley Moore, Felix Wolf, and Jack Dongarra Innovative Computing Laboratory University of Tennessee (bhatia, shirley,

More information

2 nd ENAEE Conference, Leuven, 16.-17. September 2013 European Master of Advanced Industrial Management in the EHEA

2 nd ENAEE Conference, Leuven, 16.-17. September 2013 European Master of Advanced Industrial Management in the EHEA Platzhalter für Bild, Bild auf Titelfolie hinter das Logo einsetzen 2 nd ENAEE Conference, Leuven, 16.-17. September 2013 European Master of Advanced Industrial Management in the EHEA Preparing Engineers

More information

Grids Computing and Collaboration

Grids Computing and Collaboration Grids Computing and Collaboration Arto Teräs CSC, the Finnish IT center for science University of Pune, India, March 12 th 2007 Grids Computing and Collaboration / Arto Teräs 2007-03-12 Slide

More information

Workshop Agenda Feb 25th 2015

Workshop Agenda Feb 25th 2015 Workshop Agenda Feb 25th 2015 Time Presenter Title 09:30 T. König Talk bwhpc Concept & bwhpc-c5 - Federated User Support Activities 09:45 R. Walter Talk bwhpc architecture (bwunicluster, bwforcluster JUSTUS,

More information

Overview of HPC Resources at Vanderbilt

Overview of HPC Resources at Vanderbilt Overview of HPC Resources at Vanderbilt Will French Senior Application Developer and Research Computing Liaison Advanced Computing Center for Research and Education June 10, 2015 2 Computing Resources

More information

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance

LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance 11 th International LS-DYNA Users Conference Session # LS-DYNA Best-Practices: Networking, MPI and Parallel File System Effect on LS-DYNA Performance Gilad Shainer 1, Tong Liu 2, Jeff Layton 3, Onur Celebioglu

More information

Building a Top500-class Supercomputing Cluster at LNS-BUAP

Building a Top500-class Supercomputing Cluster at LNS-BUAP Building a Top500-class Supercomputing Cluster at LNS-BUAP Dr. José Luis Ricardo Chávez Dr. Humberto Salazar Ibargüen Dr. Enrique Varela Carlos Laboratorio Nacional de Supercómputo Benemérita Universidad

More information

Mesh Generation and Load Balancing

Mesh Generation and Load Balancing Mesh Generation and Load Balancing Stan Tomov Innovative Computing Laboratory Computer Science Department The University of Tennessee April 04, 2012 CS 594 04/04/2012 Slide 1 / 19 Outline Motivation Reliable

More information

Designing and Building Applications for Extreme Scale Systems CS598 William Gropp www.cs.illinois.edu/~wgropp

Designing and Building Applications for Extreme Scale Systems CS598 William Gropp www.cs.illinois.edu/~wgropp Designing and Building Applications for Extreme Scale Systems CS598 William Gropp www.cs.illinois.edu/~wgropp Welcome! Who am I? William (Bill) Gropp Professor of Computer Science One of the Creators of

More information

bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 20.

bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 20. bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 20. October 2010 Richling/Kredel (URZ/RUM) bwgrid Treff WS 2010/2011 1 / 27 Course

More information

Clusters: Mainstream Technology for CAE

Clusters: Mainstream Technology for CAE Clusters: Mainstream Technology for CAE Alanna Dwyer HPC Division, HP Linux and Clusters Sparked a Revolution in High Performance Computing! Supercomputing performance now affordable and accessible Linux

More information

Recent and Future Activities in HPC and Scientific Data Management Siegfried Benkner

Recent and Future Activities in HPC and Scientific Data Management Siegfried Benkner Recent and Future Activities in HPC and Scientific Data Management Siegfried Benkner Research Group Scientific Computing Faculty of Computer Science University of Vienna AUSTRIA http://www.par.univie.ac.at

More information