PLGrid Programme: IT Platforms and Domain-Specific Solutions Developed for the National Grid Infrastructure for Polish Science
Slide 1: PL-Grid: Polish Infrastructure for Supporting Computational Science in the European Research Space. PLGrid Programme: IT Platforms and Domain-Specific Solutions Developed for the National Grid Infrastructure for Polish Science. Jacek Kitowski and Marcin Radecki, ACC Cyfronet AGH, PL-Grid Consortium. VPH-Share Meeting, Cracow.
Slide 2: Outline
- PLGrid Programme in a nutshell
- Family of the projects: PL-Grid, PLGrid Plus, PLGrid NG, PLGrid Core
- Achievements
- Selected domain solutions
- Conclusions
Slide 3: Past and Present Involvement in European Projects
- 6WINIT (IST): IPv6 Wireless Internet Initiative
- CROSSGRID (IST), coordinator, ca. 20 partners: Development of Grid Environments for Interactive Applications
- PELLUCID (IST): A Platform for Organisationally Mobile Public Employees
- GridStart (IST): Grid dissemination, standardisation, applications, roadmap
- Pro-Access (IST): Improving Access of Associated States to Advanced Concepts in Medical Informatics
- EGEE I/II/III: Enabling Grids for E-sciencE in Europe (EU FP6)
- K-WfGrid (511385): Knowledge-based Workflow System for Grid Applications
- CoreGrid (IST): European Research Network
- ViroLab (027446): A virtual laboratory for decision support in viral disease treatment
- Gredia (FP6): Grid-enabled access to rich media content
- Int.eu.grid (FP6): Interactive European Grid
- European Structural Funds (Innovative Economy): family of PLGrid projects (POIG.02.03); POWIEW (POIG.02.03): Grand Challenges computation; PLATON (POIG.02.03): Platform for Scientific Services
- gSLM (FP7) and FedSM: Service Delivery and Service Management in Grid Infrastructures and Federated Infrastructures
- MAPPER (FP7): Multiscale Applications on European e-Infrastructures
- UrbanFlood (FP7): Real-time Emergency Management
- VPH-Share (FP7): Virtual Physiological Human
- EUSAS (EDA A-0676-RT-GC): European Urban Simulation for Asymmetric Scenarios
- EGI-InSPIRE
- IS-EPOS: Digital Research Space of Induced Seismicity for EPOS
- CTA Collaboration
- VirtROLL (RFCS-CT): Virtual Strip Rolling Mill
- PRACE 1, 2
- EGI-Engage (2015-)
- INDIGO-DataCloud (2015-)
Slide 4: TOP500 Polish Sites (rank, cores, Rmax and Rpeak in TFlop/s were tracked across the June 2011 to Nov 2014 editions)
- Cyfronet, Poland: Zeus, Cluster Platform SL390/BL2x220, Xeon X5650 6C 2.66 GHz, InfiniBand QDR, NVIDIA 2090 (Hewlett-Packard); core count grew from 11,694 through 23,932 to 25,468 across editions
- ICM Warsaw, Poland: BlueGene/Q, Power BQC 16C 1.60 GHz, custom interconnect (IBM); 16,384 cores
- TASK Gdańsk: GALERA PLUS, Action Xeon HP BL2x220/BL490 E5345/L5640, InfiniBand (ACTION)
- WCSS Wrocław: Cluster Platform 3000 BL2x220, Xeon X56xx 2.66 GHz, InfiniBand (Hewlett-Packard)
- PCSS, Poland: Rackable C1103-G15, Opteron 2.40 GHz, InfiniBand QDR (SGI)
Earlier commercial Polish entries: Allegro and Nasza Klasa (2008), a telecommunications company (2008, 2010).
Slide 5: PL-Grid Consortium
Development based on:
- Polish scientific communities: ~75% of highly rated Polish publications come from 5 communities
- close international collaboration (EGI, ...) and previous projects (FP5, FP6, FP7, EDA)
- European Regional Development Fund as part of the Innovative Economy Programme
- national network infrastructure available: the PIONIER national network
- national project computing resources: Top500 list
PL-Grid Consortium members: 5 Polish High Performance Computing centres, representing the communities, coordinated by ACC Cyfronet AGH.
Slide 6: Implementation of the PL-Grid Programme, adopted by the Consortium, since January 2007: a family of projects funded by the Operational Programme Innovative Economy.
Slide 7: Family of PL-Grid projects coordinated by Cyfronet
- PL-Grid: ca. 80 people involved (total, from different Polish centres). Outcome: common base infrastructure, the National Grid Infrastructure (NGI_PL). Resources: 230 TFlops, 3.6 PB.
- PLGrid Plus: ca. 120 people involved. Outcome: focus on users (training, helpdesk); domain-specific solutions for 13 domains (specific computing environments). Extension of resources and services by 500 TFlops and 4.4 PB.
- PLGrid NG: expected outcome: optimisation of resource usage, training; extension of domain-specific solutions by 14 additional domains; extension of resources and services by ca. 8 TFlops and some PB.
- PLGrid Core (Cyfronet only): expected outcome: a competence centre, end-user services, the Open Science paradigm, large workflow applications, data farming mass computation. Extension of resources and services by ca. ... TFlops and 25 PB.
Slide 9: Supercomputer Zeus
Components: Xeon, 23 TB RAM, 169 TFlops; Opteron, 26 TB, 61 TFlops; Xeon, 3.6 TB, 136 TFlops; Xeon, 6 TB, 8 TFlops.
Zeus statistics for 2012 (2013 values in parentheses), with users' needs taken into account:
- almost 8 mln jobs, 21,000+ daily
- 80 mln CPU hours, i.e. 9130 CPU-years
- 800+ active users
- 100 PB+ of scratch usage
- longest job: 90 days
- biggest job: 576 (1024) cores
In 2014: 7.7 mln jobs, 350 PB of scratch usage, a biggest job of 2400 cores, and ca. 50% of CPU time went to multicore jobs.
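As a quick arithmetic cross-check of those figures (a worked conversion, not taken from the slide): 80 million CPU-hours divided by the 8760 hours in a year does come out at roughly the 9130 CPU-years quoted:

\[
\frac{80\,000\,000\ \text{CPU-hours}}{24 \times 365\ \text{h/year}} = \frac{80\,000\,000}{8760} \approx 9132\ \text{CPU-years}
\]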
Slide 10: Summary of Projects' Results (up to date)
- Close collaboration between partners and research communities
- Development of the IT PL-Grid infrastructure and ecosystem
- Development of tools, environments and middleware: services, cloud integration, HPC, data-intensive computing, instruments
- Development of 27 domain-specific solutions
Slide 11: New HPC Asset
New cluster Prometheus; contract signed. Some data:
- Rpeak = ... TFlops (performance Rmax = ... TFlops)
- 1728 servers, 41,472 Haswell cores
- 216 TB RAM (DDR4)
- 10 PB of disks, 180 GB/s
- HP Apollo 8000
Grand opening May 27, 2015 (this week!). In the contest for Prometheus graphics, the winning project was chosen from 42 works.
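The peak figure did not survive transcription, but it can be estimated from the core count the slide does give. A back-of-envelope sketch, assuming 2.5 GHz cores (the clock rate is an assumption, not stated on the slide) and the 16 double-precision FLOPs per cycle that Haswell's two AVX2 FMA units deliver:

\[
R_{\text{peak}} = 41\,472\ \text{cores} \times 2.5\ \text{GHz} \times 16\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 1.66\ \text{PFlops}
\]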
Slide 12: Summary of Projects' Results (up to date)
Facilitation of community participation in international collaboration:
- EGI Council, EGI Executive Board
- FP7 (VPH-Share, RFCS VirtROLL, ...)
- EDA EUSAS
- EGI-InSPIRE, FedSM, EGI-Engage, INDIGO-DataCloud, EPOS, CTA, ...
Publications:
- 26 papers on PL-Grid Project results
- papers on PLGrid Plus Project results: 147 authors, 76 reviewers
- conference papers, journal papers and book chapters counted for PL-Grid, PLGrid Plus, PLGrid Core and PLGrid NG
Slide 13: Journal Publications (subjective selection), with impact factors:
Phys. Lett. B (6.019), J. High Energy Phys. (6.22), Astronomy & Astrophys. (4.479), Inorganic Chem. (4.794), J. Org. Chem. (4.638), Opt. Lett. (3.179), Appl. Phys. Lett., J. Comput. Chem. (3.601), J. Phys. Chem. B (3.377), Soft Matter (4.151), Int. J. Hydrogen Energy (2.93), Physica B (1.133), J. Chem. Phys. (3.122), J. Phys. Chem. Lett. (6.687), Phys. Chem. Chem. Phys. (4.638), Fuel Processing Techn. (3.019), J. Magn. & Magn. Mat. (2.002), Eur. J. Inorg. Chem. (2.965), Chem. Phys. Lett. (1.991), Phys. Rev. B (3.664), Eur. Phys. J. (2.421), Future Gen. Comp. Syst. (2.639), J. Phys. Chem. C (4.835), Crystal Growth & Design (4.558), Macromolecules (5.927), Astrophys. J. Lett. (5.602), Phys. Rev. Lett. (7.728), J. Chem. Theor. Appl. (5.31), Astrophys. J. (6.28), Chem. Physics (2.028), Molec. Pharmaceutics (4.787), Eur. J. Pharmacology (2.684), Energy (4.159), Carbon (6.16), J. Biogeography (4.969), Electrochem. Comm. (4.287), J. Magn. & Magn. Mat. (1.892)
Conferences: Cracow Grid Workshop (since 2001), KU KDM (since 2008)
Slide 14: Summary of Projects' Results (up to date)
[Charts: number of infrastructure users (all accounts, registered infrastructure users, employees) and number of Grid users of the global services gLite, UNICORE and QosCosGrid.]
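Since the slide counts users by middleware stack, a small illustration of what using one of them involves may help. Below is a minimal sketch of submitting work through gLite, the most used of the three: a job is described in a JDL file and handed to the workload management system. The JDL attributes shown are standard gLite ones, but the executable, arguments and sandbox file names are invented for illustration, and the snippet assumes a machine with a gLite User Interface installed and a valid VOMS proxy.

```python
import subprocess

# An illustrative gLite job description (JDL). The attribute names are
# standard JDL; the executable and sandbox files are hypothetical.
jdl = """
Executable    = "run_simulation.sh";
Arguments     = "--steps 1000";
StdOutput     = "job.out";
StdError      = "job.err";
InputSandbox  = {"run_simulation.sh", "input.dat"};
OutputSandbox = {"job.out", "job.err", "results.dat"};
"""

with open("job.jdl", "w") as f:
    f.write(jdl)

# Delegate the proxy automatically (-a) and submit the JDL to the WMS.
# Requires a gLite UI and a valid grid proxy certificate.
subprocess.run(["glite-wms-job-submit", "-a", "job.jdl"], check=True)
```

UNICORE and QosCosGrid expose the same submit-a-described-job idea through their own clients and job description formats.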
Slide 15: Implementation of the PL-Grid Programme: deployed IT platforms and tools, selected examples (by Cyfronet): GridSpace, InSilicoLab, Scalarm, onedata, cloud computing.