High-Performance Scientific Computing in the UK: STFC and the Hartree Centre. Mike Ashworth
1 High-Performance Scientific Computing in the UK: STFC and the Hartree Centre. Mike Ashworth, Scientific Computing Department and STFC Hartree Centre, STFC Daresbury Laboratory
2 High Performance Computing in the UK. STFC's Scientific Computing Department. STFC's Hartree Centre. Scientific Computing Highlights
3 High Performance Computing in the UK. STFC's Scientific Computing Department. STFC's Hartree Centre. Scientific Computing Highlights
4 History of UK academic HPC provision. Chart: Linpack Gflop/s (TOP500 list), Jan 1993 to Jan 2013, for total UK universities and total UK national provision (incl. HPCx, HECToR, Hartree, DIRAC). Growth from the Tflop/s to the Pflop/s scale tracks Moore's Law with a doubling time of about 14 months.
5 History of UK academic HPC provision. Chart: Linpack Gflop/s (TOP500 list), Jan 1993 to Jan 2013, annotated with the major systems: CSAR, HPCx, the SRIF regional centres, HECToR, and Hartree & DIRAC at the Pflop/s scale.
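The 14-month doubling time quoted for the chart above can be turned into a rough timescale for the Tera-to-Peta transition. A minimal sketch (the 14-month figure comes from the slide; the rest is simple arithmetic):

```python
import math

# Growth with a 14-month doubling time (the Moore's-Law-like rate
# quoted for UK academic HPC provision in the TOP500 chart).
doubling_months = 14

# Going from 1 Tflop/s to 1 Pflop/s is a factor of 1000.
factor = 1e15 / 1e12
months = math.log2(factor) * doubling_months
print(f"Tera -> Peta in about {months / 12:.1f} years")
```

At that rate the thousand-fold step from Tflop/s to Pflop/s takes a little under 12 years, consistent with the chart's span from the mid-1990s to the early 2010s.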
6 The UK National Supercomputing Facilities. The UK National Supercomputing Services are managed by EPSRC on behalf of the UK academic communities. HPCx ran using IBM POWER4 and POWER5. HECToR is the current system, located at Edinburgh and operated jointly by EPCC and STFC; HECToR Phase 3 is a 90,112-core Cray XE6 (660 Tflop/s Linpack). ARCHER is the new service, due for installation around Nov/Dec 2013, with service starting in early 2014.
7 DiRAC (Distributed Research utilising Advanced Computing) is an integrated supercomputing facility for UK research in theoretical modelling and HPC-based simulation in particle physics, astronomy and cosmology, funded by STFC. The flagship DiRAC system is a 6-rack IBM Blue Gene/Q: 98,304 cores, 1.0 Pflop/s Linpack, located at Edinburgh. (Image: computer-simulated glow of dark matter. Credit: Virgo.)
8 UK Tier-2 regional university centres. N8 Research Partnership: the N8 HPC centre, a £3.25M facility based at the University of Leeds, an SGI system with 5312 cores; possibly to become S6 with Cambridge & Imperial College. x86 system at the University of Southampton (IBM). GPU system at STFC RAL: an HP system with 372 Fermi GPUs, the #3 GPU system in Europe.
9 UK academic HPC pyramid. Tier-0: PRACE systems. Tier-1 (national): HECToR, Hartree, DIRAC. Tier-2 (regional universities): S6 and N8 regional clusters. Local: universities and institutes.
10 High Performance Computing in the UK. STFC's Scientific Computing Department. STFC's Hartree Centre. Scientific Computing Highlights
11 Organisation chart: HM Government (& HM Treasury), RCUK Executive Group.
12 STFC's Sites: UK Astronomy Technology Centre, Edinburgh, Scotland; Daresbury Laboratory, Daresbury Science and Innovation Campus, Warrington, Cheshire; Polaris House, Swindon, Wiltshire; Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, Oxfordshire; Chilbolton Observatory, Stockbridge, Hampshire; Isaac Newton Group of Telescopes, La Palma; Joint Astronomy Centre, Hawaii.
13 Understanding our Universe: STFC's Science Programme. Particle Physics: Large Hadron Collider (LHC), CERN (the structure and forces of nature). Ground-based Astronomy: European Southern Observatory (ESO), Chile; Very Large Telescope (VLT), Atacama Large Millimeter Array (ALMA), European Extremely Large Telescope (E-ELT), Square Kilometre Array (SKA). Space-based Astronomy: European Space Agency (ESA); Herschel, Planck, GAIA, James Webb Space Telescope (JWST); bilaterals with NASA, JAXA, etc.; STFC Space Science Technology Department. Nuclear Physics: Facility for Antiproton and Ion Research (FAIR), Germany; nuclear skills for medicine (isotopes and radiation applications), energy (nuclear power plants) and environment (nuclear waste disposal).
14 STFC's Facilities. Neutron sources: ISIS, a pulsed neutron and muon source, and the Institut Laue-Langevin (ILL), Grenoble, providing powerful insights into key areas of energy, biomedical research, climate, environment and security. High-power lasers: the Central Laser Facility, supporting applications in bioscience and nanotechnology; HiPER, demonstrating laser-driven fusion as a future source of sustainable, clean energy. Light sources: Diamond Light Source Limited (86%), providing new breakthroughs in medicine, environmental and materials science, engineering, electronics and cultural heritage; European Synchrotron Radiation Facility (ESRF), Grenoble.
15 Scientific Computing Department: major funded activities. 160 staff supporting over 7500 users. Applications development and support; compute and data facilities and services; research (over 100 publications per annum); over 3500 training days delivered per annum. Capabilities span systems administration, data services, high-performance computing, numerical analysis & software engineering, with expertise across the length and time scales from processes occurring inside atoms to environmental modelling. Director: Adrian Wander, appointed 24th July 2012.
16 Scientific Highlights. Journal of Materials Chemistry 16, no. 20 (May 2006): issue devoted to HPC in materials chemistry (esp. use of HPCx). Phys. Stat. Sol. (b) 243, no. 11 (Sept 2006): issue featuring scientific highlights of the Psi-k Network (the European network on the electronic structure of condensed matter, coordinated by our Band Theory Group). Molecular Simulation 32 (Oct/Nov 2006): special issue on applications of the DL_POLY MD program written & developed by Bill Smith (the 2nd special edition of Mol Sim on DL_POLY; the 1st was about 5 years earlier). Acta Crystallographica Section D 63, part 1 (Jan 2007): proceedings of the CCP4 Study Weekend on protein crystallography. The Aeronautical Journal 111, no. 1117 (March 2007): UK Applied Aerodynamics Consortium special edition. Proc. Roy. Soc. A 467, no. 2131 (July 2011): HPC in the Chemistry and Physics of Materials. Last 5 years' metrics: 67 grants of order £13M; 422 refereed papers and 275 presentations. Three senior staff have joint appointments with universities; seven staff have visiting professorships; six members of staff have been awarded Senior Fellowships or Fellowships under the Research Councils' individual merit scheme; five staff are Fellows of senior learned societies.
17 High Performance Computing in the UK. STFC's Scientific Computing Department. STFC's Hartree Centre. Scientific Computing Highlights
18 Opportunities. Political: demonstrate growth through economic and societal impact from investments in HPC. Business: engage industry in HPC simulation for competitive advantage. Scientific: build multi-scale, multiphysics coupled apps; tackle complex Grand Challenge problems. Technical: exploit new Petascale and Exascale architectures; adapt to multi-core and hybrid architectures.
19 Government Investment in e-infrastructure. Aug 2011: Prime Minister David Cameron confirmed £10M investment in STFC's Daresbury Laboratory, £7.5M of it for computing infrastructure. 3rd Oct 2011: Chancellor George Osborne announced £145M for e-infrastructure at the Conservative Party Conference. 4th Oct 2011: Science Minister David Willetts indicated £30M investment in the Hartree Centre. 30th Mar 2012: John Womersley (CEO, STFC) and Simon Pendlebury (IBM) signed a major collaboration at the Hartree Centre.
20 Intel collaboration. STFC and Intel have signed an MOU to develop and test technology that will be required to power the supercomputers of tomorrow. Karl Solchenbach, Director of European Exascale Computing at Intel, said: "We will use STFC's leading expertise in scalable applications to address the challenges of exascale computing in a co-design approach."
21 Tildesley Report. BIS commissioned a report on the strategic vision for a UK e-infrastructure for science and business. Prof. Dominic Tildesley led the team, which included representatives from universities, Research Councils, industry and JANET. The scope included compute, software, data, networks, training and security. Mike Ashworth, Richard Blake and John Bancroft from STFC provided input. Published in December; Google the title to download it from the BIS website.
22 Hartree Centre capital spend 2011/12. Approximate capital spend (£M): BlueGene/Q 12; iDataplex 6; Data Intensive 6; Disk & Tape 6; Visualization; Infrastructure. Total £37.5M.
23 Hartree Centre IBM BG/Q Blue Joule. TOP500: #18 in the Jun 2013 list, #6 in Europe, #1 system in the UK. 6 racks, 6144 nodes, 98,304 cores (16 cores & 16 GB per node); 1.25 Pflop/s peak. 1 rack to be configured as BGAS (Blue Gene Advanced Storage): 16,384 cores and up to 1 PB of Flash memory.
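The Blue Joule numbers above are internally consistent. A quick sanity check, assuming the standard Blue Gene/Q configuration (1024 nodes per rack; 1.6 GHz A2 cores doing 8 flops/cycle via the quad FMA unit), neither of which is stated on the slide:

```python
racks = 6
nodes_per_rack = 1024      # standard Blue Gene/Q rack (assumed)
cores_per_node = 16        # from the slide

nodes = racks * nodes_per_rack
cores = nodes * cores_per_node
print(nodes, cores)        # 6144 98304, matching the slide

# Peak: 8 double-precision flops/cycle per core at 1.6 GHz
peak_pflops = cores * 8 * 1.6e9 / 1e15
print(f"{peak_pflops:.2f} Pflop/s peak")   # ~1.26, vs the quoted 1.25
```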
24 Hartree Centre IBM iDataplex Blue Wonder. TOP500: #222 in the Jun 2013 list. 8192 cores, 170 Tflop/s peak; each node has 16 cores across 2 sockets of Intel Sandy Bridge (AVX etc.). 252 nodes with 32 GB; 256 nodes with 128 GB; 4 nodes with 256 GB; 12 nodes with X3090 GPUs. ScaleMP virtualization software provides up to 4 TB of virtual shared memory.
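The 170 Tflop/s figure likewise follows from the AVX width: Sandy Bridge can issue one 4-wide double-precision add and one 4-wide multiply per cycle, i.e. 8 flops/cycle per core. A sketch, assuming a 2.6 GHz clock (the clock speed is not given on the slide):

```python
cores = 8192               # from the slide
flops_per_cycle = 8        # 4-wide AVX: one add + one multiply per cycle
clock_hz = 2.6e9           # assumed; not stated on the slide

peak_tflops = cores * flops_per_cycle * clock_hz / 1e12
print(f"{peak_tflops:.0f} Tflop/s peak")   # ~170, matching the slide
```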
25 Hartree Centre Datastore. Storage: 5.76 PB usable disk, 15 PB tape store.
26 Hartree Centre Visualization. Four major facilities: Hartree Vis-1, a large visualization wall supporting stereo; Hartree Vis-2, a large surround and immersive visualization system; Hartree ISIC, a large visualization wall supporting stereo at ISIC; Hartree Atlas, a large visualization wall supporting stereo in the Atlas Building at RAL, part of the Harwell Imaging Partnership (HIP). Virtalis is the hardware supplier.
27 Hartree Centre Mission Hartree Centre at the STFC Daresbury Science and Innovation Campus will be an International Centre of Excellence for Computational Science and Engineering. It will bring together academic, government and industry communities and focus on multi-disciplinary, multi-scale, efficient and effective computation. The goal is to provide a step-change in modelling capabilities for strategic themes including energy, life sciences, the environment, materials and security.
28 Douglas Rayner Hartree: Father of Computational Science. The Hartree-Fock method; the Appleton-Hartree equation; the Differential Analyser; numerical analysis. Douglas Rayner Hartree PhD, FRS: "It may well be that the high-speed digital computer will have as great an influence on civilization as the advent of nuclear power" (1946). Photo: Douglas Hartree with Phyllis Nicolson at the Hartree Differential Analyser at Manchester University.
29 Responding to the Challenges. Expert optimisation of existing software: profiling and identification of hotspots; use of libraries; tackling issues associated with large core counts, serial bottlenecks, redesign of I/O, etc. Application co-design: software and hardware must evolve together, which requires close collaboration between hardware architects, computer scientists, and application software experts. Re-engineering software requires a specialised development platform: highest possible core count, configured and operated for software development (interactive use, profiling, debugging, etc.). The Hartree Centre: a Research Collaboratory with IBM.
30 Government Investment in e-infrastructure. 1st Feb 2013: Chancellor George Osborne and Science Minister David Willetts opened the Hartree Centre and announced a further £185M of funding for e-infrastructure: £19M for the Hartree Centre for power-efficient computing technologies, and £11M for the UK's participation in the Square Kilometre Array. This investment forms part of the £600 million investment for science announced by the Chancellor at the Autumn Statement. "By putting our money into science we are supporting the economy of tomorrow." George Osborne opens the Hartree Centre, 1st February 2013.
31 Collaboration with Unilever. 1st Feb 2013: also announced was a key partnership with Unilever on the development of Computer Aided Formulation (CAF). Months of laboratory bench work can be completed within minutes by a tool designed to run as an app on a tablet or laptop connected remotely to the Blue Joule supercomputer at Daresbury. The aggregation of surfactant molecules into micelles is an important process in product formulation; this tool predicts the behaviour and structure of different concentrations of liquid compounds, both in the bottle and in use, and helps researchers plan fewer and more focussed experiments. Photo: John Womersley, CEO STFC, and Jim Crilly, Senior Vice President, Strategic Science Group at Unilever.
32 Power-efficient technologies "shopping list" for the £19M investment: a system with the latest NVIDIA Kepler GPUs; a system based on Intel Xeon Phi; a system based on ARM processors; an active storage project using IBM BGAS; a dataflow architecture based on FPGAs; an instrumented machine room. Systems will be made available for development and evaluation projects with Hartree Centre partners from industry, government and academia.
33 High Performance Computing in the UK. STFC's Scientific Computing Department. STFC's Hartree Centre. Scientific Computing Highlights
34 Hartree Centre Projects in the First 12 Months. 56 projects; 200M BG/Q hours; 14M iDataplex hours. Also 2.5% of BG/Q time contributed to PRACE DECI calls.
35 Met Office Unified Model. "Unified" in the sense of using the same code for weather forecasting and for climate research. Combines dynamics on a lat/long grid with physics (radiation, clouds, precipitation, convection, etc.). Also couples to other models (ocean, sea-ice, land surface, chemistry/aerosols, etc.) for improved forecasting and earth system modelling. Chart: performance of the UM (dark blue) versus a basket of models (ECMWF, USA, France, Germany, Japan, Canada, Australia), measured by 3-day surface pressure errors. Current dynamical core: New Dynamics, Davies et al (2005); ENDGame to be operational in 2013. From Nigel Wood, Met Office.
36 Limits to Scalability of the UM. The current version (New Dynamics) has limited scalability. The latest ENDGame code improves this, but a more radical solution is required for Petascale and beyond. The problem lies with the spacing of the lat/long grid at the poles: at 25 km resolution, grid spacing near the poles is 75 m; at 10 km this reduces to 12 m! Chart: scaling at 17 km resolution versus POWER7 nodes, against perfect scaling. From Nigel Wood, Met Office.
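The pole-spacing problem above is easy to reproduce: on a regular lat/long grid the zonal spacing shrinks with the cosine of latitude, so the ring nearest the pole is dramatically finer than the equator. A rough sketch (the Earth radius and the half-interval offset of the first ring are assumptions; slightly different grid conventions yield the 75 m and 12 m figures quoted on the slide, but the order of magnitude and the quadratic shrinkage are the same):

```python
import math

R = 6371.0e3          # mean Earth radius in metres (assumed)

def pole_spacing(dx_eq):
    """Zonal grid spacing on the latitude ring nearest the pole,
    for a regular lat/long grid with equatorial spacing dx_eq."""
    n_lon = round(2 * math.pi * R / dx_eq)   # points around a latitude circle
    lat = math.pi / 2 - 0.5 * dx_eq / R      # first ring, half an interval from the pole
    return 2 * math.pi * R * math.cos(lat) / n_lon

for dx in (25e3, 10e3):
    print(f"{dx / 1e3:.0f} km grid -> {pole_spacing(dx):.0f} m near the poles")
```

Note the quadratic behaviour: halving the equatorial spacing roughly quarters the polar spacing, which is why the time-step constraint near the poles worsens so quickly with resolution.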
37 Challenging Solutions. GUNG-HO (Globally Uniform Next Generation Highly Optimized; "working together harmoniously") targets a brand new dynamical core. Scalability: choose a globally uniform grid which has no poles (candidate grids: triangles, cubed-sphere, yin-yang). Speed: maintain performance at high & low resolution and for high & low core counts. Accuracy: need to maintain the standing of the model. Space weather implies a 600 km deep model. A five-year project; operational weather forecasts around 2020! From Nigel Wood, Met Office.
38 Summary. HPC is flourishing in the UK. New Government investment supports a wide range of e-infrastructure projects (incl. data centres, networks, ARCHER, Hartree). The Hartree Centre is a new centre developing and deploying new software for industry, government and academia. We are very happy to talk about partnership with centres in China. For more information on the Hartree Centre see the
39 If you have been, thank you for listening. Mike Ashworth