A National Computing Grid: FGI

Vera Hansper, Ulf Tigerstedt, Kimmo Mattila, Luis Alves, 3/10/2012

Grids in Finland: a short history

In the beginning, we had M-Grid
- Interest in Grid technology rose in Finland during 2003
- A consortium of 7 universities, HIP and CSC was formed, which successfully obtained funding for the first Finnish computing grid: M-Grid
- The effort was driven by CSC and Kai Nordlund (HU)
- M-Grid was operational from 2005 to 2011
- 9 sites
- Theoretical total computing capacity of ~2.5 TFlops
- The infrastructure had aged significantly by the end of 2008

Then, FGI was born
- The second-generation M-Grid had been planned since ~2009, with many discussions about upgrading the infrastructure
- Pekka Lehtovuori (CSC) and Kai Nordlund sought funding; the application was made in October 2010
- The FIRI grant was approved at the beginning of 2011; Academy funding totals 1.38M
- The consortium consists of Aalto University, University of Helsinki, Lappeenranta University of Technology, Tampere University of Technology, University of Eastern Finland, University of Jyväskylä, University of Oulu, University of Turku, Åbo Akademi University and CSC
- CSC coordinates the activity; the members host the clusters

What was ordered
- Standard nodes (408): HP SG7 scale-out, dual 6-core 2.67 GHz Xeon X5650, 24 GB memory (min.)
- Big-memory nodes (4): HP ProLiant DL580 G7 server, 1 TB memory
- GPGPU nodes (52): 2 Nvidia Tesla cards in a standard compute node
- Theoretical peak computing capacity of ~154 TFlops
- Disk servers: total storage capacity of about 1 PB
- QDR InfiniBand and Gigabit Ethernet for interconnect and network

Getting the stuff, Installation and Acceptance
- Delivery started early November, and installation at the sites was done within one to two days of delivery
- Operating system: Scientific Linux 6
- Scheduler: SLURM

And what there is...
- Aalto: 112 nodes, 8 GPGPU nodes, two 1 TB big-memory nodes
- Lappeenranta: 16 nodes
- Eastern Finland: 64 nodes
- Helsinki: 49 nodes, 20 GPGPU nodes, one 1 TB big-memory node
- Jyväskylä: 48 nodes, 8 GPGPU nodes
- Oulu: 30 nodes
- Tampere (TUT): 37 nodes, 8 GPGPU nodes, one 1 TB big-memory node
- Turku: 20 nodes
- Åbo Akademi: 8 GPGPU nodes
- CSC: 24 nodes (with 96 GB memory)

Systems online
- Local use is open at all sites (since early 2012)
- Sites maintain their own clusters; site administrators are encouraged to collaborate and communicate:
  - Weekly meetings
  - Providing grid software support for users
  - Becoming part of the FGI community
- A small team from CSC manages the general administration

What FGI can offer you
- Hardware resources
  - More resources than a single university can offer
  - The distributed nature means better availability even when the local cluster is full
  - A local account is not required!
- Software
  - A number of software packages are already available for use via the grid
  - The list of runtime environments (currently 15 and growing) is available at https://confluence.csc.fi/display/fgi/grid+runtime+environments
- Support
  - CSC provides grid administrative support, software support AND user support: send an email to helpdesk@csc.fi

Normal clusters
[Diagram: User X sends a job (sbatch, qsub, ...) to the frontend, where the job scheduler (e.g. Slurm, PBS) queues jobs from several users (User X: Job 1, Job 2; User Y: Job 3; User Z: Job 4) and runs them on compute nodes 1-n, which share storage over the cluster network.]
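As an illustration of this workflow, here is a minimal Slurm batch script of the kind used on a normal cluster; the job name, resource requests and program name are placeholders, and the actual partition names and limits on an FGI cluster may differ.

    #!/bin/bash
    #SBATCH --job-name=fgi_test      # name shown in the queue (placeholder)
    #SBATCH --ntasks=12              # a standard node has 2 x 6 cores
    #SBATCH --time=01:00:00          # wall-clock limit
    #SBATCH --mem-per-cpu=2000       # MB per core; 12 x 2000 MB fits the 24 GB nodes

    srun ./my_program                # my_program stands in for your own binary

The script is submitted with "sbatch job.sh" and the queue can be checked with "squeue -u $USER".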

Grids
[Diagram: from a work computer, User X uses the grid tools to send jobs and store data through the grid interfaces of several clusters, e.g. the Lappeenranta, CSC and Helsinki clusters, backed by shared grid storage.]

What do you need?
- A certificate
- VO membership
- The ARC client tools
  - Installable on most Linux versions and Mac OS X
  - Available on CSC servers: HIPPU, Vuori
  - Also available on your local cluster login node
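As a sketch of how these pieces fit together (assuming a valid certificate is already installed and you belong to the FGI VO; the job description file name is a placeholder), a grid job is typically handled with the ARC client tools roughly like this:

    arcproxy              # create a short-lived proxy from your certificate
    arcsub hello.xrsl     # submit the job described in hello.xrsl; prints a job ID
    arcstat <job_id>      # check the status of the job
    arcget <job_id>       # once finished, fetch the output files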

Starting with FGI
- http://www.csc.fi/grids and follow the links to the FGI and FGI user pages
- http://confluence.csc.fi/display/fgi is the central place for all documentation and information about FGI:
  - Getting started
  - Available software, and how to use it
- helpdesk@csc.fi for problems and requests

Software in FGI
- Some scientific software is pre-installed, primarily open source software
- You can also run your own programs in FGI
- If you have suggestions, contact us; we can help you install YOUR software requirements!
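To make this concrete, a minimal xRSL job description for running your own program could look roughly as follows; the executable, file names and runtime environment name are hypothetical placeholders, and the attributes needed for a given package are documented on the FGI Confluence pages.

    &(executable="my_program.sh")          (* placeholder for your own script or binary *)
     (jobname="fgi_test")
     (stdout="stdout.txt")
     (stderr="stderr.txt")
     (cputime="30 minutes")                (* requested CPU time *)
     (memory="2000")                       (* requested memory in MB *)
     (runtimeenvironment="APPS/EXAMPLE")   (* hypothetical name; see the runtime environments list *)

Such a description is submitted with arcsub, as in the earlier sketch.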

FGI and EGI
- FGI is the Finnish NGI; EGI sees us as NGI_FI
- CSC is the Operations Center for FGI:
  - Uses the monitoring and service tools provided by EGI
  - Follows EGI procedures for operations
  - Manages the Regional Operator on Duty (ROD) team; site admins are part of this team!

What EGI can offer
- An even larger computational resource than just FGI!
- Connections with international user groups in your field; some of them have already made their tools/software grid-ready
- Easy sharing of expertise with your collaborators through Virtual Organisations (VOs)