Swedish National Infrastructure for Computing

Swedish National Infrastructure for Computing: SNIC & Grid & Data
Sverker Holmgren, SNIC 2007

SNIC-Mission

"The Swedish National Infrastructure for Computing (SNIC), under the jurisdiction of the Swedish Research Council, is a national resource intended to create integrated quality access to computational resources for Swedish research purposes, where networks, data storage, computers, visualisation and various Grid techniques can be used to produce a transparent resource."

(Stated in the instruction for SNIC issued by the Swedish Research Council.)

SNIC-Strategy

- Provide long-term funding for HPC resources in Sweden
- Coordinate investments in HPC systems
- Coordinate competence at the participating centers to optimize user support and quality of operations
- HPC-related development projects in: computer systems, storage, networks, computational science, visualization, Grid technology
- Disseminate information and knowledge about SNIC resources and their use
- Host the Swedish National Graduate School in Scientific Computing (NGSSC)

Why a Metacenter?

- Limited number of HPC experts in Sweden
- Proximity to users by having regional centers: in-depth user support, collaborations, induction of new HPC usage, points of entry to the national infrastructure
- Load balancing of national leading-edge systems, based on technical assessments and resource availability
- Grid technology: a metacenter can contribute to its development, and Grid technology enables metacenter coordination
- International collaboration as a unified structure: NorduGrid/ARC, EGEE/LCG

GRID-Vision

Hardware, networks and middleware are used to put together a virtual computer resource. Users should not have to know where computation is taking place or where data is stored. Users will work together across disciplinary and geographical borders and form virtual organizations.

Flat GRID

[Diagram: resources connected directly to the Grid]

Hierarchical GRID

[Diagram: management and regional centers aggregating local resources under the Grid]

Collaborative GRID

[Diagram: resources at collaborating sites shared through the Grid]

Power plant GRID

[Diagram: HPC centers feeding the Grid, in analogy with power plants on a power grid]

Some important Grid projects

- Globus: middleware project, provides the foundation for many other projects
- GGF (Global Grid Forum): worldwide meetings and standardization efforts
- LCG (Large Hadron Collider Computing Grid): CERN's Grid project for LHC data analysis
- NorduGrid/ARC (Advanced Resource Connector): the middleware driving SweGrid
- NDGF (Nordic Data Grid Facility): Nordic organisation for national Grids, Tier-1 facility
- EGEE (Enabling Grids for E-science in Europe): EU-funded, CERN-driven project involving 74 partners
- BalticGrid: EGEE outreach project to the Baltic states, coordinated by KTH
- DEISA: EU-funded project connecting large HPC centers in Europe
- e-IRG: advisory body to the EU on e-infrastructures
- ESFRI expert panel on HPC: European advisory panel on HPC-related issues

SweGrid production testbed

The first step towards HPC-center Gridification.

- Initiative from: all HPC centers in Sweden; IT researchers wanting to do research on Grid technology; users in life science, earth sciences, space & astrophysics, and high-energy physics
- PC clusters with large storage capacity, built for Grid production
- Participation in international collaborations: LCG, EGEE, NorduGrid

SweGrid production test bed

- Total budget: 3.6 MEuro
- 6 Grid nodes, 600 CPUs
- IA-32, 1 processor per server: 2.8 GHz Intel P4, 875P chipset with 800 MHz FSB and dual memory buses, 2 GByte memory, Gigabit Ethernet
- 12 TByte temporary storage: FibreChannel for bandwidth, 14 x 146 GByte disks at 10000 rpm
- 410 TByte nearline storage: 140 TByte disk, 270 TByte tape
- 1 Gigabit direct connection to SUNET (10 Gbps)
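
A quick sanity check of the storage figures above (a minimal sketch in Python; it assumes the 14 x 146 GByte disks are per node, which is not stated explicitly on the slide):

    # Sanity-check the SweGrid storage figures quoted above.
    # Assumption: each of the 6 nodes carries 14 x 146 GByte disks.
    nodes = 6
    disks_per_node = 14
    disk_gbyte = 146

    per_node_tbyte = disks_per_node * disk_gbyte / 1000   # ~2.0 TByte per node
    total_temp_tbyte = nodes * per_node_tbyte              # ~12.3 TByte in total
    nearline_tbyte = 140 + 270                             # disk + tape

    print(f"temporary storage: {total_temp_tbyte:.1f} TByte (slide: 12 TByte)")
    print(f"nearline storage:  {nearline_tbyte} TByte (slide: 410 TByte)")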

SUNET connectivity

[Diagram: GigaSunet backbone at 10 Gbit/s; a typical university POP at 2.5 Gbit/s / 10 Gbit/s towards the university LAN; SweGrid has a 1 Gbps dedicated connection]
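
To put the dedicated 1 Gbps SweGrid link in perspective, a rough transfer-time estimate (a minimal sketch; it assumes an ideal, fully utilised link with no protocol overhead, and uses the 12 TByte temporary-storage figure from the hardware slide):

    # How long would it take to move the 12 TByte of SweGrid temporary storage
    # over the dedicated 1 Gbps link? (Ideal link, no protocol overhead.)
    link_gbps = 1.0                    # dedicated SweGrid connection to SUNET
    data_tbyte = 12.0                  # temporary storage on SweGrid

    data_bits = data_tbyte * 1e12 * 8  # TByte -> bits
    seconds = data_bits / (link_gbps * 1e9)
    print(f"{data_tbyte} TByte over {link_gbps} Gbps: ~{seconds / 3600:.0f} hours")
    # Roughly a day, which is one reason the observations below call for both
    # more bandwidth and storage close to the processor capacity.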

Persistent storage on SweGrid

[Diagram: three storage alternatives (1, 2, 3) compared in terms of size, administration, bandwidth and availability]

SweGrid status

- Nodes installed in January 2004, now becoming old
- Extensive use of the resources: local batch queues, and Grid queues through the NorduGrid middleware ARC; some nodes also available through gLite as part of the North Federation resources
- 60 national users
- 1/3 of SweGrid (200 CPUs) is dedicated to HEP; significant contribution to the LCG challenges as a partner in NorduGrid, also supporting LCG (gLite) and working on compatibility between ARC and LCG
- Forms the core of the Northern EGEE ROC
- Accounting has been introduced (SGAS)
- Development of general and application-specific Grid portals
- Development of Grid-enabled database technology, and of database technology for streaming data
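
For context, jobs reach the SweGrid Grid queues through the ARC middleware mentioned above; a minimal sketch of what a submission could look like is given below (the job script and file names are hypothetical, and the exact options of the classic ngsub client vary between ARC versions):

    # Minimal sketch: submit a job to an ARC-enabled SweGrid node.
    # "runme.sh" is a hypothetical user script.
    import subprocess

    xrsl = (
        '&(executable="runme.sh")'
        '(jobName="swegrid-test")'
        '(stdout="out.txt")'
        '(stderr="err.txt")'
        '(cpuTime="60")'
    )

    with open("job.xrsl", "w") as f:
        f.write(xrsl)

    # ngsub submits the xRSL job description to a suitable cluster;
    # ngstat and ngget are then used to poll status and fetch the output.
    subprocess.run(["ngsub", "-f", "job.xrsl"], check=True)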

SweGrid Observations

- Global user identity: each SweGrid user must receive a unique X.509 certificate, and all centers must agree on a common minimum level of security. This will affect the general security policy of the HPC centers.
- Unified support organization: all helpdesk activities and other support need to be coordinated between the centers. Users cannot (and should not) decide where their jobs will run, and they expect the same level of service at all sites.
- More bandwidth is needed: to move data between the SweGrid nodes before and after job execution, continuously increasing bandwidth will be needed.
- More storage is needed: even with increasing bandwidth, users cannot fetch all data back home. Storage for both temporary and permanent data will be needed in close proximity to the processor capacity.
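
The global-identity observation means that every center recognises a user by the subject name of a single certificate; a minimal sketch of inspecting that subject DN (it assumes a PEM-encoded certificate file, here named usercert.pem, and the Python cryptography library, neither of which is named on the slide):

    # Minimal sketch: read the subject DN that identifies a Grid user,
    # given a PEM-encoded X.509 certificate (hypothetical file name).
    from cryptography import x509

    with open("usercert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # The subject DN is the identity that every center maps to local accounts
    # and authorization rules, so it must be unique per user.
    print("subject:    ", cert.subject.rfc4514_string())
    print("valid until:", cert.not_valid_after)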

Large data at HPC centers

- Database queries / service
- Long-term file storage
- Nearline storage
- Temporary storage

A Proposal

A Swedish infrastructure ("Database SNIC") for data curation and services:

- 5 application experts (curation and user support)
- 5 technicians (services and tool development)
- Driven by the owners of the data
- Hardware infrastructure provided by SNIC
- Software infrastructure: licenses, plus tools developed by SNIC centers and users in collaboration
- SUNET access and Grid-based access

SNIC Storage Landscape

- A few centers nominated as hosts for data-intensive applications
- These centers will have up to 10 petabyte of storage capacity, with a user-driven build-up
- Storage is nationally available (on all SNIC systems)
- Swedish Infrastructure for Data Curation and Services, hosting e-science databases

Database GRID

[Diagram: database Grid architecture with resources, middleware, AAA, curators, data portals, metadata, Grid processors, users and technicians]

The Big Questions?

- Is there a need to coordinate a Swedish infrastructure for databases in the same way as SNIC coordinates the HPC infrastructure?
- Are there synergies to be found between the HPC Grid infrastructure and Swedish databases?
- If so, which forms of collaboration should be established?