HIP Computing Resources for LHC-startup




HIP Computing Resources for LHC-startup
Tomas Lindén
Finnish CMS meeting in Kumpula, Helsinki, 03.10.2007

Contents
1. Finnish Tier-1/2 computing in 2007 and 2008
2. Hardware resources in 2007 and 2008
3. Manpower
4. Operations
5. CMS software
6. CSA07

1. Finnish Tier-1/2 computing in 2007 and 2008
- 800 kEUR annually for Finnish LHC computing in 2008, 2009 and 2010
- A HIP-CSC MoU is in preparation to cover collaboration in the LHC project
- The HIP-CSC responsibility split is defined in the Appendix Project Plan
- CSC supports the hardware resources and is working on dcache
- CSC will help with the acquisitions
- The HIP Technology Programme supports the LHC computing project

2. Hardware resources in 2007 and 2008
- Ametisti upgrade with 128 CPUs from sepeli
- The compute-2-x 2.2 GHz sepeli nodes run stably in the Kumpula machine room
- Only a broken switch and one node needed to be replaced
- Cabling the networks between the new racks took some effort
- Remote management firmware was updated for temperature readout and automated shutdown (see the sketch after this list)
- Temperature monitoring (J. Koivumäki, E. Kesälä, T. Lindén) shows no thermal issues after the upgrade
- The new queues on the new nodes (small serial, medium serial and big serial) are recommended, since they are faster
- NFS problems on ametisti are being looked into
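To illustrate the kind of temperature readout and automated shutdown mentioned above, here is a minimal watchdog sketch built on ipmitool. The 70 °C threshold, the 60 s polling interval and the direct call to shutdown are illustrative assumptions, not the actual M-grid configuration.

    #!/usr/bin/env python
    # Minimal over-temperature watchdog sketch. Threshold, interval and
    # shutdown action are assumptions for illustration only.
    import re
    import subprocess
    import time

    THRESHOLD_C = 70.0  # assumed shutdown threshold
    INTERVAL_S = 60     # assumed polling interval

    def read_temperatures():
        """Parse 'ipmitool sdr type Temperature' output into float readings."""
        out = subprocess.check_output(
            ["ipmitool", "sdr", "type", "Temperature"], text=True)
        return [float(m.group(1))
                for m in re.finditer(r"(\d+(?:\.\d+)?) degrees C", out)]

    while True:
        temps = read_temperatures()
        if temps and max(temps) > THRESHOLD_C:
            # Over-temperature: halt the node cleanly.
            subprocess.call(["shutdown", "-h", "now"])
            break
        time.sleep(INTERVAL_S)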

The CPU resources for 2007-2008 will be existing shared (M-grid) resources:
- CPU: ametisti (260 CPUs), mill (64 CPUs) and opaali (88 CPUs)
and existing dedicated (M-grid) resources:
- CPU: sepeli (256 cores 10/2007, 384 cores 12/2007, 448 cores 04/2008, 512 cores 07/2008)
Disk for 2008 will be purchased during autumn 2007:
- CMS disk 2007: 7 TB silo2 + 12 TB CSC
- ALICE disk 2007: 4 TB opaali + 12 TB CSC
- ALICE, CMS and TOTEM disk 2008: about 150 TB

3. Manpower
- Antti Pirinen, HIP, project leader
- Dan Still, CSC, project leader
- Tomas Lindén, HIP, grid coordinator
- Jukka Klem, HIP, PhEDEx, Frontier
- Jonas Dahlbom, HIP, dcache support
- Chris Hanke, CSC, dcache support
- Vera Hansper, NDGF/CSC, Finnish node coordinator, ROOT runtime
- Erik Edelmann, NDGF/CSC, CMS ProdAgent plugin, CMSSW runtime
- Jesper Koivumäki, HIP, CMS support
- Mikko Närjänen, HIP, ALICE support
- DongJo Kim, JYFL, ALICE computing
- Kalle Happonen, HIP, middleware support
- Eero Kesälä, Physics Department, cluster administration in Kumpula
- Pekko Metsä, HIP, cluster administration in Kumpula
- Francisco Garcia, HIP, ROOT and GEANT4 on Kumpula clusters

4. Operations
- The major part of the hardware will be in the CSC machine room
- Hardware, the operating system and grid middleware are planned to be supported by CSC
- HIP will support application-level software
- The resources are part of the Nordic Data Grid Facility
- CMS Tier-1 support, in terms of data sets and CMS software, from RAL or CERN is still open for discussion
- NORDUNET Point Of Presence (POP) at CSC
- Do we need a dedicated network link to Kumpula?

5. CMS software
Works now:
- CMSSW jobs run locally on ametisti, kivi, mill and sepeli
- Python thread memory management together with the SGE virtual memory queue limits requires limiting the stack size with "limit stacksize 512M" (see the sketch after this list)
- CMSSW grid jobs using ARC between sepeli and kivi (soon mill and ametisti)
To be investigated:
- CMSSW jobs submitted with Ganga from the ARDA project
Needs further development:
- CMS organized Monte Carlo production (a ProdAgent plugin has been developed)
- ASAP/CRAB jobs from ARC/gLite/OSG sites
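The csh command "limit stacksize 512M" above caps the per-thread stack so that Python's threads stay within the SGE virtual memory limit. For completeness, here is a minimal sketch of the same cap applied programmatically from inside a Python job wrapper; the wrapper itself is an assumption for illustration, only the 512 MB value comes from the slide.

    # Sketch: apply the 512 MB stack cap from inside a Python job wrapper,
    # mirroring the csh "limit stacksize 512M" workaround on the slide.
    import resource

    STACK_LIMIT = 512 * 1024 * 1024  # 512 MB, value taken from the slide

    soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
    # Never raise the soft limit above the existing hard limit.
    if hard != resource.RLIM_INFINITY:
        STACK_LIMIT = min(STACK_LIMIT, hard)
    resource.setrlimit(resource.RLIMIT_STACK, (STACK_LIMIT, hard))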

Cluster status:

                  sepeli   ametisti  mill    kivi    opaali
    CPU cores     256*     260       64      8       88
    Rocks version 4.1      4.1       4.3     4.1     4.1
    CMSSW v.      1.6.0    1.6.4     1.5.4   1.4.3   -
    CMSSW RE      ok                         ok
    ROOT RE       ok       ok        ok      ok      ok

    (* 256 cores as of 10/2007, growing to 512 cores by 07/2008, see slide 2)

- CMSSW 1.3.0 onwards supports building 32-bit binaries on 64-bit hosts
- Starting with CMSSW 1.6.0 there is no dependency on root-installed packages, which simplifies grid installation jobs and opportunistic use cases
- The Ganglia problem with the silo2 7.5 TB dcache pool has been fixed; waiting for NDGF to reconnect our dcache pool

6. CSA07
- CSA07 is running now
- Once we get PhEDEx running we can participate