DCMS Tier 2/3 prototype infrastructure

DCMS Tier 2/3 prototype infrastructure
Anja Vest, Uni Karlsruhe, DCMS Meeting in Aachen

Overview:
- LCG queues/mapping set-up
- Hardware capacities
- Supported software
- Summary

DCMS overview
DCMS: collaboration of the German institutes in Aachen (AC), Hamburg (HH) and Karlsruhe (KA) participating in the CMS experiment; LCG VO 'dcms'
Structure: Tier 1 GridKa; 'federated' Tier 2 DESY / Aachen; Tier 3 sites Aachen, Hamburg and Karlsruhe
Aims:
- more independence
- exchange of experience in analysis (on the grid)
- sharing of resources
- prioritisation of dcms users at the DCMS sites

DCMS communication
Participants: RWTH Aachen, DESY / Uni Hamburg, EKP Karlsruhe; DCMS has about 90 members
Two mailing lists:
- cms-germany-grid@cern.ch: grid-related topics (41 members)
- cms-germany-prs@cern.ch: physics-related topics (67 members)
For subscription, go to http://listboxservices.web.cern.ch/listboxservices (or mail to anja.vest@cern.ch)
VRVS video meeting on the third Wednesday of every month
Internet portals:
- DCMS Wiki: https://www-flc.desy.de/dcms (B. Hegner, C. Rosemann); topics: ORCA, data, grid, MC production, meetings...
- VisaHEP: http://visahep.home.cern.ch/visahep/ (S. Kappler); seminars, organisation, documentation, meetings...

DCMS VOs / LCG admins
DCMS sites support at least 4 VOs: cms, dcms (CMS collaboration), dteam (LCG deployment team), dech (EGEE federation DE/CH)
Participation in GridKa School '05 with a testbed (V. Büge, C. Jung, A. Vest)
VO dcms is hosted at DESY (no software manager 'dcmssgm'); DESY runs an RB/BDII/PXY for dcms and provides the VO server, the RLS and VOMS (see the proxy sketch below)
LCG site administration:
- Aachen: A. Nowack, (M. Kirsch)
- Hamburg: A. Gellrich, M. de Riese
- Karlsruhe: V. Büge, C. Jung, A. Vest
Up to now: one VRVS LCG-admin meeting
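For illustration, a dcms user on an LCG UI would typically create a Grid proxy before using the VO's resources. This is a minimal sketch, assuming a standard LCG 2.6.0 UI; the available client commands depend on whether the VOMS tools are installed:

  # create a proxy carrying the dcms VO membership (if voms-proxy-init is available)
  voms-proxy-init -voms dcms

  # fallback on a plain UI without VOMS attributes
  grid-proxy-init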

DCMS LCG clusters
All sites are running LCG 2.6.0 (since ~August) and SLC 3.0.5 on the LCG components (UI, CE, WN, SE etc.)
The worker nodes are managed by the Torque (PBS) batch system with the Maui scheduler
At Karlsruhe: local Linux cluster successfully integrated into LCG, installed with yaim (EKP note in preparation; a site-info.def sketch follows below); homepage under development: http://www-ekp.physik.uni-karlsruhe.de/~lcgadmin
At Aachen: LCG cluster installed with Quattor; cluster monitoring with Lemon (A. Nowack)
At Hamburg: separate LCG cluster installed with Quattor/yaim
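In a yaim-based installation, the supported VOs and batch queues are declared in the site configuration file. The excerpt below is only a sketch of how the DCMS setup could look: the variable names follow the LCG 2.6.0 yaim conventions as far as they are documented, the site name is taken from the gstat listing on the next slide, the CE host name from the job-matching output further below, and everything else is an illustrative assumption rather than the actual EKP configuration:

  # site-info.def (excerpt, illustrative)
  SITE_NAME=ekplcg2                            # site name as published to gstat
  CE_HOST=ekp-lcg-ce.physik.uni-karlsruhe.de   # CE host, cf. the job matching below
  VOS="cms dcms dteam dech"                    # VOs supported by the site
  QUEUES="dcms cms"                            # batch queues published by the CE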

DCMS LCG monitoring
Grid status of the German LCG sites (DESY-HH, RWTH-Aachen, ekplcg2): http://goc.grid.sinica.edu.tw/gstat/germanyswitzerland.html
LCG monitoring map: http://goc03.grid-support.ac.uk/googlemaps/lcg2.html

DCMS LCG Monitoring (four further slides showing only monitoring views; no transcribed text)

DCMS LCG queues

ekp-lcg-ui:> edg-job-list-match -vo dcms testjob.jdl

Selected Virtual Organisation name (from --vo option): dcms
Connecting to host grid-rb.desy.de, port 7772

****************************************************************
                    COMPUTING ELEMENT IDs LIST
The following CE(s) matching your job requirements have been found:

                             *CEId*
ekp-lcg-ce.physik.uni-karlsruhe.de:2119/jobmanager-lcgpbs-dcms
ekp-lcg-ce.physik.uni-karlsruhe.de:2119/jobmanager-lcgpbs-cms
grid-ce.desy.de:2119/jobmanager-lcgpbs-dcms
grid-ce.physik.rwth-aachen.de:2119/jobmanager-lcgpbs-cms
grid-ce.physik.rwth-aachen.de:2119/jobmanager-lcgpbs-short
grid-ce.physik.rwth-aachen.de:2119/jobmanager-lcgpbs-dcms
grid-ce.desy.de:2119/jobmanager-lcgpbs-cms
****************************************************************

ekp-lcg-ui:>
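The contents of testjob.jdl are not shown in the slides. A minimal JDL file for such a match test could look like the sketch below; the attribute values are illustrative assumptions, not the original test job:

  # testjob.jdl (minimal sketch, not the original file)
  Executable    = "/bin/hostname";
  StdOutput     = "std.out";
  StdError      = "std.err";
  OutputSandbox = {"std.out", "std.err"};

A job matching one of the listed CEs would then be submitted with edg-job-submit -vo dcms testjob.jdl and followed with edg-job-status.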

DCMS queues / mapping: set-up

Queue properties          Aachen             Karlsruhe    Hamburg
Published # CPUs          18/20              20/36        70
Queues for DCMS           dcms, cms, short   dcms, cms    dcms, cms
Max. # of jobs running    500                15           10, 5
Max. # of jobs per user   -                  9            -
Max. CPU time             24 h               24 h         48 h
Max. wallclock time       48 h               36 h         72 h
Nice value                15                 10           -

The queue lengths are normalised to the CPU capacities.
Each CMS member coming to a DCMS site (VO cms or dcms) is mapped to a dedicated account depending on their affiliation (e.g. dcms001 or cms001).
dcms members can be prioritised, e.g. via different fairshare targets of their user groups (see the sketch below).

Setup @ EKP:
                   CMS members        DCMS members         EKP CMS members
Mapping account    cms001 - cms050    dcms001 - dcms050    local account
Groups             cmsgrid            dcms + cmsgrid       cms + dcms + cmsgrid
Fairshare          low priority       medium priority      high priority

Common queue/mapping policies in DCMS?
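On a Torque/Maui site such limits and priorities could be set roughly as sketched below. The queue limits use the numbers from the Karlsruhe column; the fairshare targets and weight are purely illustrative, since the actual values are not given in the slides:

  # Torque: limits on the dcms queue (values from the Karlsruhe column above)
  qmgr -c "set queue dcms max_running = 15"
  qmgr -c "set queue dcms max_user_run = 9"
  qmgr -c "set queue dcms resources_max.cput = 24:00:00"
  qmgr -c "set queue dcms resources_max.walltime = 36:00:00"

  # Maui (maui.cfg): per-group fairshare targets, giving dcms users a higher share
  GROUPCFG[cmsgrid] FSTARGET=20
  GROUPCFG[dcms]    FSTARGET=40
  FSWEIGHT          100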

Hardware capacities

SE classic (sum: 5.7 TB, will be: 3.8 TB):
- Aachen: 1.6 TB + 1.9 TB (will be: 1.6 TB)
- Karlsruhe: 220 GB (+ 2 TB)
SE dCache (sum: 13.1 TB, will be: 45 TB):
- Aachen: 1.9 TB + 1.2 TB (will be: (1.9 + 1.9 + 1.2 + ...) TB)
- Hamburg: 10 TB (+ 20 (30) TB in Nov.)
CPUs (110 published, 126 usable):
- Aachen: 20 x AMD Opteron 246
- Karlsruhe: 5 x AMD Athlon XP 1600+, 13 x AMD Athlon T-Bird, 1 x AMD Opteron 242, 8 x AMD Opteron 246
- Hamburg: 70 x dual Xeon 2.8/3.07 GHz

Storage elements: ~19 TB (39-49 TB in November)
Worker nodes: 110 CPUs published (126 can be used)

Supported software

Grid monitoring for dcms (V. Büge): http://www-ekp.physik.uni-karlsruhe.de/~vbuege/web/cmsmon.html
Software installed with XCMSI (K. Rabbertz): homogeneous software environment
Status 30/09/2005, sites Aachen, Hamburg, Karlsruhe, FZK:
- installed at all four sites: CMKIN_4_3_1, CMKIN_5_0_0, CMKIN_5_1_0, CMKIN_5_1_1, OSCAR_3_7_0, ORCA_8_7_3, ORCA_8_7_4, ORCA_8_7_5, FAMOS_1_2_0, FAMOS_1_3_1, slc3_ia32_gcc323, LCG 2.6.0, R-GMA, geant42ndprod
- installed at three of the sites: ORCA_8_10_1
- installed at two of the sites: OSCAR_3_6_5, ORCA_8_7_1

Datasets / file transfers

File transfers with PhEDEx (J. Rehn) between the Tier 1 sites (GridKa, CNAF, ...) and DESY, Aachen, Hamburg, Karlsruhe
Hits, Digis, DSTs; publishing/zipping ongoing; DCMS data manager?
Datasets (not exhaustive), sum ~15-20 TB:
bt03_ttbb_tth, bt03_ttjj_tth, bt03_tth120_6j1l, eg03_tt_2l_topr, eg03_wt_2l_toprex, hg03b_wt_tauqq_toprex, mu03_dy2mu, mu03_w1mu, jm03b_ttbar_incl, jm03b_qcd, jm03b_wjets, jm03b_zjets, jm03b_ww_inclusive, jm03b_zw_inclusive, jm03b_zz_inclusive

Summary: Analysis within DCMS
- Communication: internet portal(s), mailing lists, meetings, phone...
- Hardware capacities: ok; sharing of resources via the LCG VO dcms
- CMS-related software: ok
- Datasets: file transfers are running (ttH, QCD, pileups...)
- Analysis tools:
  - PAX (C++ toolkit for advanced physics analyses, S. Kappler)
  - ExRootAnalysis (writes ROOT trees)
  - CRAB (submitting a job on the Grid using official datasets); tutorial: C. Rosemann (typical commands sketched below)
- Do analyses, write theses, papers, (P)TDR and conference contributions...
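For orientation, a typical CRAB session on the UI looks roughly as follows; the option syntax may vary between CRAB versions, and crab.cfg stands for a job configuration that is not part of these slides:

  # sketch of a CRAB workflow
  crab -create -cfg crab.cfg   # prepare the jobs from the configuration file
  crab -submit                 # submit them to the Grid
  crab -status                 # follow the job states
  crab -getoutput              # retrieve the output once the jobs have finished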