DCMS Tier 2/3 prototype infrastructure


1 DCMS Tier 2/3 prototype infrastructure
Anja Vest, Uni Karlsruhe
DCMS Meeting in Aachen

Overview:
- LCG
- Queues/mapping set-up
- Hardware capacities
- Supported software
- Summary

2 DCMS overview
DCMS: collaboration of the German institutes in Aachen, Hamburg and Karlsruhe participating in the CMS experiment, organised as the LCG VO 'dcms'.
Structure: Tier 1 GridKa; 'federated' Tier 2 DESY / Aachen; Tier 3 Aachen, Tier 3 Hamburg, Tier 3 Karlsruhe.
Aims:
- more independence
- exchange of experience in analysis (on the grid)
- sharing of resources
- prioritisation of dcms users at the DCMS sites

3 DCMS communication
RWTH Aachen, DESY / Uni Hamburg, EKP Karlsruhe; DCMS has about 90 members.
Two mailing lists:
- [email protected]: Grid related topics (41 members)
- [email protected]: Physics related topics (67 members)
For subscription, go to (or mail to [email protected]).
VRVS video meeting every third Wednesday of the month.
Internet portals:
- DCMS Wiki (B. Hegner, C. Rosemann): topics: ORCA, data, grid, MC production, meetings...
- VisaHEP (S. Kappler): seminars, organisation, documentation, meetings...

4 DCMS VOs / LCG admins
DCMS sites support at least 4 VOs: cms, dcms (CMS collaboration), dteam (LCG deployment team), dech (EGEE federation DE/CH); a site-info.def sketch follows after this slide.
- participation in GridKa School '05 with a testbed (V. Büge, C. Jung, A. Vest)
The VO dcms is hosted at DESY (no software manager 'dcmssgm'):
- DESY runs a RB/BDII/PXY for dcms
- DESY provides the VO server and the RLS; VOMS
LCG site administration:
- Aachen: A. Nowack, (M. Kirsch)
- Hamburg: A. Gellrich, M. de Riese
- Karlsruhe: V. Büge, C. Jung, A. Vest
Up to now: one VRVS LCG admin meeting.
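How such a VO list appears in a site configuration can be sketched with yaim's site-info.def; a minimal, illustrative excerpt (the exact variable set depends on the LCG release, and all values here are assumptions, not the actual DCMS settings):

    # site-info.def (excerpt, illustrative): VOs supported at a DCMS site
    VOS="cms dcms dteam dech"
    QUEUES="cms dcms short"
    # per-VO settings, following the yaim naming conventions of that era
    VO_DCMS_SW_DIR=$VO_SW_DIR/dcms
    VO_DCMS_DEFAULT_SE=$SE_HOST
    VO_DCMS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/dcms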

5 DCMS LCG clusters
All sites are running LCG (since ~August) and SLC on the LCG components (UI, CE, WN, SE etc.).
The worker nodes are controlled by the PBS/Torque batch system with the Maui scheduler (a queue set-up sketch follows below).
At Karlsruhe: local Linux cluster successfully integrated into LCG, installed with yaim (EKP note in preparation); a homepage is being developed.
At Aachen: LCG cluster installed with Quattor; cluster monitoring with Lemon (A. Nowack).
At Hamburg: separate LCG cluster installed with Quattor/yaim.
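On a Torque-based site, a dedicated dcms queue could be created with qmgr; a sketch, using the Karlsruhe limits from slide 12 as illustrative values:

    # sketch: dcms queue on a Torque server (values modelled on slide 12, not verified)
    qmgr -c "create queue dcms queue_type=execution"
    qmgr -c "set queue dcms resources_max.cput=24:00:00"      # max. CPU time
    qmgr -c "set queue dcms resources_max.walltime=36:00:00"  # max. wallclock time
    qmgr -c "set queue dcms acl_group_enable=true"
    qmgr -c "set queue dcms acl_groups=dcms"                  # restrict to dcms accounts
    qmgr -c "set queue dcms enabled=true"
    qmgr -c "set queue dcms started=true"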

6 DCMS LCG monitoring
Grid status of the German LCG sites (monitoring screenshots): DESY-HH, RWTH-Aachen, ekplcg2.
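The status shown on these monitoring pages can also be queried directly from the information system; a sketch, assuming an LCG-2-style BDII (the hostname is illustrative, not a confirmed DCMS endpoint):

    # sketch: query CE status from a BDII (host is an assumption)
    ldapsearch -x -H ldap://grid-bdii.desy.de:2170 -b "mds-vo-name=local,o=grid" \
        "(GlueCEUniqueID=*)" \
        GlueCEUniqueID GlueCEStateStatus GlueCEStateFreeCPUs GlueCEStateTotalJobs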

7-10 DCMS LCG Monitoring (screenshots of the LCG monitoring pages)

11 DCMS LCG queues

    ekp-lcg-ui:> edg-job-list-match -vo dcms testjob.jdl
    Selected Virtual Organisation name (from --vo option): dcms
    Connecting to host grid-rb.desy.de, port 7772
    ****************************************************************
                     COMPUTING ELEMENT IDs LIST
    The following CE(s) matching your job requirements have been found:

       *CEId*
       ekp-lcg-ce.physik.uni-karlsruhe.de:2119/jobmanager-lcgpbs-dcms
       ekp-lcg-ce.physik.uni-karlsruhe.de:2119/jobmanager-lcgpbs-cms
       grid-ce.desy.de:2119/jobmanager-lcgpbs-dcms
       grid-ce.physik.rwth-aachen.de:2119/jobmanager-lcgpbs-cms
       grid-ce.physik.rwth-aachen.de:2119/jobmanager-lcgpbs-short
       grid-ce.physik.rwth-aachen.de:2119/jobmanager-lcgpbs-dcms
       grid-ce.desy.de:2119/jobmanager-lcgpbs-cms
    ****************************************************************
    ekp-lcg-ui:>
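The testjob.jdl itself is not shown on the slide; a minimal JDL file of the kind edg-job-list-match accepts might look like this (all values illustrative):

    // testjob.jdl (illustrative sketch; the actual file is not part of the slides)
    Executable          = "/bin/hostname";
    StdOutput           = "std.out";
    StdError            = "std.err";
    OutputSandbox       = {"std.out", "std.err"};
    VirtualOrganisation = "dcms";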

12 DCMS queues / mapping: set-up

    Queue properties         Aachen            Karlsruhe   Hamburg
    Published # CPUs         18/20             20/36       70
    Queues for DCMS          dcms, cms, short  dcms, cms   dcms, cms
    Max. # of jobs running   ...               ...         5
    Max. # of jobs per user  ...               ...         ...
    Max. CPU time            24 h              24 h        48 h
    Max. wallclock time      48 h              36 h        72 h
    Nice value               ...               ...         ...

The queue lengths are normalised to the CPU capacities.
Each CMS member coming to a DCMS site (VO cms or dcms) is mapped to a dedicated account depending on their affiliation (e.g. dcms001 or cms001).
dcms members can be prioritised, e.g. by different fairshare targets of their user groups (a maui.cfg sketch follows below):

    (example: EKP Karlsruhe)
                     CMS members        DCMS members        EKP CMS members
    Mapping account  cms001...cms050    dcms001...dcms050   local account
    Groups           cmsgrid            dcms + cmsgrid      cms + dcms + cmsgrid
    Fairshare        low priority       medium priority     high priority

Common queue/mapping policies in DCMS?
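The group-based fairshare could be expressed in Maui's maui.cfg roughly as follows; a sketch with illustrative targets and a hypothetical local group name, not the actual EKP settings:

    # maui.cfg (excerpt, illustrative): weight fairshare into job priority and
    # give the dcms and local groups higher fairshare targets than plain cms
    FSPOLICY           DEDICATEDPS
    FSWEIGHT           100
    FSDEPTH            7
    FSINTERVAL         24:00:00
    GROUPCFG[cmsgrid]  FSTARGET=10
    GROUPCFG[dcms]     FSTARGET=30
    GROUPCFG[ekpcms]   FSTARGET=60   # 'ekpcms' is a hypothetical group name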

13 Hardware capacities

    SE classic:  Aachen 1.6 TB; Karlsruhe ... TB; Hamburg 220 GB (+ 2 TB); sum 5.7 TB, will be 1.6 TB (3.8 TB)
    SE dcache:   Aachen 1.9 TB; Karlsruhe ... TB; Hamburg 10 TB; sum 13.1 TB, will be (...) TB + 20 (30) TB in Nov. (45 TB)
    CPUs:        20 x AMD Opteron, ... x AMD Athlon XP, ... x Dual Xeon 2.8/3.07 GHz, ... x AMD Athlon T-Bird, 1 x AMD Opteron, ... x AMD Opteron 246

Storage Elements: ~19 TB in total (39-49 TB in November)
Worker Nodes: 110 CPUs published (126 can be used)

14 Supported software
Grid monitoring dcms (V. Büge): http://www-ekp.physik.uni-karlsruhe.de/~vbuege/web/cmsmon.html
Software installed with XCMSI (K. Rabbertz): homogeneous software environment.
Status 30/09/2005, across the sites Aachen, Hamburg, Karlsruhe and FZK:
- installed at all four sites: CMKIN_4_3_1, CMKIN_5_0_0, CMKIN_5_1_0, CMKIN_5_1_1, OSCAR_3_7_0, ORCA_8_7_3, ORCA_8_7_4, ORCA_8_7_5, FAMOS_1_2_0, FAMOS_1_3_1, slc3_ia32_gcc323, LCG, R-GMA, geant42ndprod
- installed at three of the sites: ORCA_8_10_1
- installed at two of the sites: OSCAR_3_6_5, ORCA_8_7_1
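Which software tags a site actually publishes can be checked in the information system, along the lines of the earlier BDII query (the hostname is again an assumption):

    # sketch: list the software tags published by the CEs known to a BDII
    ldapsearch -x -H ldap://grid-bdii.desy.de:2170 -b "mds-vo-name=local,o=grid" \
        "(GlueSubClusterUniqueID=*)" \
        GlueHostApplicationSoftwareRunTimeEnvironment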

15 Datasets / file transfers
File transfers with PhEDEx (J. Rehn), between the Tier 1 sites GridKa, CNAF, ... and DESY, Aachen, Hamburg, Karlsruhe.
Hits, Digis, DSTs; publishing/zipping ongoing.
DCMS data manager?
Datasets (not exhaustive): bt03_ttbb_tth, bt03_ttjj_tth, bt03_tth120_6j1l, eg03_tt_2l_topr, eg03_wt_2l_toprex, hg03b_wt_tauqq_toprex, mu03_dy2mu, mu03_w1mu, jm03b_ttbar_incl, jm03b_qcd, jm03b_wjets, jm03b_zjets, jm03b_ww_inclusive, jm03b_zw_inclusive, jm03b_zz_inclusive
Sum: ~ TB

16 Summary: Analysis within DCMS
Communication: internet portal(s), mailing lists, meetings, phone...
Hardware capacities: ok; sharing of resources via the LCG VO dcms.
CMS related software: ok.
Datasets: file transfers are running (tth, QCD, pileups...).
Analysis tools:
- PAX, a C++ toolkit for advanced physics analyses (S. Kappler)
- ExRootAnalysis (writes ROOT trees)
- CRAB (submitting a job on the Grid using official datasets); tutorial: C. Rosemann
Do analyses, write theses, papers, (P)TDR and conference contributions...
