LHC schedule: what does it imply for SRM deployment? Jamie.Shiers@cern.ch, CERN, July 2007


1 WLCG Service Schedule LHC schedule: what does it imply for SRM deployment? WLCG Storage Workshop CERN, July 2007

2 Agenda: the machine, the experiments, the service

3 LHC Schedule [Gantt chart, March 2007 to December 2008: inner triplet repairs & interconnections; interconnection of the continuous cryostat; global pressure test & consolidation; warm-up; leak tests of the last sub-sectors; flushing; cool-down; powering tests; operational testing of available sectors; machine checkout; beam commissioning to 7 TeV.] 20/6/2007 LHC commissioning - CMS June 07 3

4 2008 LHC Accelerator schedule 20/6/2007 LHC commissioning - CMS June 07 4

5 2008 LHC Accelerator schedule 20/6/2007 LHC commissioning - CMS June 07 5

6 Machine Summary: No engineering run in 2007. Startup in May 2008; we aim to be seeing high-energy collisions by the summer. No long shutdown at the end of 2008. See also the DG's talk.

7 Experiments: Continue preparations for the Full Dress Rehearsals. The schedule from CMS is very clear: CSA07 runs from September 10 for 30 days; ready for a cosmics run in November; another such run in March. ALICE have stated an FDR from November. Expecting concurrent exports from ATLAS & CMS from end July: 1 GB/s from ATLAS, 300 MB/s from CMS. Bottom line: continuous activity; post-CHEP is likely to be (very) busy.

8 ATLAS Event sizes. We already needed more hardware in the T0 because the TDR did not include a full ESD copy to BNL, transfers require more disk servers than expected, and there is 10% less disk space in the CAF. From the TDR: RAW = 1.6 MB, ESD = 0.5 MB, AOD = 0.1 MB; a 5-day buffer at CERN is 127 TB; currently 50 disk servers, 300 TB. For Release 13: RAW = 1.6 MB, ESD = 1.5 MB, AOD = 0.23 MB (incl. trigger & truth), i.e. 3.3 MB = 50% more at the T0; with 3 ESD and 10 AOD copies, a factor 2 more for exports. Consequences: more disk servers needed for T0 internal traffic and exports; 40% less disk in the CAF; extra tapes and drives mean a 25% cost increase, which has to be taken away from the CAF again; also implications for T1/T2 sites, which can store 50% less data. Goal: run this summer for 2 weeks uninterrupted at nominal rates with all T1 sites. Event sizes from the cosmic run are ~8 MB (no zero suppression). CERN, June 26, 2007, Software & Computing Workshop 8
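
The factors quoted above follow from simple arithmetic on the per-event sizes. The minimal sketch below reproduces them, assuming the copy counts stated on the slide (one copy of each format at the Tier-0, plus 3 ESD and 10 AOD copies for exports); the function names are illustrative only.

```python
# Back-of-envelope check of the event-size factors quoted on the slide.
# A sketch: the copy counts (3 ESD, 10 AOD) are taken from the slide text.

TDR = {"RAW": 1.6, "ESD": 0.5, "AOD": 0.1}      # MB/event, from the Computing TDR
REL13 = {"RAW": 1.6, "ESD": 1.5, "AOD": 0.23}   # MB/event, Release 13 (incl. trigger & truth)

def t0_size(sizes):
    """Per-event volume written at the Tier-0 (one copy of each format)."""
    return sum(sizes.values())

def export_size(sizes, esd_copies=3, aod_copies=10):
    """Per-event volume exported to the Tier-1s (RAW once, plus replicated ESD/AOD)."""
    return sizes["RAW"] + esd_copies * sizes["ESD"] + aod_copies * sizes["AOD"]

print(f"T0:      {t0_size(REL13):.2f} MB vs {t0_size(TDR):.2f} MB "
      f"-> x{t0_size(REL13) / t0_size(TDR):.2f}")        # ~3.3 MB, about 50% more
print(f"Exports: {export_size(REL13):.2f} MB vs {export_size(TDR):.2f} MB "
      f"-> x{export_size(REL13) / export_size(TDR):.2f}")  # roughly a factor 2
```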

9 ATLAS T0-T1 Exports, situation at May 28. [Table, one row per Tier-1 site (ASGC, BNL, CNAF, FZK, Lyon, NDGF, PIC, RAL, SARA, Triumf), with columns: efficiency (%), average throughput (MB/s), nominal rate (MB/s), and whether 50%, 100%, 150% and 200% of nominal was achieved; the numerical values did not survive transcription.] CERN, June 26, 2007, Software & Computing Workshop 9

10

11 Services Schedule. Q: What do you (CMS) need for CSA07? A: Nothing; we would like FTS 2.0 at the Tier1s (and not too late), but it is not required for CSA07 to succeed. Trying to ensure that this is done at the CMS T1s. The other major residual service: SRM v2.2. Two windows of opportunity: post-CSA07 and early 2008. Q: How long will SRM 1.1 services be needed? 1 week? 1 month? 1 year? The LHC annual schedule has a significant impact on larger service upgrades / migrations, cf. the COMPASS triple migration.

12 S.W.O.T. Analysis of WLCG Services. Strengths: we do have a service that is used, albeit with a small number of well-known and documented deficiencies (with work-arounds). Weaknesses: continued service instabilities; holes in operational tools & procedures; ramp-up will take at least several (many?) months more. Threats: hints of possible delays could re-ignite discussions on new features. Opportunities: maximise the time remaining until high-energy running to 1) ensure all remaining residual services are deployed as rapidly as possible, but only when sufficiently tested & robust; 2) focus on smooth service delivery, with emphasis on improving all operation, service and support activities. All services (including residual ones) should be in place no later than Q1 2008, by which time a marked improvement in the measurable service level should also be achievable.

13 LCG: Steep ramp-up still needed before first physics run. [Charts: CERN + Tier-1s installed and required CPU capacity (MSI2K) and installed and required disk capacity (PetaBytes), month by month from April 2006 to April 2008, showing the evolution of installed capacity from April 2006 to June 2007 against the target capacity from MoU pledges for 2007 (due July 2007) and 2008 (due April 2008).]

14 WLCG Service: S / M / L vision. Short-term: ready for the Full Dress Rehearsals, now expected to fully ramp up ~mid-September (after CHEP). The only thing I see as realistic on this time-frame is FTS 2.0 services at the WLCG Tier0 & Tier1s; schedule: June 18th at CERN, available mid-July for the Tier1s. Medium-term: what is needed & possible for 2008 LHC data taking & processing. The remaining residual services must be in full production mode early in Q1 2008 at all WLCG sites! Significant improvements in monitoring, reporting and logging; more timely error response; service improvements. Long-term: anything else. The famous sustainable e-infrastructure? WLCG Service Deployment Lessons Learnt 14

15 WLCG Service Deployment Lessons Learnt 15

16 Types of Intervention 0. (Transparent) load-balanced servers / services 1. Infrastructure: power, cooling, network 2. Storage services: CASTOR, dCache 3. Interaction with backend DB: LFC, FTS, VOMS, SAM etc.

17 Transparent Interventions - Definition. We have reached agreement with the LCG VOs that the combination of hardware / middleware / experiment-ware should be resilient to service glitches. A glitch is defined as a short interruption of (one component of) the service that can be hidden, at least to batch, behind some retry mechanism(s). How long is a glitch? All central CERN services are covered for power glitches of up to 10 minutes; some are also covered for longer by diesel UPS, but any non-trivial service seen by the users is only covered for 10 minutes. Can we implement the services so that ~all interventions are transparent? YES, with some provisos. EGI Preparation Meeting, Munich, March Jamie.Shiers@cern.ch 17
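
As an illustration of the retry idea, here is a minimal sketch of how a batch-side client could absorb a glitch of up to the stated 10 minutes before declaring a real failure. It is not part of any WLCG middleware; ServiceGlitch and submit_transfer are invented names.

```python
# Minimal sketch of hiding a service glitch behind a retry loop.
# ServiceGlitch and submit_transfer are illustrative names, not a real API.

import time

class ServiceGlitch(Exception):
    """Transient failure of (one component of) a service."""

def with_retries(operation, glitch_budget_s=600, delay_s=30):
    """Keep retrying 'operation'; give up only once the 10-minute glitch budget is spent."""
    waited = 0
    while True:
        try:
            return operation()
        except ServiceGlitch:
            if waited >= glitch_budget_s:
                raise            # longer than a glitch: surface it as a real outage
            time.sleep(delay_s)
            waited += delay_s

# Usage (illustrative): with_retries(lambda: submit_transfer(source, destination))
```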

18 Targeted Interventions. Common interventions include: adding additional resources to an existing service; replacing hardware used by an existing service; operating system / middleware upgrade / patch; similar operations on the DB backend (where applicable). Pathological cases include: massive machine room reconfigurations, as was performed at CERN (and elsewhere) to prepare for LHC; wide-spread power or cooling problems; major network problems, such as DNS / router / switch problems. Pathological cases clearly need to be addressed too! Lessons Learnt from WLCG Service Deployment - Jamie.Shiers@cern.ch 18

19 More Transparent Interventions. "I am preparing to restart our SRM server here at IN2P3-CC, so I have closed the IN2P3 channel on prod-fts-ws in order to drain current transfer queues. I will open them in 1 hour or 2." Is this a transparent intervention or an unscheduled one? A: technically unscheduled, since it's SRM downtime. An EGEE broadcast was made, but this is just an example. However, if the channel was first paused, which would mean that no files will fail, it becomes instead transparent, at least to the FTS, which is explicitly listed as a separate service in the WLCG MoU, both for T0 & T1! I.e. if we can trivially limit the impact of an intervention, we should (c.f. WLCG MoU services at Tier0/Tier1s/Tier2s). WLCG Service Deployment Lessons Learnt 19
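
The difference between the two cases is only the ordering of operations. A sketch of the "pause first" sequence follows; the channel-control calls are hypothetical placeholders standing in for the real FTS administration commands, since the point here is the ordering, not the API.

```python
# Hypothetical sketch: turning an SRM restart into a transparent intervention
# (from the FTS point of view) by pausing and draining the channel first.
# fts and srm stand for whatever admin handles the site actually uses.

def transparent_srm_restart(fts, srm, channel="IN2P3-CERN"):
    fts.pause_channel(channel)        # new transfers queue up instead of failing
    fts.wait_until_drained(channel)   # let in-flight transfers finish cleanly
    srm.restart()                     # the actual intervention on the storage service
    fts.resume_channel(channel)       # queued transfers proceed; no files have failed
```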

20 Service Review. For each service we need the current status of: Power supply (redundant, including power feed? Critical? Why?) Servers (single or multiple? DNS load-balanced? HA Linux? RAC? Other?) Network (are servers connected to separate network switches?) Middleware (can the middleware transparently handle the loss of one or more servers?) Impact (what is the impact on other services and / or users of a loss / degradation of the service?) Quiesce / recovery (can the service be cleanly paused? Is there built-in recovery, e.g. buffers? What length of interruption?) Tested (have interventions been made transparently using the above features?) Documented (operations procedures, service information). WLCG Service Deployment Lessons Learnt 20
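
One way to make such a review systematic is to capture the checklist as a record per service. The sketch below is purely illustrative (no such WLCG tool is implied); the field names paraphrase the questions above and the example answers are invented.

```python
# Illustrative record for collecting per-service review answers in one place.

from dataclasses import dataclass

@dataclass
class ServiceReview:
    service: str
    redundant_power: bool            # redundant supply, including power feed? critical?
    server_setup: str                # single host, DNS load-balanced, HA Linux, RAC, ...
    separate_switches: bool          # servers connected to separate network switches?
    middleware_tolerates_loss: bool  # can middleware hide loss of one or more servers?
    impact_on_others: str            # effect on other services / users if lost or degraded
    quiesce_and_recovery: str        # clean pause? built-in recovery? tolerable interruption
    transparency_tested: bool        # interventions actually made transparently?
    documented: bool                 # operations procedures and service information exist

# Example with invented answers, just to show the intended use:
example = ServiceReview(
    service="example-transfer-service", redundant_power=True,
    server_setup="DNS load-balanced pair", separate_switches=True,
    middleware_tolerates_loss=True,
    impact_on_others="exports stall while unavailable",
    quiesce_and_recovery="queues can be paused and drained; ~1 h interruption tolerable",
    transparency_tested=False, documented=True,
)
```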

21 WLCG Service Deployment Lessons Learnt 21

22 Why a Grid Solution? The LCG Technical Design Report lists: 1. the significant costs of [providing,] maintaining and upgrading the necessary resources are more easily handled in a distributed environment, where individual institutes and organisations can fund local resources whilst contributing to the global goal; 2. no single points of failure: multiple copies of the data and automatic reassignment of tasks to resources facilitate access to the data for all scientists independent of location, with round-the-clock monitoring and support. The Worldwide LHC Computing Grid - Jamie.Shiers@cern.ch - CCP, Gyeongju, Republic of Korea

23 Services - Summary. It's open season on SPOFs. Seek! You are a SPOF! Locate! You are the enemy of the Grid! Exterminate! You will be exterminated! WLCG Service Deployment Lessons Learnt 23

24 Summary. 2008 / 2009 LHC running will be at lower than design luminosity (but the same data rate?). Work has (re-)started with CMS to jointly address critical services. Realistically, it will take quite some effort and time to get services up to design luminosity.

25 Questions for this workshop 1. Given the schedule of the experiments and the LHC machine, (when) can we realistically deploy SRM 2.2 in production? 2. What is the roll-out schedule? (WLCG sites by name & possibly VO) 3. How long is the validation period, including possible fixes to clients (FTS etc.)? 4. For how long do we need to continue to run SRM v1.1 services? Migration issues? Clients?

26 ATLAS Visit. For those who have registered, now is a good time to pay the 10 deposit. RDV 14:00 Geneva time, CERN reception, B33.

27 Backup Slides

28 Service Progress Summary (updates presented at the June GDB)
LFC: Bulk queries deployed in February; secondary groups deployed in April. ATLAS and LHCb are currently giving new specifications for other bulk operations that are scheduled for deployment this autumn, with matching GFAL and lcg-utils changes.
DPM: SRM 2.2 support released in November; secondary groups deployed in April. Support for ACLs on disk pools has just passed certification. SL4 32- and 64-bit versions certified apart from vdt (gridftp) dependencies.
FTS 2.0: Has been through integration and testing, including certificate delegation, SRM v2.2 support and service enhancements; now being validated in the PPS and pilot service (already completed by ATLAS and LHCb); will then be used in CERN production for 1 month (from June 18th) before release to the Tier-1s. Ongoing (less critical) developments to improve monitoring piece by piece continue.
3D: All Tier-1 sites in production mode and validated with respect to ATLAS conditions DB requirements. 3D monitoring integrated into the GGUS problem reporting system. Testing to confirm Streams failover procedures in the next few weeks; coordinated DB recovery will then be exercised with all sites. Also starting Tier-1 scalability tests with many ATLAS and LHCb clients, to have the correct DB server resources in place by the autumn.
VOMS roles: Mapping to job scheduling priorities has been implemented at Tier-0 and most Tier-1s, but the behaviour is not as expected (ATLAS report that production-role jobs map to both production and normal queues), so this is being re-discussed.

29 Service Progress Summary (updates presented at the June GDB)
glite 3.1 WMS: Passed certification and is now in integration. It is being used for validation work at CERN by ATLAS and CMS, with LHCb to follow. Developers at CNAF fix any bugs, then run 2 weeks of local testing before giving patches back to CERN.
glite 3.1 CE: Still under test, with no clear date for completion. The backup solution is to keep the existing 3.0 CE, which will require SLC3 systems. Alternative solutions are also being discussed.
SL4: The SL3-built SL4 compatibility-mode UI and WN have been released, but the decision to deploy is left to sites. The native SL4 32-bit WN is in the PPS now and the UI is ready to go in; it will not be released to production until after experiment testing is completed. An SL4 DPM (needs vdt) is important for sites that buy new hardware.
SRM 2.2: CASTOR2 work is coupled to the ongoing performance enhancements; the dCache 1.8 beta has test installations at FNAL, DESY, BNL, FZK, Edinburgh, IN2P3 and NDGF, most of which are also in the PPS.
DAQ-Tier-0 Integration: Integration of ALICE with the Tier-0 has been tested at a throughput of 1 GByte/sec. LHCb testing is planned for June, then ATLAS and CMS from September.
Operations: Many improvements are under way to increase the reliability of all services. See this workshop & also the WLCG Collaboration. N.B. it's not all dials & dashboards!
