LHC schedule: what does it imply for SRM deployment? CERN, July 2007
1 WLCG Service Schedule. LHC schedule: what does it imply for SRM deployment? WLCG Storage Workshop, CERN, July 2007
2 Agenda The machine The experiments The service
3 LHC Schedule [Gantt chart spanning Mar.-Dec. of two consecutive years: interconnection of the continuous cryostat; global pressure test & consolidation; warm-up; leak tests of the last sub-sectors; flushing; powering tests; inner triplet repairs & interconnections; cool-down; operation testing of available sectors; machine checkout; beam commissioning to 7 TeV] 20/6/2007 LHC commissioning - CMS June 07 3
4 2008 LHC Accelerator schedule 20/6/2007 LHC commissioning - CMS June 07 4
5 2008 LHC Accelerator schedule 20/6/2007 LHC commissioning - CMS June 07 5
6 Machine Summary No engineering run in 2007. Startup in May 2008: we aim to be seeing high-energy collisions by the summer. No long shutdown at the end of 2008. See also the DG's talk on ...
7 Experiments Continue preparations for Full Dress Rehearsals. The schedule from CMS is very clear: CSA07 runs from September 10 for 30 days; ready for a cosmics run in November; another such run in March. ALICE have stated FDR from November. Expecting concurrent exports from ATLAS & CMS from the end of July: 1 GB/s from ATLAS, 300 MB/s from CMS. Bottom line: continuous activity; post-CHEP is likely to be (very) busy.
8 ATLAS Event Sizes We already needed more hardware in the T0 because in the TDR there was no full ESD copy to BNL included, transfers require more disk servers than expected, and there is 10% less disk space in the CAF. From the TDR: RAW = 1.6 MB, ESD = 0.5 MB, AOD = 0.1 MB; 5-day buffer at CERN = 127 TByte; currently 50 disk servers / 300 TByte. For Release 13: RAW = 1.6 MB, ESD = 1.5 MB, AOD = 0.23 MB (incl. trigger & truth) = 3.3 MB, i.e. 50% more at T0; with 3 ESD and 10 AOD copies this is a factor 2 more for exports (a quick arithmetic check follows below). More disk servers are needed for T0 internal traffic and exports; 40% less disk in the CAF; extra tapes and drives mean a 25% cost increase, which has to be taken away from the CAF again. There are also implications for T1/T2 sites: they can store 50% less data. Goal: run this summer for 2 weeks uninterrupted at nominal rates with all T1 sites. Event sizes from the cosmic run are ~8 MB (no zero suppression). CERN, June 26, 2007, Software & Computing Workshop 8
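As a quick cross-check of the event-size arithmetic quoted above, the short Python sketch below recomputes the per-event Tier-0 and export sizes from the TDR and Release 13 figures; the assumption that the export stream carries one RAW plus the 3 ESD and 10 AOD copies is taken from the slide, and the script itself is purely illustrative.

```python
# Cross-check of the ATLAS event-size figures quoted above (sizes in MB).
# Copy counts for exports (3 ESD, 10 AOD) are taken from the slide.

TDR = {"RAW": 1.6, "ESD": 0.5, "AOD": 0.1}
REL13 = {"RAW": 1.6, "ESD": 1.5, "AOD": 0.23}

def t0_size(ev):
    """Per-event size kept at the Tier-0: one copy of each format."""
    return ev["RAW"] + ev["ESD"] + ev["AOD"]

def export_size(ev, n_esd=3, n_aod=10):
    """Per-event size shipped out: RAW plus multiple ESD/AOD copies."""
    return ev["RAW"] + n_esd * ev["ESD"] + n_aod * ev["AOD"]

print(f"T0:     TDR {t0_size(TDR):.1f} MB, Rel.13 {t0_size(REL13):.1f} MB "
      f"(~{100 * (t0_size(REL13) / t0_size(TDR) - 1):.0f}% more)")
print(f"Export: TDR {export_size(TDR):.1f} MB, Rel.13 {export_size(REL13):.1f} MB "
      f"(factor {export_size(REL13) / export_size(TDR):.1f})")
# Prints roughly: T0 2.2 -> 3.3 MB (~51% more); exports 4.1 -> 8.4 MB (factor 2.0),
# consistent with the "50% more at T0" and "factor 2 more for exports" above.
```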
9 ATLAS T0→T1 Exports, situation at May 28 [table of per-site results for ASGC, BNL, CNAF, FZK, Lyon, NDGF, PIC, RAL, SARA and Triumf, with columns for efficiency (%), average throughput (MB/s), nominal rate (MB/s), and whether 50% / 100% / 150% / 200% of nominal was achieved; numbers not recovered] CERN, June 26, 2007, Software & Computing Workshop 9
10
11 Services Schedule Q: What do you (CMS) need for CSA07? A: Nothing; we would like FTS 2.0 at the Tier-1s (and not too late), but it is not required for CSA07 to succeed. We are trying to ensure that this is done at the CMS T1s. The other major residual service: SRM v2.2, with 2 windows of opportunity: post-CSA07 and early 2008. Q: How long will SRM 1.1 services be needed? 1 week? 1 month? 1 year? The LHC annual schedule has a significant impact on larger service upgrades / migrations; cf. the COMPASS triple migration.
12 S.W.O.T. Analysis of WLCG Services Strengths: we do have a service that is used, albeit with a small number of well-known and documented deficiencies (with work-arounds). Weaknesses: continued service instabilities; holes in operational tools & procedures; the ramp-up will take at least several (many?) months more. Threats: hints of possible delays could re-ignite discussions on new features. Opportunities: maximise the time remaining until high-energy running to 1) ensure all remaining residual services are deployed as rapidly as possible, but only when sufficiently tested & robust, and 2) focus on smooth service delivery, with emphasis on improving all operation, service and support activities. All services (including residual ones) should be in place no later than Q1 2008, by which time a marked improvement in the measurable service level should also be achievable.
13 LCG Steep ramp-up still needed before first physics run [charts: CERN + Tier-1s installed and required CPU capacity (MSI2K) and installed and required disk capacity (PetaBytes), month by month from Apr 06 to Apr 08] Evolution of installed capacity from April 06 to June 07; target capacity from MoU pledges for 2007 (due July 07) and 2008 (due April 08).
14 WLCG Service: S / M / L vision Short-term: ready for Full Dress Rehearsals, now expected to fully ramp up ~mid-September (after CHEP). The only thing I see as realistic on this time-frame is FTS 2.0 services at the WLCG Tier-0 & Tier-1s. Schedule: June 18th at CERN; available mid-July for the Tier-1s. Medium-term: what is needed & possible for 2008 LHC data taking & processing. The remaining residual services must be in full production mode early in Q1 2008 at all WLCG sites! Significant improvements in monitoring, reporting and logging; more timely error response; service improvements. Long-term: anything else. The famous sustainable e-infrastructure? WLCG Service Deployment Lessons Learnt 14
15 WLCG Service Deployment Lessons Learnt 15
16 Types of Intervention 0. (Transparent) load-balanced servers / services 1. Infrastructure: power, cooling, network 2. Storage services: CASTOR, dCache 3. Interaction with backend DB: LFC, FTS, VOMS, SAM etc.
17 Transparent Interventions - Definition We have reached agreement with the LCG VOs that the combination of hardware / middleware / experiment-ware should be resilient to service glitches. A glitch is defined as a short interruption of (one component of) the service that can be hidden, at least to batch work, behind some retry mechanism(s). How long is a glitch? All central CERN services are covered for power glitches of up to 10 minutes; some are also covered for longer by diesel UPS, but any non-trivial service seen by the users is only covered for 10 minutes. Can we implement the services so that ~all interventions are transparent? YES, with some provisos. EGI Preparation Meeting, Munich, March [email protected] 17
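To illustrate the kind of retry mechanism that can hide such a glitch from batch work, here is a minimal sketch; the with_retry helper, its timing parameters and the commented stage-out call are hypothetical illustrations, not part of any WLCG middleware.

```python
import random
import time

def with_retry(call, max_wait=600, base_delay=5):
    """Retry a flaky service call until it succeeds or the accumulated wait
    exceeds max_wait seconds (600 s ~ the 10-minute glitch window above).
    Delays grow exponentially, with jitter to avoid synchronised retries."""
    waited, delay = 0.0, base_delay
    while True:
        try:
            return call()
        except ConnectionError:
            if waited >= max_wait:
                raise  # longer than a glitch: give up and surface the failure
            pause = delay + random.uniform(0, delay)
            time.sleep(pause)
            waited += pause
            delay = min(delay * 2, 60)

# Hypothetical usage: wrap a batch job's stage-out so that a short SRM
# hiccup is retried instead of failing the job.
# result = with_retry(lambda: srm_client.put("/castor/cern.ch/grid/.../file"))
```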
18 Targeted Interventions Common interventions include: adding additional resources to an existing service; replacing hardware used by an existing service; operating system / middleware upgrades / patches; similar operations on the DB backend (where applicable). Pathological cases include: massive machine-room reconfigurations, as was performed at CERN (and elsewhere) to prepare for LHC; widespread power or cooling problems; major network problems, such as DNS / router / switch problems. Pathological cases clearly need to be addressed too! Lessons Learnt from WLCG Service Deployment - [email protected] 18
19 More Transparent Interventions "I am preparing to restart our SRM server here at IN2P3-CC, so I have closed the IN2P3 channel on prod-fts-ws in order to drain the current transfer queues. I will open them in 1 hour or 2." Is this a transparent intervention or an unscheduled one? A: technically unscheduled, since it's SRM downtime. An EGEE broadcast was made, but this is just an example. However, if the channel were first paused, which would mean that no files would fail, it becomes transparent, at least to the FTS, which is explicitly listed as a separate service in the WLCG MoU, both for T0 & T1! I.e. if we can trivially limit the impact of an intervention, we should (cf. WLCG MoU services at Tier-0 / Tier-1s / Tier-2s). WLCG Service Deployment Lessons Learnt 19
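The "pause the channel first, then intervene" recipe described above can be summarised in a few lines of pseudo-automation; in the sketch below the fts_* and restart_srm_endpoint helpers are invented stand-ins for whatever channel-management and storage tooling a site actually uses, so only the ordering of the steps should be read as the point.

```python
import time

# Invented stand-ins for a site's real tooling; only the ordering matters.
def fts_set_channel_state(channel: str, state: str) -> None:
    print(f"[stub] channel {channel} -> {state}")

def fts_active_transfers(channel: str) -> int:
    return 0  # stub: pretend all in-flight transfers have completed

def restart_srm_endpoint() -> None:
    print("[stub] restarting the SRM server")

def transparent_srm_restart(channel: str, drain_timeout: int = 7200) -> None:
    """Pause, drain, intervene, resume: queued files wait instead of failing,
    so the SRM restart looks like a pause rather than an unscheduled outage."""
    fts_set_channel_state(channel, "Paused")    # new transfers queue up, none fail
    start = time.time()
    while fts_active_transfers(channel) > 0:    # let in-flight transfers finish
        if time.time() - start > drain_timeout:
            break
        time.sleep(60)
    restart_srm_endpoint()                      # the actual intervention
    fts_set_channel_state(channel, "Active")    # resume the channel

if __name__ == "__main__":
    transparent_srm_restart("IN2P3-CERN")
```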
20 Service Review For each service we need the current status of: power supply (redundant, including the power feed? Critical? Why?); servers (single or multiple? DNS load-balanced? HA Linux? RAC? Other?); network (are the servers connected to separate network switches?); middleware (can the middleware transparently handle the loss of one or more servers?); impact (what is the impact on other services and / or users of a loss / degradation of the service?); quiesce / recovery (can the service be cleanly paused? Is there built-in recovery, e.g. buffers? What length of interruption is tolerated?); tested (have interventions been made transparently using the above features?); documented (operations procedures, service information). WLCG Service Deployment Lessons Learnt 20
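One way to make such a review systematic is to record the answers per service in a fixed structure; the sketch below shows one possible shape for that checklist, with field names mirroring the slide and an entirely invented example entry (not real CERN service data).

```python
from dataclasses import dataclass

@dataclass
class ServiceReview:
    """One per-service row of the review checklist described above."""
    name: str
    redundant_power: bool       # redundant supply, including the power feed?
    servers: str                # single/multiple, DNS load-balanced, HA Linux, RAC, ...
    separate_switches: bool     # servers on separate network switches?
    middleware_resilient: bool  # can it transparently lose one or more servers?
    impact_if_lost: str         # effect on other services / users
    quiesce_recovery: str       # clean pause? built-in recovery (buffers)? tolerated outage?
    tested_transparently: bool  # have transparent interventions been exercised?
    documented: bool            # operations procedures and service information exist?

# Invented illustrative entry, not real CERN data.
fts_review = ServiceReview(
    name="FTS",
    redundant_power=True,
    servers="DNS load-balanced web-service nodes plus agent hosts",
    separate_switches=False,
    middleware_resilient=True,
    impact_if_lost="T0->T1 exports stall; transfers resume once the service returns",
    quiesce_recovery="channels can be paused; queued jobs persist in the backend DB",
    tested_transparently=False,
    documented=True,
)
```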
21 WLCG Service Deployment Lessons Learnt 21
22 Why a Grid Solution? The LCG Technical Design Report lists: 1. the significant costs of [providing,] maintaining and upgrading the necessary resources are more easily handled in a distributed environment, where individual institutes and organisations can fund local resources whilst contributing to the global goal; 2. no single points of failure: multiple copies of the data and automatic reassigning of tasks to resources facilitate access to data for all scientists independent of location, with round-the-clock monitoring and support. The Worldwide LHC Computing Grid - [email protected] - CCP Gyeongju, Republic of Korea
23 Services - Summary It's open season on SPOFs. Seek! Locate! Exterminate! You are a SPOF! You are the enemy of the Grid! You will be exterminated! WLCG Service Deployment Lessons Learnt 23
24 Summary 2008 / 2009 LHC running will be at lower than design luminosity (but the same data rate?). Work has (re-)started with CMS to jointly address critical services. Realistically, it will take quite some effort and time to get services up to design luminosity.
25 Questions for this workshop 1. Given the schedule of the experiments and the LHC machine, (when) can we realistically deploy SRM 2.2 in production? 2. What is the roll-out schedule? (WLCG sites by name & possibly VO) 3. How long is the validation period, including possible fixes to clients (FTS etc.)? 4. For how long do we need to continue to run SRM v1.1 services? Migration issues? Clients?
26 ATLAS Visit For those who have registered, now is a good time to pay the 10 deposit RDV 14:00 Geneva time, CERN reception, B33
27 Backup Slides
28 Service Progress Summary (updates presented at the June GDB)
LFC: Bulk queries deployed in February; secondary groups deployed in April. ATLAS and LHCb are currently giving new specifications for other bulk operations, scheduled for deployment this autumn, with matching GFAL and lcg-utils changes.
DPM: SRM 2.2 support released in November; secondary groups deployed in April. Support for ACLs on disk pools has just passed certification. SL4 32- and 64-bit versions are certified apart from the vdt (gridftp) dependencies.
FTS 2.0: Has been through integration and testing, including certificate delegation, SRM v2.2 support and service enhancements; now being validated in the PPS and the pilot service (already completed by ATLAS and LHCb); will then be used in CERN production for 1 month (from June 18th) before release to the Tier-1s. Ongoing (less critical) developments to improve monitoring continue piece by piece.
3D: All Tier-1 sites are in production mode and validated with respect to the ATLAS conditions DB requirements. 3D monitoring is integrated into the GGUS problem reporting system. Testing to confirm the Streams failover procedures will take place in the next few weeks, then coordinated DB recovery will be exercised with all sites. Tier-1 scalability tests with many ATLAS and LHCb clients are also starting, to have the correct DB server resources in place by the autumn.
VOMS roles: Mapping to job scheduling priorities has been implemented at the Tier-0 and most Tier-1s, but the behaviour is not as expected (ATLAS report that production-role jobs map to both production and normal queues), so this is being re-discussed.
29 Service Progress Summary (updates presented at the June GDB, continued)
gLite 3.1 WMS: Passed certification and is now in integration. It is being used for validation work at CERN by ATLAS and CMS, with LHCb to follow. Developers at CNAF fix any bugs, then run 2 weeks of local testing before giving patches back to CERN.
gLite 3.1 CE: Still under test, with no clear date for completion. The backup solution is to keep the existing 3.0 CE, which will require SLC3 systems. Alternative solutions are also being discussed.
SL4: The SL3-built SL4 compatibility-mode UI and WN have been released, but the decision to deploy is left to sites. The native SL4 32-bit WN is in the PPS now and the UI is ready to go in; it will not be released to production until experiment testing is completed. SL4 DPM (needs vdt) is important for sites that buy new hardware.
SRM 2.2: CASTOR2 work is coupled to the ongoing performance enhancements; the dCache 1.8 beta has test installations at FNAL, DESY, BNL, FZK, Edinburgh, IN2P3 and NDGF, most of which are also in the PPS.
DAQ-Tier-0 Integration: Integration of ALICE with the Tier-0 has been tested at a throughput of 1 GByte/s. LHCb testing is planned for June, then ATLAS and CMS from September.
Operations: Many improvements are under way to increase the reliability of all services. See this workshop & also the WLCG Collaboration ... N.B. it's not all dials & dashboards!