ALICE GRID & Kolkata Tier-2
1 ALICE GRID & Kolkata Tier-2. Site names: IN-DAE-VECC-01 & IN-DAE-VECC-02. VO: ALICE. City: Kolkata; Country: India. Vikas Singhal, VECC, Kolkata.
2 Events at the LHC. Luminosity in cm-2 s-1; bunch crossings every 25 ns (a MHz-scale rate); ~20 overlapping events per crossing.
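The crossing rate implied by the 25 ns bunch spacing follows from one line of arithmetic; this is an illustrative sketch, not part of the slides:

```python
# A bunch spacing of 25 ns implies a crossing rate of 1 / 25e-9 s = 40 MHz.
bunch_spacing_s = 25e-9              # 25 ns between bunch crossings
crossing_rate_hz = 1.0 / bunch_spacing_s
print(f"{crossing_rate_hz / 1e6:.0f} MHz")  # 40 MHz
```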
3 The Grid Computing Model (diagram): Tier-0 centre at CERN; Tier-1 regional centres (e.g. France, Germany, Italy, UK, USA, Scandinavia, Japan); Tier-2 centres at labs and universities; Tier-3 physics-department resources and desktops; serving the LHC experiments (ATLAS, CMS, LHCb, ALICE).
4 ALICE computing model. RAW data delivered by the DAQ undergo calibration and reconstruction, which produce three kinds of objects for each event: 1. ESD objects, 2. AOD objects, 3. Tag objects. This is done at the Tier-0 site; further reconstruction and calibration of RAW data are done at Tier-1 and Tier-2 sites. The generation, reconstruction, storage and distribution of Monte Carlo simulated data are the main task of the Tier-1 and Tier-2 sites. DPD (Derived Physics Data) objects are processed at Tier-3 and Tier-4. (Diagram: online system and farm feeding the Tier-0 CERN Computer Centre at ~40 Gb/s; Tier-1 regional centres in France, Germany and Italy at 10 Gb/s; 622 Mb/s and 1-10 Gb/s links to Tier-2 centres, including Kolkata via APROC, Taiwan; institute physics-data caches at Mb/s speeds below.)
5 ALICE setup. Size: 16 x 26 metres; weight: 10,000 tons. Sub-detectors: HMPID, TOF, TRD, TPC, PMD, ITS, PHOS, Muon Arm. Indian contribution to ALICE: PMD and the Muon Arm.
6 The ALICE collaboration & detector. Collaboration: ~1/2 the size of ATLAS or CMS, ~2x LHCb; ~1100 people from 80 institutes in 30 countries. Detector: total weight 10,000 t, overall diameter 16.00 m, overall length 25 m, magnetic field 0.4 T.
7 Data volumes.
RAW data: 2.5 PB/year, over two distinct periods: p+p (~7.5 months) and Pb+Pb (~40 days).
Reconstructed and simulated data: 1.5 PB of first-level RAW filtering (ESDs), 200 TB of second-level RAW filtering (AODs), 1 PB of simulated data.
User-generated data: ~500 TB.
Total: ~5 PB of data per year (without replicas). Replication: 2x RAW, 3x ESDs/AODs, 2x user files.
Taken from L. Betev's slides at the T1-T2 meeting at KIT Karlsruhe, January 2012.
8 Processing. RAW data reconstruction: ~10K CPU cores. MC processing: ~15K CPU cores. User analysis: ~7K CPU cores (450 distinct users). ~40 million jobs per year, i.e. ~1.3 jobs completed every second; half production, half user jobs. 200 million files per year. Taken from L. Betev's slides at the T1-T2 meeting at KIT Karlsruhe, January 2012.
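The yearly totals on this slide can be cross-checked with a short script; this is a sketch of the arithmetic, and the replicated total (the slides do not state a replication factor for simulated data) is my own extrapolation:

```python
# Yearly ALICE data volumes in PB, as quoted on the slide.
raw, esds, aods, simulated, user = 2.5, 1.5, 0.2, 1.0, 0.5

# Sums to 5.7 PB, which the slide rounds to "~5 PB" per year without replicas.
total = raw + esds + aods + simulated + user

# Replication factors quoted: 2x RAW, 3x ESDs/AODs, 2x user files
# (simulated data replication is not specified, so it is omitted here).
replicated = 2 * raw + 3 * (esds + aods) + 2 * user
print(total, replicated)
```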
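The quoted job rate is consistent with the yearly total, as a quick check shows (illustrative only):

```python
# ~40 million jobs per year works out to the quoted ~1.3 jobs per second.
jobs_per_year = 40e6
seconds_per_year = 365 * 24 * 3600
rate = jobs_per_year / seconds_per_year
print(f"{rate:.2f} jobs/s")  # 1.27 jobs/s
```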
9 KOLKATA ALICE
10 ALICE sites on MonALISA: 72 active computing sites across Europe, Asia, Africa, North America and South America (world map).
11 Why Tier-2? 1. Tier-2 is the lowest level accessible to the entire collaboration. 2. Each sub-detector of ALICE has to be associated with at least a Tier-2 because of the large volume of calibration and simulated data. 3. PMD is one of the important sub-detectors of ALICE. 4. We are solely responsible for PMD, from conception to commissioning.
12 Grid site, as per WLCG & experiment requirements. Central services: WMS, MyProxy, VOMS. Site services: Site BDII, CREAM-CE, WNs (more and more WNs), SE (pure XRootD), LCG-UI, disks (more and more disks).
13 KOLKATA, or a general site (diagram). Central services: WMS, MyProxy, VO-BOX. Infrastructure: cooling, UPS, fire alarm, access control, etc. Site services: monitoring server, Site BDII, CREAM-CE, storage (DPM / pure XRootD with an XRootD redirector and XRootD disk servers), installation and DHCP servers, local and global network with a fibre line. Support servers: NFS, PBS, DNS, UI, Tier-3 management server and cluster, HA server. Hardware: 64-bit blade servers with blade enclosures, 32/64-bit servers, a few tower servers, 1U & 2U servers (HP, Dell, IBM, etc.); storage ranging from a new SAN box through old and older NAS to even older DAS disk arrays (more and more arrays).
14 Front-end components of the site & installation: VO-BOX, Site BDII, LCG-CE / CREAM-CE, SE (pure XRootD), LCG-UI. Grid middleware meta-packages are installed through YUM and configured through YAIM. The middleware changes from time to time (e.g. gLite to EMI; follow the manual). During the Kolkata site installation and configuration we ran into RPM dependency problems with Java, security packages, etc. The community and the mailing lists helped a lot; for most problems we got the solution from a mailing list. Thanks to APROC, Taiwan for helping at each stage.
15 Middleware installed on the IN-DAE-VECC-02 site. 1. Installed the SLC 5.8 (x86_64) operating system on x86_64 machines. 2. Upgraded the middleware packages below to EMI middleware: glite-vobox, CREAM-CE (64-bit), glite-bdii, pure XRootD redirector as Storage Element, glite-wn (64-bit). Hosts: grid01.tier2-kol.res.in, gridce02.tier2-kol.res.in, dcache-server.tier2-kol.res.in, and 79 worker nodes (476 cores): wn045-wn123.internal.tier2-kol.res.in.
16 Back-end components of the site.
Router & switch: two networks, one public and one private.
Domain Name Server: the DNS server is a critical component; we have two redundant name servers, naamak and suchak, for high availability.
Time server: configured the NTP protocol.
Installer: network installation and automated configuration using Quattor-like tools.
Storage server: NFS-mounted common shared space.
PBS server: CE & PBS batch scheduler on one server; configured the firewall (through iptables) and NAT on it.
Tier-3 cluster: a separate cluster for local users with interactive and non-interactive nodes.
Monitoring server: configured MRTG (network traffic monitoring) and a cluster monitoring tool.
17 Preventive maintenance is done once a year.
18 Kolkata Tier-2 centre logical diagram: a 300 Mbps internet router feeding a 1 Gbps fibre backbone; grid, grid-peer, gridse001, gridce02, grid01 and dcache-server nodes behind switches; redundant name servers naamak and suchak (with standbys); installer and backup server (with SINP); computing nodes wn001-wn123 of the IN-DAE-VECC-02 site, 64-bit Dell and HP blade servers with multi-core Xeon 3.0 GHz CPUs; 4 XRootD disk servers consisting of 230 TB of IBM and HP SAN storage; and the GRID-PEER Tier-3 cluster of 32- and 64-bit machines (Dell and Wipro blades) with 25 TB of storage.
19 ALICE Tier-2 grid, started in 2002 by S. K. Pal & T. Samanta, with a 512 Kbps Ethernet link to CERN. Operating system: Scientific Linux 3.05. Middleware: AliEn (ALICE Environment) with PBS as the batch system. Hardware (CPU, disk): 1 x dual-Xeon, 4 GB compute node; 2 x dual-Xeon, 2 GB WNs; 2 x 80 GB of disk space. Bandwidth: 512 Kbps, shared.
20 From 2 cores to 700 cores (by 2012): started with a desktop machine, then tower-like servers, HP 1U servers, Wipro 1U servers, single-core HP blades, dual-core HP blades, quad-core dual-processor Dell blades, and a GPU server with a Tesla 2070 (448 cores).
21 Kolkata Tier-2 on MonALISA.
22 From 512 MB of disk to 300 TB of disk (by 2012): started with MB-scale disks in a desktop machine; then GB-scale DAS in tower-like servers; GB in an HP MSA; TB in a Wipro NAS; TB in an HP EVA SAN; TB of iSCSI storage; TB in an IBM DS; plus the hard disks in the GPU server.
23
24 From 128 Kbps to 1 Gbps (by 2012): started with a Kbps shared link; then Kbps and Mbps dedicated links; Mbps from Bharti; Mbps from Reliance; Mbps from VSNL (ERNET); Mbps from NKN; now upgrading to 1 Gbps.
25 Efficient cooling: concept and implementation. Hot and cold air are separated; for air separation, a cold-air containment is created. The cold-aisle containment is the least accessible area: only the hardware racks are cooled, not people, walls, etc., and human intervention in the cold-aisle containment is restricted. All management and monitoring of the servers and storage is done from outside the cold-aisle containment, and all power and Ethernet cables also run outside it. The temperature gradient between the cold and hot aisles is 5 °C.
26 Kolkata Tier-2 After renovation
27 Major achievements. More than 400 ALICE jobs have been running consistently since the commissioning of the efficient cooling solution.
28 Achieved pledged resources: Kolkata Tier-2 provides a total of 6.0K HEP-SPEC06 of CPU and 230 TB of disk storage.
29 Jobs completed: ~1M ALICE jobs successfully completed during the last year (plot of jobs completed vs. time).
30 Total Kolkata Tier-2 resources.
Computing: 476 cores in total; Dell blades 32 * 8 = 256, HP quad-core blades 8 * 8 = 64, HP dual-core blades 39 * 4 = 156.
Storage: 230 TB under one HP 2U management server; 74 TB HP EVA 6100 under 2 * 2U HP disk servers; 156 TB IBM DS 5100 under 2 * 1U IBM disk servers.
Network: 300 Mbps, to be increased to 1 Gbps during this year.
31 After connecting to the NKN network, the link speed increased to 300 Mbps.
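The totals on this slide are internally consistent, as a quick check shows (illustrative arithmetic only):

```python
# Core counts quoted on the slide: blades x cores-per-blade.
dell_cores = 32 * 8      # Dell blades
hp_quad_cores = 8 * 8    # HP quad-core blades
hp_dual_cores = 39 * 4   # HP dual-core blades
total_cores = dell_cores + hp_quad_cores + hp_dual_cores

# Storage: HP EVA 6100 (74 TB) + IBM DS 5100 (156 TB).
storage_tb = 74 + 156
print(total_cores, storage_tb)  # 476 230
```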
32 Grid-Peer Tier-3 cluster. 1U sliding LCD monitor with a 16-port KVM. Dell PowerEdge M1000e blade server chassis with 16 Dell PowerEdge M610 high-performance Intel blades. Each blade has two Nehalem-based Intel quad-core E5530 Xeon 2.4 GHz CPUs with 8 MB cache, 16 GB of RAM, and 2 * 146 GB disks mounted as RAID1. Installed SLC 5.6 x86_64 OS (el5 kernel). Dell EqualLogic iSCSI storage: 16 * 2 TB SAS hard disks; usable space after RAID5 and a hot spare.
33 Grid-Peer Tier-3 cluster (cont.). 25 nodes in total for VECC users and PMD collaborators: 32-bit nodes plus 13 64-bit computing nodes. The 32-bit nodes are on the oldest hardware, procured in 2004 (we will slowly deprecate them because of their high noise, power draw and heat generation). 25 TB of total storage. Active users across India, and 30+ active users in VECC; quotas are implemented. ROOT, Geant3, AliRoot, AliEn, Fortran and other user-specific software are installed according to the hardware (32-bit and 64-bit). Extensively used by the users; needs to be extended.
34 By-products of the WLCG grid: Intra-DAE Grid, EU-IndiaGrid, HealthGrid, IGCA, GARUDA Grid.
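The usable capacity of the EqualLogic box (the exact figure is lost in the transcription) follows from standard RAID 5 arithmetic; this sketch assumes one hot spare and one disk's worth of parity overhead:

```python
# RAID 5 usable capacity: (disks - hot spares - 1 parity disk) * disk size.
disks, spares, disk_tb = 16, 1, 2
usable_tb = (disks - spares - 1) * disk_tb
print(usable_tb)  # 28 TB raw; formatted capacity is somewhat lower
```

The 28 TB raw figure is consistent with the ~25 TB of Tier-3 storage quoted on the next slide once filesystem overhead is taken into account.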
35 Thank You
36 Supporting Slides
37 Main data types in ALICE (diagram). Raw data are reconstructed in passes (pass 1 at T0, pass 2 through pass N at T1s) using conditions, calibration and alignment data from the OCDB (updated by pass0-passN), with files registered in the AliEn file catalogue; AliRoot reconstruction and filtering produce Event Summary Data (ESDs), with Monte Carlo adding extra information, and analysis runs on ESDs and standard AODs. ESD: run/event numbers, trigger word, primary vertex, arrays of tracks/vertices, detector info. AOD (standard): cleaned-up ESDs, reducing the size by a factor of 5; can be extended on user demand with extra information. ESD and AOD inherit from the same base class (keeping the same event interface).
38 ALICE job model (diagram). A user submits a job to the ALICE central services; the optimizer splits it into sub-jobs (e.g. job 1.1, 2.1, 3.1, ...) with their input LFNs and performs matchmaking against close SEs and available software. The ALICE file catalogue maps each LFN to a GUID and the SEs holding it. The WMS sends job agents to sites such as the Kolkata Tier-2 on the ALICE grid; on the site, the VO-BOX and AliEn CE (with packman) ask for workload, the agent on the WN checks the user environment (dying gracefully if it is not OK), retrieves and receives the workload, registers the output, and sends the job result back, updating the task queue (TQ).
39 XRootD architecture. The client sees all servers as a single xrootd data server; all storage is on the WAN. The client asks the redirector (head node) "who has file X?"; the redirector caches file locations and answers "go to C", pointing the client at the data server in the cluster that holds the file, including on a second open. A global redirector (not in the picture) provides intra-site storage collaboration between redirectors.
40 Grid security (in a nutshell!). It is important to be able to identify and authorise users, and possibly to enable/disable certain actions. X.509 certificates are used: the grid "passport", delivered by a certification authority (IGCA for India). To use the grid, you create short-lived proxies: the same information as the certificate, but valid only for the duration of the action. Group and role can be added to a proxy using the VOMS extensions, which allows the same person to wear different hats (e.g. normal user or production manager). Your certificate is your passport: you should sign whenever you use it, and never give it away. There is less danger if a proxy is stolen, since it is short-lived.
41 The VOBOX. The VOBOX is a WLCG service developed in 2006 to provide the experiments with a service to: a) run their own services; b) in addition, it provides file-system access to the experiment software area. The concept of the VOBOX is not the same for the four LHC experiments; ALICE requires the standard WLCG VOBOX.
42 Storage strategy (diagram). DPM, CASTOR and dCache are LCG-developed SEs, fronted by SRM and backed by an MSS where applicable; xrootd is entering as a strategic solution. A pure-xrootd SE has a head node running the xrootd manager and disk servers (and WNs) running xrootd workers. DPM is the old implementation and CASTOR the current version; dCache uses xrootd emulation on its workers, which is working but has severe limits with multiple clients.
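The point about short-lived proxies can be illustrated with a toy validity check; this is purely illustrative (real proxies are inspected with tools such as voms-proxy-info, and the lifetime below is an assumed typical value):

```python
from datetime import datetime, timedelta

def proxy_valid(issued_at: datetime, lifetime: timedelta, now: datetime) -> bool:
    """A proxy is usable only between its issuance and its expiry."""
    return issued_at <= now < issued_at + lifetime

issued = datetime(2012, 1, 1, 9, 0)
lifetime = timedelta(hours=12)   # assumed short proxy lifetime
print(proxy_valid(issued, lifetime, issued + timedelta(hours=2)))   # True
print(proxy_valid(issued, lifetime, issued + timedelta(hours=13)))  # False: expired
```

This is why a stolen proxy is less dangerous than a stolen certificate: outside its short validity window it is useless.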
43 What is MonALISA? A Caltech project: a Java-based set of distributed, self-describing services. It offers the infrastructure to collect any type of information and can process it in near real time; the services can cooperate in performing the monitoring tasks and can act as a platform for running distributed user agents.
44 MonALISA software components and the connections between them: clients, high-level services and other data consumers connect through proxies (a multiplexing layer that helps firewalled endpoints connect) to agents and MonALISA data-gathering services, which handle registration and discovery via a network of JINI lookup services (secure & public). It is a fully distributed system with no single point of failure.
45 PROOF: the Parallel ROOT Facility. Interactive parallel analysis on a local cluster: parallel processing of (local) data, fast feedback, and output handling with direct visualization. PROOF is part of ROOT.
46 PROOF schema (diagram). A client on a local PC sends the analysis macro (ana.C) to the PROOF master on a remote PROOF cluster; the master distributes it to PROOF slaves (node1-node4), each running root over its local data; the partial results flow back through the master and are merged into the final result on the client (stdout/result). Vikas Singhal, VECC, India.
DELL Dell Microsoft Windows Server 2008 Hyper-V TM Reference Architecture VIRTUALIZATION SOLUTIONS ENGINEERING September 2008 1 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL
More informationSPACI & EGEE LCG on IA64
SPACI & EGEE LCG on IA64 Dr. Sandro Fiore, University of Lecce and SPACI December 13 th 2005 www.eu-egee.org Outline EGEE Production Grid SPACI Activity Status of the LCG on IA64 SPACI & EGEE Farm Configuration
More information112 Linton House 164-180 Union Street London SE1 0LH T: 020 7960 5111 F: 020 7960 5100
112 Linton House 164-180 Union Street London SE1 0LH T: 020 7960 5111 F: 020 7960 5100 Our dedicated servers offer outstanding performance for even the most demanding of websites with the low monthly fee.
More informationCORRIGENDUM TO TENDER FOR HIGH PERFORMANCE SERVER
CORRIGENDUM TO TENDER FOR HIGH PERFORMANCE SERVER Tender Notice No. 3/2014-15 dated 29.12.2014 (IIT/CE/ENQ/COM/HPC/2014-15/569) Tender Submission Deadline Last date for submission of sealed bids is extended
More informationBoas Betzler. Planet. Globally Distributed IaaS Platform Examples AWS and SoftLayer. November 9, 2015. 20014 IBM Corporation
Boas Betzler Cloud IBM Distinguished Computing Engineer for a Smarter Planet Globally Distributed IaaS Platform Examples AWS and SoftLayer November 9, 2015 20014 IBM Corporation Building Data Centers The
More informationRICOH Data Center Services
RICOH Data Center Services 1 About RICOH RICOH Overview We are Fortune Global 500 Company Established in 1993 Global 100 Most Sustainable Corporations in the world (9 consecutive Yrs) World s 100 Most
More informationRed Hat Enterprise Virtualization - KVM-based infrastructure services at BNL
Red Hat Enterprise Virtualization - KVM-based infrastructure services at Presented at NLIT, June 16, 2011 Vail, Colorado David Cortijo Brookhaven National Laboratory dcortijo@bnl.gov Notice: This presentation
More informationAvid ISIS 2500-2000 v4.7.7 Performance and Redistribution Guide
Avid ISIS 2500-2000.7 Performance and Redistribution Guide Change History Date Release Changes 11/13/2015 4.7.7 Added support for El Capitan (Mac OS 10.11) Added support for Atto Thunderlink 10Gb for Mac
More informationCERN Cloud Storage Evaluation Geoffray Adde, Dirk Duellmann, Maitane Zotes CERN IT
SS Data & Storage CERN Cloud Storage Evaluation Geoffray Adde, Dirk Duellmann, Maitane Zotes CERN IT HEPiX Fall 2012 Workshop October 15-19, 2012 Institute of High Energy Physics, Beijing, China SS Outline
More informationDSS. High performance storage pools for LHC. Data & Storage Services. Łukasz Janyst. on behalf of the CERN IT-DSS group
DSS High performance storage pools for LHC Łukasz Janyst on behalf of the CERN IT-DSS group CERN IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it Introduction The goal of EOS is to provide a
More informationQHR Accuro EMR IT Hardware Requirements
QHR Accuro EMR IT Hardware Requirements Hardware Requirements for Accuro EMR Table of Contents Local Install Platform:... 3 Server Requirements:... 3 Workstation Requirements:... 4 Peripheral Requirements:...
More informationREPLIES TO PRE BID REPLIES FOR REQUEST FOR PROPOSAL FOR SUPPLY, INSTALLATION AND MAINTENANCE OF SERVERS & STORAGE
REPLIES TO PRE BID REPLIES FOR REQUEST FOR PROPOSAL FOR SUPPLY, INSTALLATION AND MAINTENANCE OF SERVERS & STORAGE NPCI/RFP/2015-16/IT/012 dated 27.08.2015 S.No Document Reference Page No Clause No Description
More informationGrid on Blades. Basil Smith 7/2/2005. 2003 IBM Corporation
Grid on Blades Basil Smith 7/2/2005 2003 IBM Corporation What is the problem? Inefficient utilization of resources (MIPS, Memory, Storage, Bandwidth) Fundamentally resources are being wasted due to wide
More informationArrow ECS sp. z o.o. Oracle Partner Academy training environment with Oracle Virtualization. Oracle Partner HUB
Oracle Partner Academy training environment with Oracle Virtualization Technology Oracle Partner HUB Overview Description of technology The idea of creating new training centre was to attain light and
More informationTier-1 Services for Tier-2 Regional Centres
Tier-1 Services for Tier-2 Regional Centres The LHC Computing MoU is currently being elaborated by a dedicated Task Force. This will cover at least the services that Tier-0 (T0) and Tier-1 centres (T1)
More informationVirtualised MikroTik
Virtualised MikroTik MikroTik in a Virtualised Hardware Environment Speaker: Tom Smyth CTO Wireless Connect Ltd. Event: MUM Krackow Feb 2008 http://wirelessconnect.eu/ Copyright 2008 1 Objectives Understand
More informationAgenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.
Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance
More informationManaging managed storage
Managing managed storage CERN Disk Server operations HEPiX 2004 / BNL Data Services team: Vladimír Bahyl, Hugo Caçote, Charles Curran, Jan van Eldik, David Hughes, Gordon Lee, Tony Osborne, Tim Smith Outline
More informationMain Memory Data Warehouses
Main Memory Data Warehouses Robert Wrembel Poznan University of Technology Institute of Computing Science Robert.Wrembel@cs.put.poznan.pl www.cs.put.poznan.pl/rwrembel Lecture outline Teradata Data Warehouse
More informationHP Proliant BL460c G7
HP Proliant BL460c G7 The HP Proliant BL460c G7, is a high performance, fully fault tolerant, nonstop server. It s well suited for all mid-level operations, including environments with local storage, SAN
More informationEMC Unified Storage for Microsoft SQL Server 2008
EMC Unified Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON and EMC FAST Cache Reference Copyright 2010 EMC Corporation. All rights reserved. Published October, 2010 EMC believes the information
More informationNetwork Attached Storage Common Configuration for Entry-Level
Common Configuration for Entry-Level ATTACHMENT VI Product Attribute Entry Level -- Description RFI Response Main Memory (Base): 4 Internal Storage Controller: Fiber Channel RAID Controller: Dual redundant
More informationIntegration of Virtualized Worker Nodes in Standard-Batch-Systems CHEP 2009 Prague Oliver Oberst
Integration of Virtualized Worker Nodes in Standard-Batch-Systems CHEP 2009 Prague Oliver Oberst Outline General Description of Virtualization / Virtualization Solutions Shared HPC Infrastructure Virtualization
More informationRO-11-NIPNE, evolution, user support, site and software development. IFIN-HH, DFCTI, LHCb Romanian Team
IFIN-HH, DFCTI, LHCb Romanian Team Short overview: The old RO-11-NIPNE site New requirements from the LHCb team User support ( solution offered). Data reprocessing 2012 facts Future plans The old RO-11-NIPNE
More informationMinimum Hardware Specifications Upgrades
Minimum Hardware Specifications Upgrades http://www.varian.com/hardwarespecs ARIA for Medical Oncology, & ARIA for Radiation Oncology Version 13.0 1 ARIA Version 13.0 Minimum Hardware Specifications Minimum:
More informationQsan Document - White Paper. Performance Monitor Case Studies
Qsan Document - White Paper Performance Monitor Case Studies Version 1.0 November 2014 Copyright Copyright@2004~2014, Qsan Technology, Inc. All rights reserved. No part of this document may be reproduced
More informationCentrata IT Management Suite 3.0
Centrata IT Management Suite 3.0 Technical Operating Environment April 9, 2004 Centrata Incorporated Copyright 2004 by Centrata Incorporated All rights reserved. April 9, 2004 Centrata IT Management Suite
More informationA SIMULATION STUDY FOR T0/T1 DATA REPLICATION AND PRODUCTION ACTIVITIES. Iosif C. Legrand *
A SIMULATION STUDY FOR T0/T1 DATA REPLICATION AND PRODUCTION ACTIVITIES Iosif C. Legrand * Ciprian Mihai Dobre**, Ramiro Voicu**, Corina Stratan**, Catalin Cirstoiu**, Lucian Musat** * California Institute
More information