GridKa: Roles and Status
1 GridKa: Roles and Status. Forschungszentrum Karlsruhe GmbH, Institute for Scientific Computing, P.O. Box 3640, D-76021 Karlsruhe, Germany. Holger Marten. (Slide footer: GridKa School 2004, September 20-23, 2004, Karlsruhe, Germany.)
2 History
10/2000: First ideas about a German Regional Centre for LHC Computing - planning and cost estimates
05/2001: Start of a BaBar Tier-B with the universities of Bochum, Dresden and Rostock
07/2001: The German HEP communities send "Requirements for a Regional Data and Computing Centre in Germany (RDCCG)" - more planning and cost estimates
12/2001: Launching committee establishes the RDCCG (later renamed Grid Computing Centre Karlsruhe, GridKa)
04/2002: First prototype
10/2002: GridKa inauguration meeting
3 High Energy Physics experiments served by GridKa. [Figure: the non-LHC experiments - BaBar (SLAC, USA), CDF and D0 (FNAL, USA) - have real data committed to Grid Computing today; the LHC experiments (CERN) follow; other sciences later.]
4 GridKa Project Organization. [Organigram with the boards and roles: Overview Board; Technical Advisory Board (Alice, Atlas, CMS, LHCb, BaBar, CDF, D0, Compass, DESY); GridKa Project Leader - planning, development, technical realization, operation; BMBF; physics committees; HEP experiments; LCG; FZK management; Head of the FZK Computing Centre; Chairman of the TAB.]
5 German Users of GridKa: 22 institutions, 44 user groups, 350 scientists. [Map with the number of user groups per location: Aachen (4), Bielefeld (2), Bochum (2), Bonn (3), Darmstadt (1), Dortmund (1), Dresden (2), Erlangen (1), Frankfurt (1), Freiburg (2), Hamburg (1), Heidelberg (1) (6), Karlsruhe (2), Mainz (3), Mannheim (1), München (1) (5), Münster (1), Rostock (1), Siegen (1), Wuppertal (2).]
6 GridKa in the network of international Tier-1 centres: France - IN2P3, Lyon; Germany - GridKa, Karlsruhe; Italy - CNAF, Bologna; Japan - ICEPP, University of Tokyo; Spain - PIC, Barcelona; Switzerland - CERN, Geneva; Taiwan - Academia Sinica, Taipei; UK - Rutherford Laboratory, Chilton; USA - Fermi Laboratory, Batavia, IL; USA - BNL. Warning: list not fixed.
7 The fifth LHC subproject: the global LHC Computing Centre. [Diagram of the tier hierarchy: Tier-0 centre at CERN; Tier-1 centres in France (IN2P3), Italy (CNAF), UK (RAL), USA (Fermi, BNL) and Germany (FZK); Tier-2 (university and laboratory computing centres); Tier-3 (institute computers); Tier-4 (desktops); virtual organizations such as ATLAS, CMS, LHCb and their working groups span the labs and universities.]
8 LHC Computing Model (simplified!!) - Les Robertson, GDB, May 2004.
Tier-0, the accelerator centre: filter raw data; reconstruction of summary data (ESD); record raw data and ESD; distribute raw data and ESD to the Tier-1s.
Tier-1: permanent storage and management of raw, ESD, calibration data, meta-data, analysis data and databases; grid-enabled data service; data-heavy analysis; re-processing raw -> ESD; national and regional support; online to the data acquisition process; high availability (24h x 7d); managed mass storage; long-term commitment; resources: 50% of average.
[Map of Tier-0, Tier-1 centres (RAL, IN2P3, FNAL, CNAF, FZK/Forschungszentrum Karlsruhe, Taipei, PIC, TRIUMF, ICEPP, BNL, NIKHEF, ...) and small Tier-2 centres (MSU, IC, IFCA, UB, Cambridge, Budapest, Prague, Legnaro, CSCS, Rome, CIEMAT, Krakow, USC, Santiago, Weizmann, ...), down to desktops and portables.]
9 Tier-2 (Les Robertson, GDB, May 2004): well-managed, grid-enabled disk storage; simulation; end-user analysis, batch and interactive; high-performance parallel analysis (PROOF?).
Each Tier-2 is associated with a Tier-1 that serves as the primary data source, takes responsibility for long-term storage and management of all of the data generated at the Tier-2 (grid-enabled mass storage), and may also provide other support services (grid expertise, software distribution, maintenance, ...). CERN will not provide these services for Tier-2s except by special arrangement.
10 GridKa planned resources. [Chart: planned CPU (kSI2000), disk and tape capacity (axis up to 6000 TByte) per January of each year, through LCG Phase I, Phase II and Phase III.]
11 Distribution of planned resources at GridKa. [Stacked bar charts (0-100%) of the LHC vs. non-LHC shares of CPU, disk and tape for January 2002, 2004, 2007 and 2008: a significant contribution goes to the non-LHC experiments (regional/Tier-A centre for BaBar, CDF, D0), with the LHC share growing towards 2008.]
12 GridKa Environment
13 [Figure: IWR buildings 441 and 442 - main building and tape storage.]
14 Worker Nodes & Test beds.
Production environment:
- 97x dual PIII, 1.26 GHz, 97 kSI2000, ... GB memory, 40 GB HD
- ...x dual PIV, 2.2 GHz, 102 kSI2000, 1 GB RAM, 40 GB HD
- ...x dual PIV, 2.667 GHz, 130 kSI2000, 1 GB RAM, 40 GB HD
- ...x dual PIV, 3.06 GHz, 534 kSI2000, 1 GB RAM, 40/80 GB HD
- ...x dual Opteron, ... kSI2000, 2 GB RAM, 80 GB HD
Total: 536 nodes, 1072 CPUs, 953 kSI2000; installed with RH 7.3 and LCG (except for the Opterons).
Test environment: an additional 30 machines in several test beds. Next OS: Scientific Linux, if middleware and applications are ready.
15 PBSPro fair share according to requirements. [Table of per-experiment fair shares with columns experiment / kSI2000 / share / percentage for Alice, Atlas, CMS, LHCb, BaBar, CDF, Dzero and Compass; as of 1 Oct 2004 the split is roughly 45 % LHC and 55 % non-LHC.] The default (test) queue is not handled by the fair share; these CPUs are kept free for test jobs.
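The individual share values on this slide did not survive the transcription. As a rough illustration of how per-experiment kSI2000 allocations translate into fair-share percentages and an LHC vs. non-LHC split, here is a minimal Python sketch; all numbers in it are hypothetical placeholders, not GridKa's actual allocations.

```python
# Hypothetical kSI2000 allocations per experiment -- NOT the values from the slide
# (those are lost in the transcription); they only illustrate the arithmetic.
allocations = {
    "Alice": 100, "Atlas": 120, "CMS": 120, "LHCb": 90,      # LHC experiments
    "BaBar": 180, "CDF": 150, "Dzero": 150, "Compass": 60,   # non-LHC experiments
}
lhc = {"Alice", "Atlas", "CMS", "LHCb"}

total = sum(allocations.values())
for exp, ksi in allocations.items():
    print(f"{exp:8s} {ksi:5d} kSI2000 -> {100 * ksi / total:5.1f} % fair share")

lhc_share = 100 * sum(v for k, v in allocations.items() if k in lhc) / total
print(f"LHC: {lhc_share:.1f} %   non-LHC: {100 - lhc_share:.1f} %")
```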
16 Disk space available for the HEP experiments: 202 TB (Oct 2004); 29 % LHC, 71 % non-LHC. [Bar chart of disk space per experiment - ALICE, ATLAS, CMS, LHCb, BaBar, CDF, D0, Compass - axis 0-40 TByte.]
17 Online Storage I: about 40 TB stored in NAS (better: DAS) boxes; dual CPU, 16 EIDE disks, 3Ware controller.
Experience: the hardware is cheap but not very reliable; RAID software & management messages are not always useful; good throughput for a few simultaneous jobs, but it doesn't scale to a few hundred simultaneous file accesses.
Workarounds: disk mirroring; management software ("managed disks": file copies on multiple boxes); more reliable disks plus a parallel file system.
18 Online Storage: I/O Design with NAS (DAS). [Diagram: compute nodes mount the NAS (DAS) boxes, labelled per experiment (Alice, Atlas, ...), via TCP/IP/NFS; each box is a ~30 MB/s read/write bottleneck and a disk-access bottleneck; expansion by adding boxes.]
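A small back-of-the-envelope sketch (not from the slides) of why a single ~30 MB/s NAS box is adequate for a few jobs but not for a few hundred simultaneous accesses; the file size is a hypothetical example value, and real behaviour is worse because concurrent streams also thrash the disks.

```python
# Illustration only: per-job bandwidth when N jobs share one NAS/DAS box.
BOX_THROUGHPUT_MB_S = 30.0   # aggregate r/w rate of one box (value from the slide)
FILE_SIZE_MB = 2000.0        # hypothetical input file per job

for jobs in (2, 10, 100, 300):
    per_job = BOX_THROUGHPUT_MB_S / jobs            # ideal fair sharing, no seek overhead
    stage_in_minutes = FILE_SIZE_MB / per_job / 60
    print(f"{jobs:4d} jobs -> {per_job:6.2f} MB/s per job, "
          f"~{stage_in_minutes:7.1f} min to read a {FILE_SIZE_MB:.0f} MB file")
```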
19 Online Storage II: about 160 TB stored in a SAN; SCSI disks (10k rpm) with redundant controllers; a parallel file system on a file server cluster, exported via NFS from the file server cluster to the worker nodes (WNs).
20 Online Storage: Scalable I/O Design. [Diagram: compute nodes -> TCP/IP/NFS -> file server cluster -> Fibre Channel SAN (SCSI) -> RAID 5 storage; striping plus a parallel file system; ... MB/s I/O measured; expansion possible on both the server and the storage side; per-experiment areas (Alice, Atlas, ...).]
21 Online Storage II (continued).
Advantages: high availability through multiple redundant servers; load balancing via an automounter program map.
Experience: many teething problems (bugs, learning how to configure, ...); the CPU/wall-clock ratio is near 1 in some applications; more expensive -> next try: cheaper S-ATA systems.
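For readers unfamiliar with automounter program maps: autofs can call an executable map with the requested key as argument and use whatever map entry it prints on stdout. The sketch below shows one possible shape of such a map, written in Python; the server names, export paths and hash-based selection are illustrative assumptions, not GridKa's actual configuration, which may well balance by round-robin or by current load instead.

```python
#!/usr/bin/env python3
# Hypothetical autofs "program map": called by autofs with the lookup key
# (e.g. the directory name under the automount point) and expected to print
# a map entry such as "-fstype=nfs fs02:/export/alice".
import sys
import zlib

# Redundant file servers that all export the same parallel file system (made-up names).
SERVERS = ["fs01.example.gridka.de", "fs02.example.gridka.de", "fs03.example.gridka.de"]

def main() -> int:
    if len(sys.argv) != 2:
        return 1
    key = sys.argv[1]                                  # e.g. "alice" for /grid/alice
    idx = zlib.crc32(key.encode()) % len(SERVERS)      # deterministic spread over the servers
    print(f"-fstype=nfs {SERVERS[idx]}:/export/{key}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The hash makes the choice deterministic per key, so repeated mounts of the same directory always go to the same server while different directories spread across the cluster.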
22 Why am I telling you all this? Because we need your experience and feedback as users!
23 Tape space available for the HEP experiments: 374 TB (Oct 2004); 27 % LHC, 73 % non-LHC. [Bar chart of tape space per experiment - ALICE, ATLAS, CMS, LHCb, BaBar, CDF, D0, Compass - axis 0-80 TByte.]
24 Tape Storage: IBM 3584 tape library with LTO Ultrium drives (8 drives LTO-1, 4 drives LTO-2), ... TB native (uncompressed); Tivoli Storage Manager (TSM) for backup and archive.
Installation of dCache in progress: tape backend interfaced to Tivoli Storage Manager; an installation with 1 head node and 3 pool nodes is currently being tested by CMS & CDF.
Other: SAM station caches for D0 and CDF; a JIM (Job Information Management) station for D0; tape connection via scripts (D0); a CORBA naming service (for CDF).
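Once the dCache installation is in production, users would typically stage files through a dCache door, for example with the dcap client dccp. The snippet below is only a hypothetical sketch of that workflow; the door host, port and pnfs path are made up, and the real values belong in the GridKa user documentation.

```python
# Hypothetical example: copy one file out of dCache with the dcap client "dccp".
import subprocess

# Made-up door address and pnfs path -- consult the GridKa user pages for real ones.
src = "dcap://dcache-door.example.gridka.de:22125/pnfs/gridka.de/cms/some/dataset/file.root"
dst = "/tmp/file.root"

subprocess.run(["dccp", src, dst], check=True)   # transfers the file via the dcap protocol
```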
25 GridKa plan for WAN connectivity. [Timeline of the planned bandwidth: 34 Mbps -> 155 Mbps -> 2 Gbps -> start of 10 Gbps tests -> 10 Gbps -> 20 Gbps; start discussions with Dante.] In September 2004 DFN upgraded the capacity from Karlsruhe to Géant to 10 Gbps; tests have been started. Routing (full 10 Gbps): GridKa - DFN (Karlsruhe) - DFN (Frankfurt) - Géant (Frankfurt) - Géant (Milano) - Géant (Geneva) - CERN.
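To put these bandwidth steps into perspective, here is a small, purely illustrative calculation (not from the slides) of how long a 10 TB transfer would take at the capacities mentioned, ignoring protocol overhead.

```python
# Rough transfer-time estimate for a 10 TB dataset at the quoted WAN capacities.
DATASET_TB = 10
DATASET_BITS = DATASET_TB * 1e12 * 8   # terabytes -> bits

for label, mbps in [("34 Mbps", 34), ("155 Mbps", 155),
                    ("2 Gbps", 2000), ("10 Gbps", 10000)]:
    seconds = DATASET_BITS / (mbps * 1e6)
    print(f"{label:>9s}: {seconds / 3600:8.1f} hours ({seconds / 86400:5.1f} days)")
```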
26 Further services & sources of information
27 GGUS (Global Grid User Support)
28 User information on the GridKa web pages: user registration; Globus installation; the PBS batch system; backup & archive; getting a certificate from the GermanGrid CA; listserver / mailing lists; monitoring the status with Ganglia; experiment-specific information for the HEP experiments; FAQ; documentation; ...
29 Tools: gridmon.fzk.de/ganglia (Ganglia cluster monitoring)
30 Final remarks
31 Europe on the way to e-science: the EU project EGEE, April 2004 to March 2006, ... Mio. Euro for personnel; 70 partner institutes in 27 countries (including Russia), organized in 9 federations; applications: the LHC grid, Biomed, ... Goal: "Provide distributed European research communities with a common market of computing, offering round-the-clock access to major computing resources, independent of geographic location, ..."
32 Status of LCG / EGEE
33 Last but not least. We want to help: our users on our systems; support/discuss cluster installations at other institutes; support/discuss middleware installations at other centres; and create a German Grid infrastructure. And we will continue the balancing act between testing & Data Challenges and production with real data.
34 No equipment without people. Thanks! We appreciate the continuous interest and support by the Federal Ministry of Education and Research, BMBF.