Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. GridKa User Meeting


1 GridKa User Meeting. Forschungszentrum Karlsruhe GmbH, Central Information and Communication Technologies Department, Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen. Dr. Holger Marten

2 Content 1. FZ Karlsruhe & GridKa 2. User accounts & certificates 3. Cluster Installation 4. A rack-based cooling solution 5. Data management 6. Final remarks & summary

3 1. FZ Karlsruhe & GridKa

4 Forschungszentrum Karlsruhe
- 40 institutes and divisions, employees
- 13 research programs for Structure of Matter, Earth & Environment, Health, Energy, Key Technologies
- many close collaborations with TU Karlsruhe

5 Central Information and Communication Technologies Dept. (Hauptabteilung Informations- und Kommunikationstechnik, HIK): Your Key to Success. HIK provides the institutes of the Research Centre with state-of-the-art high-performance computers and IT solutions for every purpose: vector computers, parallel computers, Linux clusters, workstations, ~2500 PCs, online storage, tape robots, networking infrastructure, printers and printing services, central software, user support, ... plus R&D. About 90 persons in 7 departments.

6 HIK Organisational Structure (head: K.-P. Mickel)
- Zentralabteilung und Sekretariat: IT innovations, accounting, billing, budgeting, training coordination, other central tasks
- DASI, Datendienste, Anwendungen, Systemüberwachung, Infrastruktur (R. Kupsch)
- HLR, Hochleistungsrechnen (F. Schmitz)
- GIS, Grid-Computing Infrastruktur und Service / GridKa (H. Marten)
- GES, Grid-Computing und e-Science Competence Centre (M. Kunze)
- NiNa, Netzinfrastruktur und Netzanwendungen (K.-P. Mickel)
- PC/BK, PC-Betreuung und Bürokommunikation (A. Lorenz)
- Repro, Reprografie (G. Dech)

7 Grid Computing Centre Karlsruhe - the mission:
- German Tier-1 Regional Centre for the 4 LHC HEP experiments: test phase, main set-up phase, production phase
- German Computing Centre for 4 non-LHC HEP experiments: production environment for BaBar, CDF, D0, Compass

8 Regional Data and Computing Centre Germany: requirements for LHC (Alice, Atlas, CMS, LHCb) + BaBar, Compass, D0, CDF. Phases: Test Phase, LHC Setup Phase, 2007+: Operation; including resources for the BaBar Tier-A. (Table of CPU (kSI95), disk (TByte) and tape (TByte) requirements at the milestones 11/2001, 4/2002, 4/2003, 4/...) Started in 2001! Plus services, plus other sciences.

9 Organization of GridKa
- Project Leader & Deputy: H. Marten, M. Kunze
- Overview Board: controls execution & financing, arbitrates in case of conflicts
- Technical Advisory Board: defines the technical requirements
It's a project with 41 user groups from 19 German institutions.

10 GridKa Overview Board (OB)
- Chairman: R. Maschuw (FZK board of directors)
- Representative of BMBF: J. Richter
- Project Leader & Deputy: H. Marten, M. Kunze (FZK)
- Head of FZK Computing Department: K.-P. Mickel (FZK)
- Chairman of TAB: P. Malzacher (GSI)
- Representatives of KET & KHK: R.-D. Heuer (DESY), P. Braun-Munzinger (GSI)
- 2 representatives of LHC / non-LHC each: L. Köppke (U Mainz) for Atlas, T. Müller (TU Karlsruhe) for CMS, M. Schmelling (MPI Heidelberg) for Alice & LHCb, P. Mättig (U Wuppertal) for CDF/D0, B. Spaan (TU Dresden) for BaBar & Compass

11 GridKa Technical Advisory Board (TAB)
- Chairmen: K.-P. Mickel (FZK), P. Malzacher (GSI)
- Project Leader & Deputy: H. Marten, M. Kunze (FZK)
- Representatives of KET & KHK: R.-D. Heuer (DESY), P. Braun-Munzinger (GSI)
- Representative of DESY: R. Mankel (DESY)
- 8 representatives of the LHC experiments: Alice: K. Schwarz (GSI Darmstadt), P. Malzacher (GSI Darmstadt); Atlas: G. Duckeck (LMU München), L. Köppke (U Mainz); CMS: G. Quast (TU Karlsruhe), F. Raupach (RWTH Aachen); LHCb: M. Schmelling (MPI Heidelberg), U. Uwer (U Heidelberg)
- 8 representatives of the non-LHC experiments: BaBar: H. Lacker (TU Dresden), M. Steinke (RU Bochum); CDF: T. Müller (TU Karlsruhe), K. Rinnert (TU Karlsruhe); Compass: F.-H. Heinsius (U Freiburg), L. Schmitt (TU München); D0: D. Wicke (U Wuppertal), Ch. Zeitnitz (U Mainz)

12 Role of the 16 HEP representatives in the TAB. For each of the 8 HEP experiments they are
- the primary interface between experiments & GridKa staff
- responsible for the installation of experiment-specific software (exp-admin)
- responsible for data import & data management per experiment
- responsible for information flow and Web pages per experiment
- responsible for allocation/de-allocation of user accounts per experiment

13 2. User accounts & certificates

14 GridKa - how to get an account?
- application form at www.gridka.de -> GridKa Info -> user
- insert your address, institute, ...
- select the experiment(s) and/or project(s) you are working in (projects = CrossGrid, DataGrid, LCG)
- print the form, sign it yourself and have it signed by your experiment representative(s)!
- send the application form back to FZK by fax or official post
- login by ssh on port 24 only
- note the password rules (e.g. max. age = 180 days)
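Logging in afterwards uses the non-standard port mentioned above; a minimal sketch (the host name is a placeholder, not taken from the slides):

    # SSH login to a GridKa login server; only port 24 is open for ssh.
    ssh -p 24 mylogin@login.gridka.example.de

    # scp uses a capital -P for the port when copying files in or out:
    scp -P 24 analysis.tar.gz mylogin@login.gridka.example.de: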

15 Grid Certification Authority. Basic problem: I have a job here; shall I give this job access to my resources? Is it really a job of user A? Solution: Grid users are identified by their certificate. A certificate is a public key signed by a Certification Authority (CA); everybody can verify that this certificate has not been corrupted by a man in the middle.
- the user applies for a certificate (grid-cert-request / Web interface); this generates the user's private/public key pair
- the CA guarantees that a user's public key belongs to the respective natural person: the user sends a copy of his identity card with his own signature and the signature of the IL; the CA calls back by phone to the user and the IL; the CA signs the user's public key with the CA's private key (off-line!!)
- everybody can verify the user's public key using the public key of the CA
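The user-side steps might look like the following sketch, assuming the stock Globus 2.x tools named on the slide (exact prompts and file locations can differ in the GermanGrid CA setup):

    # Generate a private/public key pair and a certificate request:
    grid-cert-request
    #   writes ~/.globus/userkey.pem           (private key - keep it secret)
    #          ~/.globus/usercert_request.pem  (request to be sent to the CA)

    # After the GermanGrid CA has signed the request, install the certificate:
    cp usercert.pem ~/.globus/usercert.pem

    # Create a short-lived proxy before submitting Grid jobs:
    grid-proxy-init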

16 GermanGrid CA
- delivers X.509 certificates for the German Grid research activities of Alice, Atlas, CMS, LHCb, BaBar, CDF, Compass, D0 and of CrossGrid, DataGrid, LCG; certificates already delivered to GridLab
- Certificate Policy and Certification Practice Statement available
- certificates are accepted by DataGrid and CrossGrid; certificates delivered by globus.org are not accepted!

17 3. GridKa Cluster Installation

18 Support for multiple experiments. Strategy as starting point:
- compute nodes = general-purpose, shared resources; the GridKa Technical Advisory Board agreed on RedHat 7.2
- experiment-specific software servers: arbitrary development environments & login at the beginning, pure software servers and/or Globus gatekeepers later on

19 Experiment-specific software servers: 8x dual PIII (for Alice, Atlas, BaBar, ...), each with 2 GB ECC RAM, 4x 80 GB IDE RAID5, 2x Gbit Ethernet. Linux & basic software on demand: RH 6.2, 7.1.1, 7.2, Fermi Linux, SuSE 7.2. Used as development environment, interactive login & Globus gatekeeper per experiment; basic admin (root) by FZK, specific software installation by the experiment admin.

20 Grid LAN backbone: Extreme Black Diamond 6808 with redundant power supply, redundant management board, 128 Gbit/s backplane, max. 96 Gbit ports, currently 80 ports available.

21 Compute nodes: 124x dual PIII (1 GHz or 1.26 GHz), each with 1 GB ECC RAM, 40 GB IDE HDD, 100 Mbit Ethernet. Total numbers: 5 TB local disk, 124 GByte RAM, R_peak > 270 GFlops. RedHat 7.2, OpenPBS, automatic installation with NPACI Rocks.
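As a rough cross-check of the quoted peak performance (a sketch, assuming one floating-point operation per clock cycle per processor; the slide does not give the split between 1 GHz and 1.26 GHz nodes):

    \[
    R_{\mathrm{peak}} \approx 124 \times 2 \times 1.1\,\mathrm{GHz} \times 1\,\frac{\mathrm{flop}}{\mathrm{cycle}} \approx 273\,\mathrm{GFlops},
    \]

which is consistent with the quoted value of more than 270 GFlops.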

22 OpenPBS batch system
- provides the batch commands qsub, qstat, qdel, qalter
- provides the job status monitoring commands xpbs, xpbsmon
- examples for job submission and example scripts on the Web; a minimal job script is sketched below
- default (test) queue for jobs < 20 min
- fair-share queues short / long / extralong with 1 / 10 / 48 h
- one job per processor
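A minimal OpenPBS job script as a sketch: the queue names and limits are taken from the slide, while the resource syntax, paths and program name are generic PBS usage and may differ from GridKa's actual defaults.

    #!/bin/bash
    #PBS -N my-analysis           # job name
    #PBS -q short                 # queues from the slide: short (1 h), long (10 h), extralong (48 h)
    #PBS -l nodes=1:ppn=1         # one job per processor
    #PBS -l walltime=00:50:00     # stay below the queue's time limit
    #PBS -j oe                    # merge stdout and stderr into one file

    cd $PBS_O_WORKDIR             # run from the directory qsub was called in
    ./my_analysis input.dat > output.log

The script is submitted with qsub job.sh; qstat -u $USER shows its status and qdel <jobid> removes it, matching the commands listed above.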

23 Cluster installation, monitoring & management:
- scalability: many nodes to install and maintain (~2000)
- heterogeneity: different (Intel-based?) hardware over time
- consistency: software must be consistent on all nodes
- manpower: administration by few persons only
This is for administrators, not for a Grid resource broker.

24 Architecture for scalable cluster administration (diagram): cabinets 1 ... n, each with its compute nodes (Nodes C1 ... Cn) on a private compute network and a per-cabinet manager (Manager C1 ... Cn) on a separate management network; a master node connects to the public net. Naming scheme: C...; F..., ...

25 Installation - NPACI Rocks with FZK extensions (diagram): each cabinet forms its own subnet; the per-cabinet manager runs the DHCP server for the compute-node IPs and installs its nodes; the master on the public net runs the DHCP server for the managers C1 ... Cn and delivers the RH kickstart (incl. IP configuration) and rpms to the managers. All nodes can be reinstalled in < 1 h.
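In stock NPACI Rocks of that era the corresponding admin steps look roughly like this (a sketch of the standard commands; the FZK extensions and the cabinet-manager layering mentioned above may change the details, and the node name is a placeholder):

    # Register new compute nodes: the frontend/manager listens for their DHCP
    # requests and assigns names, IP addresses and kickstart profiles.
    insert-ethers

    # Trigger a full reinstallation of a single node:
    shoot-node compute-1-0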

26 System monitoring with Ganglia (also installed on the fileservers): CPU usage, bytes I/O, packets I/O, disk space, ... published on the Web.

27 System monitoring with Ganglia - combined push-pull (diagram): a Ganglia daemon on each node announces its metrics via multicast within the cabinet subnet (no routing); the per-cabinet managers collect them; the Ganglia master on the management network requests the reports from the managers, writes them into a round-robin database (~300 kB per node) and publishes them to the Web.
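The pull half can be reproduced by hand, assuming Ganglia's default behaviour of answering with an XML dump on TCP port 8649 (the host name is a placeholder; whether GridKa kept the default port is not stated):

    # Ask a cabinet manager's Ganglia daemon for the current cluster state;
    # this is the same XML the Ganglia master polls before updating its
    # round-robin database and the Web pages.
    nc manager-c1.gridka.example 8649 | head -40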

28 Cluster management with Nagios - combined push-pull (GPL) (diagram): the per-cabinet managers collect data from their nodes via ping, SNMP and syslog, analyse it and handle local events; they report over the management network to the Nagios master, which analyses the data, handles events and publishes the results to the Web.
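The node checks behind this can also be run manually with the standard Nagios plugins and net-snmp tools; the plugin path, host name and thresholds below are examples, not GridKa's actual configuration:

    # Reachability check as a Nagios manager would schedule it:
    /usr/lib/nagios/plugins/check_ping -H compute-1-0 -w 100.0,20% -c 500.0,60%

    # SNMP query of a node, e.g. its uptime:
    snmpget -v1 -c public compute-1-0 sysUpTime.0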

29 Software installation & central services:
- software installation service (NPACI Rocks)
- system monitoring (Ganglia Cluster Toolkit)
- Globus MDS, LDAP, ... (Globus 2.0)
- system management
- user statistics (home-made)

30 4. GridKa rack-based cooling system

31 Infrastructure: we all want to build a Linux cluster, but do we have the cooling capacity?

32 Closed rack-based cooling system - a common development of FZK and Knürr: 19-inch technique, 38 height units usable, 70 x 120 cm floor space, 10 kW cooling, redundant DC fans, temperature-controlled CPU shut-down, internal heat exchanger.

33 Closed rack-based cooling system: no air-conditioning channels required; estimated cost reduction > 70%; heat exchanger external to the building.

34 Build a cluster of clusters (or a Campus Grid?): clusters in buildings A, B, C and D connected via the LAN (diagram).

35 5. GridKa data management

36 Available disk space for HEP: 36 TByte net. (Bar chart: TByte per experiment for ALICE, ATLAS, CMS, LHCb, BaBar, CDF, D0, Compass, target ('soll') vs. installed ('ist').)

37 Online storage: 59 TB gross, 45 TB net capacity, ~500 disk drives, mixed IDE, SCSI, FC; ext3 & ufs file systems.
- DAS: 2.6 TB gross, 7.2k SCSI 120 GB, attached to a Sun Enterprise 220R
- SAN: 2x 5.1 TB gross, 10k FC 73.4 GB, IBM FAStT500
- NAS: 42.2 TB gross, 19x IDE systems, dual PIII 1.0/1.26 GHz, dual 3ware RAID controllers, 16x 5.4k IDE 100/120/160 GB
- SAN-IDE: 3.8 TB gross, 2 systems, 12x 5.4k IDE 160 GB, driven by a Linux PC

38 Scheme of the (disk) storage (diagram): cluster nodes (clients) reach the fileservers (MDC, IDE NAS Linux servers) over the Grid backbone; fileservers, disk subsystems and tape drives are connected through the SAN (shared with FZK). Tests with IDE NAS Linux servers and FC/IDE systems were successful; SAN goes commodity!

39 Available tape space for HEP: 106 TByte native. (Bar chart: TByte per experiment for ALICE, ATLAS, CMS, LHCb, BaBar, CDF, D0, Compass, target ('soll') vs. installed ('ist').)

40 GridKa tape system: IBM 3584 library with ~2400 slots, LTO Ultrium with 100 GB/tape, 106 TB native available, 8 drives at 15 MByte/s each, attached via the FZK SAN (diagram: FZK Tape 1, FZK Tape 2, FZK SAN); backup/archive with Tivoli Storage Manager.

41 GridKa backup policy: backup = short-term storage of (frequently) changing data (on tape).
- last version of the online data: keep 60 days after the online data is removed
- extra versions: keep max. 30 days; remove if the online data is removed
- automatic incremental backup of all file servers every night
- reverse process: restore = recreate the online copy from tape; restore only by system administrators (trouble shooting)
- no warnings before versions are removed from tape!

42 GridKa archive policy: archive = long-term storage of data (on tape).
- archival for 1 / 2 / 5 / 10 years (default = 2 years)
- reverse process: retrieve = recreate the online copy from tape
- archival/retrieval to be done by the users themselves (no automatic process); may be used to store multiple versions for historical reasons
- the command dsmc is available on each login server (see the sketch below)
- online data may be removed after archival
- no warnings before archives are removed from tape!
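Typical calls to the Tivoli Storage Manager client named on the slide might look as follows; the file names and the description are examples, and how the 1/2/5/10-year retention classes are selected (e.g. via a management class) is not specified here:

    # Archive a file for long-term storage on tape:
    dsmc archive /data/myexp/dst-2002.tar -description="DST production 2002"

    # List existing archive copies:
    dsmc query archive "/data/myexp/*"

    # Retrieve = recreate the online copy from tape:
    dsmc retrieve /data/myexp/dst-2002.tar /data/myexp/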

43 Hierarchical Storage Management (HSM): HSM = automatic migration & recall.
- automatically migrate data from online to secondary storage (disk or tape); migration based on file age, available disk volume, ...
- keep a stub file online; automatically recall the file to its original location if it is accessed
- data is lost if the stub file is deleted or corrupted! -> needs an additional backup
- HSM is not available at GridKa! (but tests are currently under discussion)

44 Storage & data management - does it scale?
- >600 automount operations per second for 150 processors
- measured IDE NAS throughput: >150 MB/s local read (2x RAID5 + RAID0), ... MB/s w/r over NFS, but < 10 MB/s with multiple I/O and multiple users (150 jobs writing to a single NAS box)
- Linux file system limit of 2 TB; disk volumes of > 50 TB with flexible volume management are desirable; a mature system is needed now! Tests of GFS & GPFS under discussion

45 Storage & data management under discussion (diagram): connecting the GridKa disk storage to tape via DataGrid tools, Castor, SAM or dCache; backup with TSM, covering robotics, tape error handling and tape recycling.

46 6. GridKa a few final remarks

47 Internet connection:
- 34 Mbps currently available; 155 Mbps available from January; Gbps test connection to CERN via DFN & Geant available
- flexibility will increase due to a dedicated Grid firewall
- transfer of 1 TeraByte of data needs ~4 days at 34 Mbps, ~1 day at 155 Mbps, ~4 hours at 1 Gbps (see the estimate below)
- still: high networking cost for bandwidth and transfer volume!
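A back-of-the-envelope check of those transfer times (a sketch; the slide's figures evidently include protocol and operational overhead on top of the nominal line rate):

    \[
    t = \frac{V}{r}, \qquad
    t_{34\,\mathrm{Mbps}} = \frac{8\times 10^{12}\,\mathrm{bit}}{34\times 10^{6}\,\mathrm{bit/s}} \approx 2.7\,\mathrm{days}, \quad
    t_{155\,\mathrm{Mbps}} \approx 0.6\,\mathrm{days}, \quad
    t_{1\,\mathrm{Gbps}} \approx 2.2\,\mathrm{h},
    \]

so the quoted ~4 days / ~1 day / ~4 hours correspond to an effective throughput of roughly 55-70% of the nominal bandwidth.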

48 CrossGrid / DataGrid testbed:
- dedicated CrossGrid installation available at FZK; EDG installed
- FZK + Portugal + Spain were the first to connect to the DataGrid testbed
- GermanGrid certificates accepted
- CrossGrid-DataGrid demo at IST2002, Nov. 4, Copenhagen
Our first goals: test middleware stability, transfer middleware to GridKa a.s.a.p.

49 Summary - the GridKa installation:
- Gbit backbone
- 250 CPUs; PIV this week for testing, + ~60 PIV until April 2003
- experiment-specific servers
- 35 TB net disk, + 40 TB net until April 2003
- ... TB tape, + ... TB until April 2003 (or on demand)
- a few central servers: Globus, batch, installation, management, ...
- WAN Gbit test Karlsruhe-CERN
- FZK Grid CA
- testbed ...
exclusively for HEP and Grid Computing

50 GIS - your GridKa support team: Holger Marten (Division & Project Leader), Manfred Alef, Bruno Hoeft, Axel Jaeger, Melanie Knoch, Ingrid Schäffner, Bernhard Verstege, Jos van Wezel, covering login servers, security, file servers, Linux, LAN/WAN network infrastructure, cluster management, Globus, batch, Web, user accounts, certificates and data management; plus Ursula Epting (NiNa) and a few persons from other divisions. GermanGrid CA, grid@hik.fzk.de; user forum rdccg-user@hik.fzk.de; HyperNews coming soon.

51 We appreciate the continuous interest and support by the Federal Ministry of Education and Research, BMBF.
