KIT Site Report. Andreas Petzold. STEINBUCH CENTRE FOR COMPUTING - SCC
1 KIT Site Report. Andreas Petzold, Steinbuch Centre for Computing (SCC). KIT: University of the State of Baden-Württemberg and National Laboratory of the Helmholtz Association
2 GridKa Tier 1 - Batch Farm
- Hardware: 620 worker nodes (WNs), mix of Intel & AMD; 13k job slots, 150 kHS06 (2014)
- 42 kHS06 will be retired, 52 kHS06 new
- Univa Grid Engine: very smooth operation
- Multi-core jobs: dynamic slot allocation, no static cluster partition
Andreas Petzold, KIT Site Report, HEPiX 2014, Lincoln, NE
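The refresh arithmetic on this slide is easy to check; a minimal sketch using only the figures quoted above (the per-slot average is my derivation, not stated on the slide):

```python
# Batch-farm capacity arithmetic from the figures on this slide (kHS06).
current = 150          # installed capacity in 2014
retired = 42           # to be retired
new = 52               # to be added
after_refresh = current - retired + new
print(after_refresh)   # 160 kHS06, i.e. a net gain of 10 kHS06

# Average slots per worker node and HS06 per slot (derived, illustrative)
slots, wns = 13_000, 620
print(round(slots / wns))                 # ~21 job slots per WN
print(round(current * 1000 / slots, 1))   # ~11.5 HS06 per slot
```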
3 GridKa Tier 1 - Disk Storage
- 14 PB on DDN systems: S2A9900, SFA10k, SFA12k
- dCache for ATLAS, CMS, LHCb, Belle II, others
  - version …/34; plan to move to 2.10 by December
  - CMS now using NFSv4.1
  - automatic file replication off; manual data set distribution
  - ATLAS & CMS instances integrated in FAX & AAA without an xrootd proxy
- xrootd for ALICE; some servers already on version …
4 GridKa Tier 1 - Tape Storage
- 15.5 PB used in 3 libraries (STK, IBM, Grau)
- tape technology currently LTO-3/4/5 with 45 drives, ~22k cartridges
- 6 new T10kC drives, 400 cartridges; in production by end of year
- tape management currently TSM & ERMM; gradual migration to HPSS to start mid-2015
- HPSS status: intensive testing phase finished; dedicated new STK library
- 1st user: HLRS (Stuttgart) will archive data to HPSS at KIT starting January
5 GridKa Tier 1 - Network
- currently 80 Gb/s WAN connectivity based on 10 Gb/s links
- move to a 100 Gb/s based setup early in 2015
  - combine most existing connections into one 100G link
  - keep 2x10 Gb/s VPN to CERN
- challenge of remote data access (ALICE & FAX & AAA): the GridKa batch farm sits behind NAT; the 15 Gb/s available bandwidth is often saturated now
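For a sense of scale, a rough estimate of how long draining GridKa's 14 PB disk pool would take over the old and new WAN links (purely illustrative: decimal petabytes assumed, protocol overhead ignored):

```python
# Rough transfer-time estimate for the WAN upgrade (no protocol overhead).
disk_pb = 14
bits = disk_pb * 1e15 * 8            # 14 PB in bits (decimal petabytes)

for gbps in (80, 100):               # current vs. planned WAN bandwidth
    seconds = bits / (gbps * 1e9)
    print(f"{gbps} Gb/s: {seconds / 86_400:.1f} days")
# 80 Gb/s: 16.2 days
# 100 Gb/s: 13.0 days
```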
6 Large Scale Data Facility
- storage and computing for non-HEP sciences
- 6.7 PB disk: IBM SONAS + DDN-based GPFS cluster with HPSS integration; Hadoop cluster
- bwsync&share in production since Jan 1st, 2014
  - Dropbox-like service for universities and colleges in the state of Baden-Württemberg
  - based on PowerFolder
  - steadily increasing number of users, currently ~8000
- bwfilestorage: central storage for HPC in Baden-Württemberg
- storage for the Human Brain Project: S3 based on DDN WOS systems
- climatology data (ENES): replication using the EUDAT B2SAFE service
7 Federated Identity Management
- bwIDM: federated IdM for Baden-Württemberg
- all state-wide services integrate bwIDM: bwsync&share, bwfilestorage, bwUniCluster, ...
- Shibboleth and LDAP available
- extending beyond Baden-Württemberg: now working on the extension to DFN-AAI (Germany-wide) and the integration of Umbrella
8 Smart Data Innovation Lab
- federally funded project to enable industry and research partners to process their data on commercially available Big Data platforms
- communities: Industry 4.0, Energy, Smart Cities, Medicine
- 40 partners from industry and research; project leader: SAP
9 Smart Data Innovation Lab
- platform operated by SCC
- SAP HANA: 4 machines; 2 IBM X3850 coupled via QPI interconnect, 80 (160 w/ HT) cores, 1 TB RAM, 30 TB HDD
- IBM Watson Foundations (Hadoop, SPSS Modeler, Watson Content Analytics): 7 Power8 SL822 machines, 140 cores, 4 TB RAM total, 300 TB HDD
- Software AG Terracotta & Apama soon to come
- will be integrated into the federated IdM
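Assuming the Watson Foundations totals above are spread evenly across the seven Power8 machines (the even split is my assumption, not stated on the slide), the per-machine share works out as:

```python
# Per-machine share of the 7-node Power8 partition (even split assumed).
machines = 7
cores_total, ram_tb_total, hdd_tb_total = 140, 4, 300
print(cores_total // machines)                 # 20 cores per machine
print(round(ram_tb_total * 1024 / machines))   # ~585 GB RAM per machine
print(round(hdd_tb_total / machines, 1))       # ~42.9 TB HDD per machine
```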
10 HPC at KIT
- SCC operates HPC systems for KIT and the state of Baden-Württemberg
- IC2 for institutes of KIT (#236 on the Top500): 485 nodes, 162 TFLOPS, 32.6 TB RAM, 470 TB Lustre
- bwUniCluster for general HPC work in BW: 520 nodes, 176 TFLOPS, 41.1 TB RAM, 470 TB Lustre shared with IC2, 230 TB Lustre
- ForHLR1 for HPC research in BW: 530 nodes, 216 TFLOPS, 41.1 TB RAM, 470 TB Lustre shared with IC2, 230 TB Lustre
- ForHLR2 for HPC research in BW: 45 °C water cooling, free cooling; installation mid …
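As a sanity check on the cluster figures above, per-node peak performance can be derived directly from the quoted totals (the totals are the slide's; the per-node split is my derivation):

```python
# Per-node peak performance for the SCC clusters listed above.
clusters = {
    "IC2":          (485, 162),   # (nodes, total TFLOPS)
    "bwUniCluster": (520, 176),
    "ForHLR1":      (530, 216),
}
for name, (nodes, tflops) in clusters.items():
    print(f"{name}: {tflops / nodes * 1000:.0f} GFLOPS/node")
# IC2: 334 GFLOPS/node
# bwUniCluster: 338 GFLOPS/node
# ForHLR1: 408 GFLOPS/node
```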
11 New Building for ForHLR II
12 We are hiring!
- looking for computer scientists and physicists interested in: IT security, large-scale data management, large-scale data analysis
- contact: Andreas Petzold