Experience of Data Transfer to the Tier-1 from a DiRAC Perspective
Slide 1: Experience of Data Transfer to the Tier-1 from a DiRAC Perspective. Lydia Heck, Institute for Computational Cosmology, manager of the DiRAC-2 Data Centric Facility, COSMA.
Slide 2: Talk layout. Introduction to DiRAC; the DiRAC computing systems; what is DiRAC; what type of science is done on the DiRAC facility; why do we need to copy data to RAL; copying data to RAL: network requirements; collaboration between DiRAC and RAL to produce the archive; setting up the archiving tools; archiving; open issues; conclusions.
Slide 3: Introduction to DiRAC. DiRAC (Distributed Research utilising Advanced Computing) was established in 2009 with DiRAC-1. It supports research in theoretical astronomy, particle physics and nuclear physics, and is funded by STFC with infrastructure money allocated from the Department for Business, Innovation and Skills (BIS); the running costs, such as staff and electricity, are funded by STFC.
Slide 4: Introduction to DiRAC, cont'd. 2009, DiRAC-1: 8 installations across the UK, of which COSMA-4 at the ICC in Durham is one; still a loose federation. 2011/2012, DiRAC-2: major funding of £15M for e-infrastructure; in the bidding to host, 5 installations were identified, judged by peers; successful bidders faced scrutiny and interviews by representatives of BIS to see whether we could deliver by a tight deadline.
Slide 5: Introduction to DiRAC, cont'd. DiRAC has a full management structure, and computing time on the DiRAC facility is allocated through a peer-reviewed procedure. Current director: Dr Jeremy Yates, UCL. Current technical director: Prof Peter Boyle, Edinburgh.
Slide 6: The DiRAC computing systems. Blue Gene, Edinburgh; Cosmos, Cambridge; Complexity, Leicester; Data Centric, Durham; Data Analytic, Cambridge.
Slide 7: The DiRAC Edinburgh system. IBM Blue Gene, cores, 1 Pbyte of GPFS storage; designed around (lattice) QCD applications.
Slide 8: DiRAC (Data Centric), Durham. Data Centric system: IBM iDataPlex, 6720 Intel Sandy Bridge cores, 53.8 TB of RAM, FDR10 InfiniBand (2:1 blocking), 2.5 Pbyte of GPFS storage (2.2 Pbyte used!).
Slide 9: DiRAC (Complexity), Leicester. HP system: 4352 Intel Sandy Bridge cores, 30 Tbyte of RAM, FDR InfiniBand (1:1 non-blocking), 0.8 Pbyte of Panasas storage.
Slide 10: DiRAC (SMP), Cambridge. COSMOS, an SGI shared-memory system: 1856 Intel Sandy Bridge cores, 31 Intel Xeon Phi coprocessors, 14.8 Tbyte of RAM, 146 Tbyte of storage.
Slide 11: DiRAC (Data Analytic), Cambridge. Data Analytic system, Dell: 4800 Intel Sandy Bridge cores, 19.2 TByte of RAM, FDR InfiniBand (1:1 non-blocking), 0.75 PB of Lustre storage.
Slide 12: What is DiRAC? A national service run, managed and allocated by the scientists who do the science, funded by BIS and STFC. The systems are built around and for the applications with which the science is done. We do not rival a facility like ARCHER, as we do not aspire to run a general national service. DiRAC is classed by STFC as a major research facility, on a par with the big telescopes.
Slide 13: What is DiRAC, cont'd. Long projects with a significant amount of CPU hours are typically allocated for 3 years on a specific system, for example: Cosmos (dp002), ~20M CPU hours on Cambridge Cosmos; Virgo (dp004), 63M CPU hours on Durham DC; UK-MHD (dp010), 40.5M CPU hours on Durham DC; UK-QCD (dp008), ~700M CPU hours on Edinburgh BG; Exeter (dp005), ~15M CPU hours on Leicester Complexity; HPQCD (dp019), ~20M CPU hours on Cambridge Data Analytic.
Slide 14: What type of science is done on DiRAC? For the highlights of the science carried out on the DiRAC facility, please see: Specific example: large-scale structure calculations with the Eagle run; 4096 cores, ~8 GB RAM/core, 47 days = 4,620,288 CPU hours, 200 TB of data.
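As a quick consistency check (not on the slide itself), the quoted CPU-hour figure is simply cores multiplied by wall-clock time:

```latex
4096~\text{cores} \times 47~\text{days} \times 24~\tfrac{\text{hours}}{\text{day}} = 4\,620\,288~\text{CPU hours}
```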
Slide 15: Why do we need to copy data (to RAL)? The original plan was that each research project should make provision for storing its research data, which requires additional storage resources at the researchers' home institutions. There is not enough provision, and more will require additional funds; data creation is considerably above expectation. If disaster struck, many CPU hours of calculations would be lost.
Slide 16: Why do we need to copy data (to RAL)? Research data must now be shared with, and made available to, interested parties. Installing DiRAC's own archive requires funds, and currently there is no budget. We needed to get started: Jeremy Yates negotiated access to the RAL archive system; acquire expertise; identify bottlenecks and technical challenges (we submitted 2,000,000 files and created an issue at the file servers); work out how we can collaborate and make use of previous experience. AND: copy data!
Slide 17: Copying data to RAL: network requirements. Network bandwidth situation for Durham: currently possible Mbytes/sec; required investment in, and collaboration from, DU CIS; upgrade to 6 Gbit/sec to JANET in Sep 2014; will be 10 Gbit/sec by the end of 2015, with the infrastructure already installed. Identified Durham-related bottlenecks: the FIREWALL.
Slide 18: Copying data to RAL: network requirements. Network bandwidth situation for Durham: investment to bypass the external campus firewall; two new routers (~£80k) configured for throughput, with a minimal ACL, enough to safeguard the site; deploying internal firewalls as part of the new security infrastructure, essential for such a venture. Security now relies on the front-end systems of Durham DiRAC and Durham GridPP.
Slide 19: Copying data to RAL: network requirements. Result for COSMA and GridPP in Durham: guaranteed 2-3 Gbit/sec with bursts of up to 3-4 Gbit/sec (3 Gbit/sec outside of term time); pushed the network performance for Durham GridPP from the bottom 3 in the country to the top 5 of the UK GridPP sites; achieves up to Mbyte/sec throughput to RAL on archiving, depending on file sizes.
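For context, and ignoring protocol overhead, the guaranteed line rate converts to byte throughput as follows; this conversion is added here and is not on the slide:

```latex
2\text{--}3~\tfrac{\text{Gbit}}{\text{s}} \times \tfrac{1000~\text{Mbit}}{1~\text{Gbit}} \times \tfrac{1~\text{byte}}{8~\text{bit}} \approx 250\text{--}375~\tfrac{\text{Mbyte}}{\text{s}}
```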
Slide 20: Collaboration between DiRAC and GridPP/RAL. The Durham Institute for Computational Cosmology (ICC) volunteered to be the prototype installation. Huge thanks to Jens Jensen and Brian Davies: there were many emails exchanged, many questions asked and many answers given. Resulting document: "Setting up a system for data archiving using FTS3" by Lydia Heck, Jens Jensen and Brian Davies.
Slide 21: Setting up the archiving tools. Identify appropriate hardware, which could mean extra expense: we need the freedom to modify and experiment with it (we cannot have HPC users logged in and working!), the freedom to apply the very latest security updates, and an optimal connection to the storage (an InfiniBand card).
Slide 22: Setting up the archiving tools. Create an interface to access the file/archiving service at RAL using the GridPP tools: GridFTP (the Globus Toolkit, which also provides Globus Connect), trust anchors (egi-trustanchors), VOMS tools (emi3-xxx) and FTS3 (CERN).
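As a rough illustration (not from the slides), once that toolchain is installed the interface can be smoke-tested with a small GridFTP copy; the VO name, host names and paths below are hypothetical placeholders, not the real Durham or RAL endpoints.

```bash
# Hedged sketch: verify the Grid credentials and the GridFTP path end to end.
# VO name, hosts and paths are placeholders.

voms-proxy-init -voms dirac.example -valid 12:00   # short-lived VOMS proxy
voms-proxy-info -all                               # confirm proxy and VO attributes

dd if=/dev/urandom of=/tmp/transfer-test.dat bs=1M count=100   # 100 MB test file
globus-url-copy -vb \
    file:///tmp/transfer-test.dat \
    gsiftp://gridftp.example.ac.uk/archive/dirac/test/transfer-test.dat
```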
Slide 23: Archiving. A long-lived VOMS proxy? myproxy-init; myproxy-logon; voms-proxy-init; fts-transfer-delegation. How to create a proxy and delegation that lasts weeks, even months, is still an issue. With grid-proxy-init and fts-transfer-delegation: "grid-proxy-init -valid HH:MM" followed by "fts-transfer-delegation -e time-in-seconds" creates a proxy that lasts up to the certificate lifetime.
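A minimal sketch of that recipe, assuming a placeholder FTS3 endpoint; the "-s" endpoint option and the 14-day lifetime are illustrative assumptions, with the real limit set by the certificate lifetime.

```bash
# Hedged sketch of the long-lived proxy plus delegation recipe from this slide.
# The FTS3 endpoint and its "-s" option are assumptions; 14 days is illustrative.

# Proxy valid for 336 hours = 14 days (HH:MM format, as on the slide).
grid-proxy-init -valid 336:00

# Delegate the credential to the FTS3 server for the same period, in seconds.
fts-transfer-delegation -s https://fts3.example.ac.uk:8446 -e $((14 * 24 * 3600))
```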
Slide 24: Archiving. Large files: optimal throughput, limited by the network bandwidth. Many small files: limited by latency; we use the -r flag to fts-transfer-submit to re-use the connection. Transferred: ~40 Tbytes and ~2M files since 20 August, a challenge to the FTS service at RAL. User education is needed on creating lots of small files.
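A sketch of how such a submission might look for a directory of small files; the host names, SURLs and the plain "source destination" bulk-list format are assumptions rather than details taken from the slides.

```bash
# Hedged sketch: batch many small files into one FTS3 job and pass -r so the
# transfers share (re-use) a connection instead of paying the latency per file.
# Hosts, SURLs and the bulk-list format are assumptions.

FTS_ENDPOINT="https://fts3.example.ac.uk:8446"

> smallfiles.list
for f in /cosma/data/run42/*.dat; do
    echo "gsiftp://cosma-gw.example.ac.uk${f} srm://archive.example.ac.uk/dirac/run42/$(basename "$f")" >> smallfiles.list
done

# One job for the whole batch, with connection re-use between the files.
fts-transfer-submit -s "$FTS_ENDPOINT" -r -f smallfiles.list
```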
Slide 25: Open issues. Ownership and permissions are not preserved, and the archiving depends on a single admin to carry it out. What happens when the content of directories changes? Complete new archive sessions? The tool tries to archive all the files again but then fails, as the files already exist; it should behave more like rsync.
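One way the "more like rsync" behaviour could be approximated (this is not something the slides implement) is to skip files that already exist in the archive before submitting, for example with the gfal2 command-line tools; the endpoints and paths are placeholders, and files that exist but have changed are not handled.

```bash
# Hedged sketch of rsync-like skipping: only queue files missing from the archive.
# gfal-ls is from the gfal2-util tools; endpoints and paths are placeholders.

REMOTE_BASE="srm://archive.example.ac.uk/dirac/run42"
LOCAL_BASE="/cosma/data/run42"

> incremental.list
for f in "$LOCAL_BASE"/*.dat; do
    name=$(basename "$f")
    if ! gfal-ls "${REMOTE_BASE}/${name}" > /dev/null 2>&1; then
        echo "gsiftp://cosma-gw.example.ac.uk${f} ${REMOTE_BASE}/${name}" >> incremental.list
    fi
done

# Submit only the missing files, again re-using the connection.
[ -s incremental.list ] && fts-transfer-submit -s https://fts3.example.ac.uk:8446 -r -f incremental.list
```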
Slide 26: Conclusions. With the right network speed we can archive the DiRAC data to RAL. The documentation has to be completed and shared with the system managers at the other DiRAC sites. Each DiRAC site will have its own dirac0x account. Start archiving, and keep on archiving. Collaboration between DiRAC and GridPP/RAL DOES work! Can we aspire to more?