Linux and the Higgs Particle

1 Linux and the Higgs Particle. Dr. Bernd Panzer-Steindel, Computing Fabric Area Manager, CERN/IT. Linux World, Frankfurt, 27 October 2004

2 Outline: What is CERN; The Physics; The Physics Tools (the accelerator, the detectors); The Computing Tools (the local computing fabric, the world-wide GRID)

3 The Institute CERN

4 CERN, Conseil Européen pour la Recherche Nucléaire (European Organisation for Particle Physics): a basic research laboratory and the world's largest particle physics centre. Founded in 1954, 50th anniversary this year! Located on the French-Swiss border near Geneva (Switzerland). 2700 staff members and fellows plus ~6500 visitors on-site; ~1000 MCHF (~700 MEuro) annual budget.

6 CERN has some 6,500 visiting scientists from more than 500 institutes and 80 countries around the world. Europe: 267 institutes, 4663 users. Elsewhere: 238 institutes, 1832 users.

7 The Physics

8 Particle Physics: establish a periodic system of the fundamental building blocks of matter and understand the forces between them.

11 The Standard Model of particle physics: the unification of three out of the four fundamental forces. A great success, verified with a precision of 0.1%, through constant interplay of theory and experiment. But there are too many free input parameters, and the model makes nonsense predictions at very high energies.

12 The Higgs Particle. The inclusion of the Higgs mechanism in the Standard Model fixes quite a few problems. The vacuum is not empty, but is filled with a Higgs particle condensate. All particles collide with the Higgs particles while they move through the vacuum; this acts like molasses, slows the particles down and gives them their mass. This is one of the key elements of the extended Standard Model.

13 Open Questions. Why do the parameters have the sizes we observe? What gives the particles their masses? How can gravity be integrated into a unified theory? Why is there only matter and no anti-matter in the universe? Are there more space-time dimensions than the 4 we know of? What are dark energy and dark matter, which make up 98% of the universe? Finding the Higgs and possible new physics with the LHC will give the answers!

14 The Physics Tools 1. The Accelerator

15 Methods of Particle Physics: the most powerful microscope; creating conditions similar to the Big Bang.

16 The principal accelerator machine components

17 The Large Hadron Collider LHC

19 View of the LHC Experiments

20 The LHC accelerator: the largest superconducting installation in the world. A 27 kilometer long ring with two beam tubes; 15 meter long dipole magnets at -271 °C; 1700 superconducting magnets; 7000 kilometers of superconducting niobium-titanium cable in a copper matrix, carrying currents of thousands of amps; an 8.3 Tesla magnetic field.

21 Precision: the 27 km circumference of the ring is sensitive to changes of <1 mm, caused for example by tides, rainfall and stray currents.

22 The Physics Tools 2. The Detectors

23 The ATLAS Experiment: diameter 25 m; barrel toroid length 26 m; end-wall chamber span 46 m; overall weight 7000 tons.

24 The ATLAS Cavern: 53 meters long, 30 meters wide, 35 meters high (a 10-storey building); rock excavated and concrete poured, with 6000 tons of steel reinforcement.

26 The CMS Magnet

27 The Dataflow of an Experiment

28 Data Rates. The on-line system uses a multi-level trigger to filter out background and reduce the data volume, running 24 x 7: 40 MHz collision rate (1000 TB/sec) -> Level 1, special hardware -> 75 kHz (75 GB/sec) -> Level 2, embedded processors -> 5 kHz (5 GB/sec) -> Level 3, a farm of commodity CPUs -> 100 Hz (100 MB/sec) -> data recording & offline analysis.
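
A back-of-the-envelope sketch of the reduction factors implied by these trigger levels; the rates are the ones quoted on the slide, the script just divides them:

# Trigger cascade from the slide: event rate (Hz) and data rate at each stage.
levels = [
    ("collisions",          40e6, "1000 TB/s"),
    ("Level 1 (hardware)",  75e3, "75 GB/s"),
    ("Level 2 (embedded)",   5e3, "5 GB/s"),
    ("Level 3 (CPU farm)",    100, "100 MB/s"),
]

for (name, rate, _), (next_name, next_rate, data) in zip(levels, levels[1:]):
    print(f"{name} -> {next_name}: keep 1 in {rate / next_rate:,.0f} ({data} out)")

print(f"overall: 1 collision in {levels[0][1] / levels[-1][1]:,.0f} reaches data recording")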

29 Particle physics data: from raw data to physics results (e.g. e+ e- -> Z0 -> f fbar). Reconstruction: convert raw data to physics quantities by applying calibration and alignment to the detector response, with pattern recognition and particle identification. Analysis: physics analysis on the reconstructed quantities, leading to basic physics results. Simulation (Monte-Carlo): generate the basic physics, fragmentation and decay, the interaction with the detector material and the detector response, producing simulated raw data.

30 A Photo of a proton-proton collision (Event)

31 LHC data: 40 million collisions per second; after filtering, of the order of 100 collisions of interest per second; 1-10 Megabytes of data digitised for each collision, giving a recording rate of the order of a Gigabyte per second; the collisions recorded each year add up to ~15 Petabytes of data. For scale: 1 Megabyte (1 MB) is a digital photo; 1 Gigabyte (1 GB) = 1000 MB is a DVD movie; 1 Terabyte (1 TB) = 1000 GB is the world annual book production; 1 Petabyte (1 PB) = 1000 TB is the annual data production of one LHC experiment; 1 Exabyte (1 EB) = 1000 PB is the world annual information production. (The four LHC experiments: CMS, LHCb, ATLAS, ALICE.)
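
A quick consistency check of these volumes; the per-experiment event rate (~100/s) and the effective running time per year (~10^7 seconds) are assumptions used only for this estimate, while the event sizes are the slide's figures:

# Rough LHC data-volume estimate per experiment.
events_per_second = 100          # assumed rate of recorded collisions of interest
event_size_mb = (1, 10)          # 1-10 MB digitised per collision (from the slide)
seconds_per_year = 1e7           # assumed effective running time per year

for mb in event_size_mb:
    rate_gb_s = events_per_second * mb / 1000.0
    volume_pb = rate_gb_s * seconds_per_year / 1e6
    print(f"{mb} MB/event -> {rate_gb_s:.1f} GB/s, ~{volume_pb:.0f} PB/year per experiment")
# Summed over the four experiments this is broadly consistent with the ~15 PB/year quoted above.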

32 The Computing Tools 1. The Local Computing Fabric

33 Challenge: a large, distributed community (ATLAS, CMS, LHCb, ...). Offline software effort: 1000 person-years per experiment. Software life span: 20 years. ~5000 physicists around the world, around the clock.

34 Data Handling and Computation for Physics Analysis: the detector feeds the event filter (selection & reconstruction), which produces raw data; reconstruction turns raw data into event summary data (processed data); event reprocessing, batch physics analysis and event simulation act on these data, producing analysis objects (extracted by physics topic); interactive physics analysis works on the analysis objects.

35 Requirements and Boundaries (I). The High Energy Physics applications require integer processor performance rather than floating-point performance, which drives the choice of processor type and benchmark reference. A large amount of processing and storage is needed, but the optimization is for aggregate performance, not for single tasks; since the events are independent units, this means many components with moderate demands on each single component: coarse-grain parallelism. Basic infrastructure and environment: the availability of space, cooling and electricity is a heavy investment, don't underestimate it.
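
A minimal sketch of what "events are independent units" means for the computing model: each event can be processed on its own, so the farm only needs coarse-grain parallelism, one event (or batch of events) per worker process. process_event and the fake workload below are placeholders, not CERN code:

from multiprocessing import Pool

def process_event(event_id):
    # Placeholder for the reconstruction/analysis of one independent event.
    return event_id, sum(i * i for i in range(1000))   # stand-in for real work

if __name__ == "__main__":
    events = range(10_000)              # independent units of work
    with Pool(processes=8) as pool:     # coarse-grain parallelism across worker processes
        results = pool.map(process_event, events, chunksize=100)
    print(f"processed {len(results)} events")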

37 Requirements and Boundaries (II). The major boundary condition is cost: staying within the budget envelope while getting the maximum amount of resources means commodity equipment with the best price/performance values, not simply the cheapest! Reliability, functionality and performance have to be taken into account together == total cost of ownership. Chaotic workload: batch & interactive, in a research environment == physics analysis by collective iterative discovery, with unpredictable data access and no practical limit to the requirements.

38 View of the different Fabric areas: Automation, Operation, Control (installation, configuration + monitoring, fault tolerance); Infrastructure (electricity, cooling, space); Storage system (AFS, CASTOR, disk servers); Batch system (LSF, CPU servers); Network; GRID services!?; Benchmarks, R&D, Architecture (prototypes, testbeds); Purchase, hardware selection, resource planning. The components are coupled through hardware and software.

39 The current CERN fabric architecture is based, in general, on commodity components: dual Intel processor PC hardware for CPU, disk and tape servers; a hierarchical Ethernet (100/1000/10000 Mbit/s) network topology; NAS disk servers with ATA/SATA disk arrays; the RedHat Linux operating system; medium-end (linear) tape drive technology; open-source software for storage (CASTOR, OpenAFS) and cluster management (Quattor, Lemon, ELF); commercial software packages (LSF, Oracle).

40 Levels of complexity and couplings. Hardware: motherboard, backplane, bus, integrated devices (memory, power supply, controller, ...); CPU in a PC, disk in a storage tray, NAS server or SAN element. Physical and logical coupling: the network (Ethernet, Fibre Channel, Myrinet, ...) with hubs, switches and routers. Software: operating system (Linux), drivers, applications. Cluster: batch system (LSF), mass storage (CASTOR), filesystems (AFS), control software. World-wide cluster: Grid-fabric interfaces, wide area network (WAN), Grid middleware, monitoring, firewalls (services).

41 Building the Farm: desktop processors + node == CPU server; CPU server + larger case + 6*2 disks == disk server; CPU server + Fibre Channel interface + tape drive == tape server. All using the Linux OS.

42 Today's schematic network topology: the WAN is connected via Gigabit Ethernet (1000 Mbit/s); the backbone uses multiple Gigabit Ethernet links (20 * 1000 Mbit/s); disk and tape servers attach via Gigabit Ethernet (1000 Mbit/s); CPU servers via Fast Ethernet (100 Mbit/s). Tomorrow's schematic network topology: the WAN connection and the backbone move to 10 Gigabit Ethernet (the backbone with 200 * 10 Gigabit Ethernet links); disk, CPU and tape servers attach via 10 Gigabit Ethernet or Gigabit Ethernet links.

43 General Fabric Layout: a development cluster and GRID testbeds (new software, new hardware purchases); a certification cluster (the main cluster en miniature); an R&D cluster (new architecture and hardware); a benchmark and performance cluster (current architecture and hardware); service control and management (e.g. stager, HSM, LSF master, repositories, GRID services, CA, etc.); and the main fabric cluster, with 2-3 hardware generations, 2-3 OS/software versions and 4 experiment environments (old, current, new).

44 Software glue. Management of the basic hardware and software: the installation, configuration and monitoring system (from the European Data Grid project). Management of the processor computing resources: the batch system (LSF from Platform Computing). Management of the storage (disk and tape): CASTOR (the CERN-developed Hierarchical Storage Management system).
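
To illustrate how the batch layer is driven, a minimal sketch of wrapping an LSF submission from Python; the queue name, job script and file names are illustrative assumptions, not actual CERN configuration:

import subprocess

def submit_job(executable, queue="1nd", job_name="reco", logfile="job.out"):
    """Submit one job through LSF's bsub and return bsub's acknowledgement line."""
    cmd = ["bsub",
           "-q", queue,      # target queue (example name; queues are site-specific)
           "-J", job_name,   # job name
           "-o", logfile,    # file where LSF writes the job's output
           executable]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()    # e.g. "Job <12345> is submitted to queue <1nd>."

if __name__ == "__main__":
    print(submit_job("./run_reconstruction.sh"))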

45 Linux. Linux is our choice as the OS for all LHC computing, using the RedHat Enterprise version; we have our own 4-person support team. Linux is deployed on ~2000 farm PCs and 1500 desktop nodes. We are still trying to sort out an efficient TCO (total-cost-of-ownership) model: stability versus new features; problem tracking and bug fixes; community support versus licenses and a support contract. Boundary conditions: support of old versions; a heterogeneous user community that can't move to new versions easily; a long and complicated certification process for each new version; several third-party products to be supported.

46 The CERN Computing Centre: ~4000 processors, ~400 TBytes of disk, ~12 PB of magnetic tape. Even with technology-driven improvements in performance and costs, CERN can provide nowhere near enough capacity for LHC!

47 Considerations. The current state of performance, functionality and reliability is good, and technology developments still look promising: more of the same for the future!? How can we be sure that we are following the right path? How do we adapt to changes?

48 Strategy: continue and expand the current system, BUT in parallel do: R&D activities (SAN versus NAS, iSCSI, IA64 processors, ...); technology evaluations (InfiniBand clusters, new filesystem technologies, ...); Data Challenges to test scalability on larger scales, bringing the system to its limit and beyond (we are already very successful with this approach, especially with the 'beyond' part); and watch the market trends carefully.

49 Challenges. 1. Status of the current system: is the stability of the equipment acceptable? Stress-test the equipment; where and what are the weak points / bottlenecks? 2. Physics Data Challenges: test the bookkeeping, organization and management of the data processing. 3. Computing Data Challenges: scalability of software and hardware in the fabric; try to verify whether the current architecture would survive the anticipated load in the LHC era.

50 Dataflow in the local CERN fabric, 2007: a complex organization with high data rates (~10 GBytes/s) and ~100k streams in parallel. The online filter farm (HLT) delivers raw and calibration data to permanent disk storage; the reconstruction, calibration and analysis farms exchange raw data, calibration data, ESD and AOD data through the disk storage pools; data flows on to tape storage and to the Tier-1 data export.

51 High Throughput Prototype (openlab + LCG prototype), specific layout, October 2004: 12 tape servers (STK 9940B); 36 disk servers (dual P4, IDE disks, ~1 TB disk space each); 4 Enterasys N7 10 GE switches and 2 Enterasys X-Series switches; 2 * 50 Itanium 2 nodes (dual 1.3/1.5 GHz, 2 GB memory); 80 IA32 CPU servers (dual 2.4 GHz P4, 1 GB memory); 40 IA32 CPU servers (dual 2.4 GHz P4, 1 GB memory); 80 IA32 CPU servers (dual 2.8 GHz P4, 2 GB memory); node groups attached at 10 GE or 1 GE per node; a 20 TB IBM StorageTank; 4 * GE connections to the backbone and a 10 GE WAN connection.

52 IT Data Challenge: aggregate performance in GBytes/s for CPU, disk and tape, running in parallel with the increasing production service; average daytime rates in MB/s and tape server intervention times in minutes (plot).

53 The CERN computer center in 2008: a hierarchical Ethernet network tree topology (280 GBytes/s); ~8000 mirrored disks (4 PB); ~4000 dual-CPU nodes (20 million SI2000); ~170 tape drives (4 GB/s); ~25 PB of tape storage; estimated investment: ~50 million Euro. All numbers hold only IF the exponential growth rate (Moore's law) continues!
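
A few per-unit figures that follow directly from these totals (simple division of the slide's numbers, nothing more):

# Derived per-unit figures for the projected 2008 computer centre.
nodes, total_si2000 = 4000, 20e6     # dual-CPU nodes and total CPU capacity
tape_drives, tape_gb_s = 170, 4      # tape drives and aggregate tape bandwidth
backbone_gb_s = 280                  # aggregate network backbone bandwidth

print(f"SI2000 per dual-CPU node : {total_si2000 / nodes:,.0f}")                 # ~5,000
print(f"tape bandwidth per drive : {tape_gb_s * 1000 / tape_drives:.0f} MB/s")   # ~24 MB/s
print(f"backbone share per node  : {backbone_gb_s * 1000 / nodes:.0f} MB/s")     # ~70 MB/s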

54 The Computing Tools 2. The World Wide GRID

55 Why the GRID? The CERN computer center can deliver only a fraction (~10%) of the CPU/disk capacity needed for the analysis of the huge amount of data delivered by the LHC experiments. We need a transparent mechanism for the physicists to run their analysis jobs anywhere in the world.

56 What is a Grid? Scavenging unused cycles has been going strong since 1986 (e.g. the Berkeley Open Infrastructure for Network Computing); it is not so easy to scavenge unused storage.

57 What is the Grid? Resource sharing: on a global scale, across labs and universities. Secure access: needs a high level of trust. Resource use: load balancing, making the most efficient use. The death of distance: requires excellent networking (e.g. 5.44 Gbps sustained, 1.1 TB transferred in 30 minutes, 20 April 2004). Open standards: allow constructive distributed development. There is not (yet) a single Grid.
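
A quick check that the two transfer figures quoted above are consistent with each other (assuming decimal terabytes):

# 1.1 TB moved in 30 minutes, expressed as an average line rate.
bytes_moved = 1.1e12            # 1.1 TB
seconds = 30 * 60
gbps = bytes_moved * 8 / seconds / 1e9
print(f"average rate: {gbps:.2f} Gbit/s")   # ~4.9 Gbit/s, same order as the 5.44 Gbps peak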

58 The GRID middleware: how will it work? It finds convenient places for the scientist's job (computing task) to be run; optimises the use of the widely dispersed resources; organises efficient access to the scientific data; deals with authentication at the different sites that the scientist will be using; interfaces to the local site authorisation and resource allocation policies; runs the jobs; monitors progress; recovers from problems; and tells you when the work is complete and transfers the result back!

59 Virtual Organizations for LHC and others (ATLAS VO, CMS VO, BioMed VO, ...): the coupling of computer centres.

60 A Job Submission Example: at the UI the user describes the job in JDL and submits it together with its input sandbox; the Resource Broker consults the Information Service, the Data Management Services (LFN->PFN) and authorisation & authentication, then hands the job and brokerinfo to the Job Submission Service; the job runs on a Compute Element, reading data from a Storage Element; the job status is tracked in Logging & Book-keeping and can be queried; the output sandbox is finally returned to the user.
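
To make the flow concrete, a minimal sketch of the user-side step: a tiny JDL description is written out and handed to the workload management tools. The JDL attribute names are the standard EDG ones; the submitter command, script and file names are illustrative assumptions, not taken from the slide:

import subprocess, textwrap

# A minimal EDG-style JDL description of one job (illustrative values).
jdl = textwrap.dedent("""\
    Executable    = "run_analysis.sh";
    Arguments     = "dataset42";
    StdOutput     = "analysis.out";
    StdError      = "analysis.err";
    InputSandbox  = {"run_analysis.sh"};
    OutputSandbox = {"analysis.out", "analysis.err"};
""")

with open("analysis.jdl", "w") as f:
    f.write(jdl)

# Hand the JDL to the resource broker via the command-line UI (command name assumed, EDG era).
subprocess.run(["edg-job-submit", "analysis.jdl"], check=True)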

61 High Energy Physics: leading and leveraging Grid technology. Many national and regional Grid projects: GridPP (UK), INFN-Grid (I), NorduGrid, Dutch Grid, US projects, European projects.

62 The LHC Computing Grid Project (LCG). Collaboration: the LHC experiments, Grid projects in Europe and the US, regional & national centres. Choices: adopt Grid technology; go for a Tier hierarchy; use Intel CPUs in standard PCs; use the Linux operating system. Goal: prepare and deploy the computing environment to help the experiments analyse the data from the LHC detectors. (Diagram: CERN Tier-0 at the centre; Tier-1 centres in e.g. the USA, Italy, UK, France, Japan, Germany and Taipei; Tier-2 grids for regional groups; Tier-3 physics department clusters and desktops; grids for physics study groups.)

63 LHC Computing Model (simplified!!). Tier-0, the accelerator centre: filter the raw data; reconstruction into summary data (ESD); record raw data and ESD; distribute raw data and ESD to the Tier-1s. Tier-1: permanent storage and management of raw, ESD, calibration data, metadata, analysis data and databases; a grid-enabled data service; data-heavy analysis; re-processing raw -> ESD; national and regional support; online to the data acquisition process; high availability, long-term commitment, managed mass storage. Data distribution: ~70 Gbits/s. Tier-2: well-managed, grid-enabled disk storage; simulation; end-user analysis, batch and interactive; high-performance parallel analysis (PROOF). Below the Tier-2s: small centres, desktops and portables. (Sites named on the slide include TRIUMF, RAL, IN2P3, FNAL, CNAF, FZK, PIC, BNL, Taipei, NIKHEF, Legnaro, Rome, CIEMAT, CSCS, IC, Cambridge, Budapest, Prague, Krakow, ICEPP, USC, MSU, UB and IFCA.)

66 Challenges. Service quality: reliability, availability, scaling, performance. Security: our biggest risk. Management and operations: a grid is a collaboration of computing centres. Maturity is some years away; a second (or third) generation of middleware will be needed before LHC starts. In the short term there will be many grids and middleware implementations for LCG, and inter-operability will be a major headache. How homogeneous does it need to be? Standards help to avoid adapters.

67 The Summary

68 The scientific collaborations are large, global, and already in place. There will be a lot of data: complex data handling and large amounts of storage, 10s of PB, and that will need a lot of processing power, of the order of 100K processors. The vast majority of the PCs will use Linux as the operating system, a key element of the architecture. We need to pay attention to market developments; the technology itself is of secondary concern. We need to have the computing facility in perfect operational shape by the end of 2006; not much time is left for such a complex operation. A utility grid looks like a very good fit for LHC, and LHC looks like an ideal pilot application for a utility grid.
