Next Generation Tier 1 Storage
1 Next Generation Tier 1 Storage
Shaun de Witt (STFC)
With contributions from: James Adams, Rob Appleyard, Ian Collier, Brian Davies, Matthew Viljoen
HEPiX Beijing, 16th October 2012
2 Introduction
- Why are we doing this?
- What were the evaluation criteria?
- Candidates
- Selections and omissions
- Tests
- Status
- Timeline
  - Aim for a production service for the 2014 data run
3 Why are we doing this?
CASTOR is working well for us, but:
- CASTOR is optimised for many disk servers per pool
  - Getting harder, as the cost-optimal server size keeps growing and we have many storage pools
- Scheduling overhead / hot-spotting
  - TransferManager (the LSF replacement) has improved this a LOT
- Oracle costs!
- The nameserver is a single point of failure
- Not a mountable file system
  - Limits take-up outside of WLCG/HEP (this matters, as we also use CASTOR for storage for other STFC user groups)
- Requires (quite a lot of) specific expertise
- Disk-only CASTOR is not very widely deployed now
  - Others run EOS (CERN ATLAS+CMS), DPM (ASGC)
  - Could cause delays in support resolution
  - Reduced community support
  - Greater risk in meeting future requirements for disk-only
4 Criteria
Mandatory:
- Must make more effective use of existing h/w
- Must not restrict hardware purchasing
  - Must be able to use ANY commodity disk
- Must support end-to-end checksumming
  - Must support ADLER32 checksums (see the sketch below)
- Must support xrootd and gridftp
- Must support X509 authentication
- Must support ACLs
- Must be scalable to 100s of petabytes
- Must scale to > files
- Must at least match the I/O performance of CASTOR
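The slides do not show how end-to-end checksumming would be verified in practice. Purely as an illustrative sketch (not part of the talk), the ADLER32 of a file can be computed with Python's standard zlib module and compared between the source and the destination copy after a transfer; the file paths below are hypothetical.

```python
import zlib

def adler32_of_file(path, chunk_size=1 << 20):
    """Compute the ADLER32 checksum of a file, reading it in 1 MiB chunks."""
    checksum = 1  # ADLER32 is seeded with 1
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            checksum = zlib.adler32(chunk, checksum)
    return checksum & 0xFFFFFFFF

# End-to-end verification: the checksum recorded at the source must match
# the checksum recomputed on the destination copy after the transfer.
source_sum = adler32_of_file("/data/source/file.root")            # hypothetical path
destination_sum = adler32_of_file("/data/destination/file.root")  # hypothetical path
if source_sum != destination_sum:
    raise RuntimeError(f"checksum mismatch: {source_sum:08x} != {destination_sum:08x}")
```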
5 Criteria
Desirable:
- Should provide an NFS-mountable file system
- Should be resilient to hardware failure (disk/memory/node)
- Should support checksum algorithms other than ADLER32
- Should be independent of licensed software
  - Any required database should be supported by the STFC DB Team
- Should provide an HTTP or WebDAV interface
- Should support an SRM interface
- Should provide a POSIX interface and file protocol
- Should support username/password authentication
- Should support Kerberos authentication
- Should support hot-file or hot-block replication
6 What else are we looking for?
- Draining
  - Speed up deployment/decommissioning of new hardware
- Removal
  - Great if you can get data in fast, but if deletes are slow...
- Directory entries
  - We already know experiments have large numbers of files in each directory; need support for lots (see the timing sketch below)
- Support
  - Should have good support and wider usage
- IPv6 support
  - Not on the roadmap for CASTOR; RAL and the Tier 1 are starting to look at IPv6
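As a rough illustration of the removal and directory-entry concerns above (our sketch, not something from the talk), a small script can time create, list and delete for many entries in a single directory of the filesystem under test. The mount point and file count are placeholders.

```python
import os
import time
import tempfile

def time_directory_ops(mount_point, n_files=100_000):
    """Rough create/list/delete timings for many entries in a single directory."""
    with tempfile.TemporaryDirectory(dir=mount_point) as d:
        t0 = time.monotonic()
        for i in range(n_files):
            with open(os.path.join(d, f"f{i:07d}"), "wb") as f:
                f.write(b"x")
        t1 = time.monotonic()
        entries = os.listdir(d)          # directory-entry listing cost
        t2 = time.monotonic()
        for name in entries:
            os.unlink(os.path.join(d, name))
        t3 = time.monotonic()
    print(f"create: {n_files / (t1 - t0):.0f} files/s, "
          f"list: {t2 - t1:.2f} s for {len(entries)} entries, "
          f"delete: {n_files / (t3 - t2):.0f} files/s")

# hypothetical mount point of the candidate filesystem
time_directory_ops("/mnt/candidate-fs", n_files=10_000)
```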
7 Candidates
- HEP(ish) solutions: dcache, DPM, StoRM+Lustre, AFS
- Parallel and scale-out solutions: HDFS, OpenStack, OrangeFS, MooseFS, Ceph, Tahoe-LAFS, XtreemFS, GfarmFS, GlusterFS, GPFS
- Integrated solutions: IBM SONAS, Isilon, Panasas, DDN, NetApp
A paper (twiki) based review was carried out, supplemented by anecdotal/reported experience where we know of sites running these in production (i.e. reports at HEPiX etc.).
8 Selection
9 Selections
- dcache: still has SPoFs
- CEPH: still under development; no lifecycle checksumming
- HDFS: partial POSIX support using FUSE
- OrangeFS: no file replication; fault-tolerant version in preparation
- Lustre: requires a server kernel patch (although the latest doesn't); no hot-file replication?
10 Some rejections...
- DPM
  - Well known and widely deployed within HEP
  - But: no hot-file replication, SPoFs, NFS interface not yet stable, some questions about scalability
- AFS
  - Massively scalable (>25k clients)
  - But: not deployed in high-throughput production, cumbersome to administer, security concerns
- GPFS
  - Excellent performance
  - But: to get all the required features you are locked into IBM hardware; licensed
- EOS
  - Good performance, automatic file replication
  - But: limited support
11 Tests
- IOZone tests, à la disk server acceptance tests
- Read/write throughput tests, sequential and parallel (see the sketch below)
  - file/gridFTP/xroot
- Deletions
- Draining
- Fault tolerance
  - Rebuild times
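IOZone was used for the actual measurements; the sketch below is only a rough Python analogue of the parallel sequential-write part of the test (2 GB per writer, sweeping the number of concurrent writers, as in the preliminary result plots that follow). The mount point is a placeholder and the printed numbers are not comparable with real IOZone output.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

FILE_SIZE = 2 * 1024**3   # 2 GB per writer, matching the test-file size in the results
BLOCK = 4 * 1024**2       # write in 4 MiB blocks

def write_one(path):
    """Sequentially write one large file and return the elapsed time in seconds."""
    block = os.urandom(BLOCK)
    t0 = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the storage
    return time.monotonic() - t0

def parallel_write_test(mount_point, n_writers):
    """Run n_writers concurrent 2 GB writes and report aggregate throughput."""
    paths = [os.path.join(mount_point, f"iotest_{i}.dat") for i in range(n_writers)]
    t0 = time.monotonic()
    with ThreadPoolExecutor(max_workers=n_writers) as pool:
        times = list(pool.map(write_one, paths))
    wall = time.monotonic() - t0
    rate = n_writers * FILE_SIZE / wall / 1024**2
    print(f"{n_writers:3d} writers: slowest {max(times):6.1f} s, aggregate {rate:6.1f} MB/s")
    for p in paths:
        os.unlink(p)

# hypothetical mount point; sweep the writer count as in the preliminary results
for n in (1, 2, 4, 8, 16):
    parallel_write_test("/mnt/candidate-fs", n)
```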
12 Status
- dcache: under deployment; testing not yet started
- Lustre: deployed; IOZone tests complete, functional tests ongoing
- OrangeFS: deployed; IOZone tests complete, functional tests ongoing
- CEPH: RPMs built, being deployed
- HDFS: installation complete; problems running the IOZone tests, other tests ongoing
13 Results (Preliminary)
[Chart: write time (seconds) for a 2 GB test file vs. number of parallel write operations, for Lustre, OrangeFS and CASTOR]
- Note Lustre hitting a wall doing parallel writes
- Not entirely understood yet (we have some hints from other sites)
- Assume this is a setup/tuning issue
- Exactly the kind of thing that could disqualify it if we can't fix it
14 Results (Preliminary)
[Chart: write rate (MB/sec) for a 2 GB source file vs. number of parallel write operations, for Lustre, OrangeFS and CASTOR]
15 Results (Preliminary)
[Chart: read time (seconds) for a 2 GB source file vs. number of parallel read operations, for Lustre, OrangeFS and CASTOR]
16 Results (Preliminary)
[Chart: read rate (MB/sec) for 2 GB source files vs. number of concurrent read operations, for Lustre, OrangeFS and CASTOR]
17 Results
Very provisional so far:
- CASTOR is rather well tuned at RAL; Lustre and OrangeFS are hardly tuned
Non-binding summary:
- OrangeFS and Ceph look promising in the long term, but are immature
- dcache surely could do most of what we need, but is still file based
- Lustre is promising
  - Like that it is block based
  - Like that there are no SPoFs
  - Stable; could live with occasional downtimes for upgrades
18 Timeline
Provisional:
- Dec 2012: final selection (primary and reserve)
- Mar 2013: pre-production instance available; WAN testing and tuning
- Jun 2013: production instance
Depends on...
- Hardware availability
- Quattor profiling - configuration should be less complex than CASTOR
- Results from test instances
Open questions:
- One large instance, or multiple smaller ones as now?
  - A large instance with dynamic quotas has some attractions
- Migration from CASTOR
  - Could just use the lifetime of the hardware
