GPFS in data taking and analysis for new light source (X-Ray)
1 GPFS in data taking and analysis for new light source (X-Ray)
formerly known as "Large Data Ingest Architecture"
Martin Gasthuber, Stefan Dietrich, Marco Strutz, Manuela Kuhn, Uwe Ensslin, Steve Aplin, Volker Guelzow / DESY
Ulf Troppens, Olaf Weiser / IBM
GPFS-UG Austin, November 2015
2 DESY: where & what & why
> founded December 1959
> research topics
  Accelerator Science (>5 accelerators already built)
  Particle Physics (HEP)
  Photon Science (physics with X-rays), e.g. X-ray crystallography: looking at everything from viruses to van Gogh and fuel cells
  Astroparticle Physics
> 2 sites: Hamburg, Germany (main site) and Zeuthen, Germany (close to Berlin)
> ~2300 employees, ~3000 guest scientists annually
3 PETRA III
> storage ring accelerator, 2.3 km circumference
> X-ray radiation source
> since 2009: 14 beamlines in operation
> 2016: 10 additional beamlines (extension) start operations
(figure: sample raw file, beamline P11, Bio-Imaging and Diffraction)
4 why build a new online storage and analysis system?
> massive increase in #files and #bytes (orders of magnitude)
> dramatic increase in data complexity: more I/O & CPU, moving from crystal-bound targets to liquid-bound targets
5 becomes less precise, more complex, more iterative
6 user perspective
> run startbeamtime <beamtime-id>
  from relational DB get [PI, participants, access rights, nodes, ...]
  create filesets, create default dirs + ACLs, set up policy runs, quota settings, default XATTR settings, create shares for NFS+SMB, start ZeroMQ endpoints (a sketch of such a setup step follows below)
> take data
  mostly detector specific; mostly done through NFS & SMB today, ZeroMQ ("vacuum cleaner") growing
  experimenters not present on site can start accessing the data remotely: portal (HTTP), FTP, Globus Online (also used for remote data ingest)
> run stopbeamtime <beamtime-id>
  verify all files copied; unlink fileset (beamline fs), unshare exports, stop ZeroMQ endpoint; update XATTR to reflect state
> users continue data analysis on the core filesystem & via remote access (native GPFS, NFS, SMB)
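The startbeamtime step lends itself to a small automation script. The following is a minimal sketch under stated assumptions, not the DESY tooling: the fileset naming scheme, mount paths, quota value and the user.beamtime attribute are illustrative, and only standard GPFS/Linux commands (mmcrfileset, mmlinkfileset, mmsetquota, setfattr) are invoked.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a 'startbeamtime' step: create a per-beamtime
fileset on the beamline filesystem, link it, set a quota and tag the
directory with an extended attribute for later policy runs."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def start_beamtime(beamtime_id, device="beamlinefs", quota="80T"):
    fileset = f"bt_{beamtime_id}"        # assumed naming scheme
    path = f"/gpfs/{device}/{fileset}"   # assumed mount layout
    run(["mmcrfileset", device, fileset, "--inode-space", "new"])
    run(["mmlinkfileset", device, fileset, "-J", path])
    run(["mmsetquota", f"{device}:{fileset}", "--block", f"{quota}:{quota}"])
    # tag the directory so later policy runs / the portal can find the beamtime
    run(["setfattr", "-n", "user.beamtime", "-v", str(beamtime_id), path])

if __name__ == "__main__":
    start_beamtime("11001234")
```

The real workflow additionally queries the beamtime database, creates ACLs, NFS/SMB shares and ZeroMQ endpoints; those pieces are site specific and omitted here.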
7 ZeroMQ usage
(diagram: detectors/source, POSIX I/O / detector-specific, ZeroMQ entrance, ZeroMQ fan-out, ZeroMQ/IP protocol, data analysis nodes, ZeroMQ fan-in, ZeroMQ exit, GPFS I/O / POSIX, GPFS/sink)
> more patterns expected (a minimal worker sketch follows below)
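For illustration, a minimal pyzmq worker in the fan-out/fan-in (PUSH/PULL) style of the diagram above; the endpoint addresses, ports and message format are assumptions, not the DESY protocol.

```python
"""Minimal fan-out/fan-in worker sketch using pyzmq PUSH/PULL sockets."""
import zmq

def analyse(frame: bytes) -> bytes:
    return frame  # pass-through placeholder for the real analysis step

def worker(ventilator="tcp://localhost:5557", sink="tcp://localhost:5558"):
    ctx = zmq.Context.instance()
    recv = ctx.socket(zmq.PULL)   # fan-out side: many workers PULL from one PUSH
    recv.connect(ventilator)
    send = ctx.socket(zmq.PUSH)   # fan-in side: many workers PUSH into one PULL
    send.connect(sink)
    while True:                   # blocks waiting for detector frames
        frame = recv.recv()
        send.send(analyse(frame))
```

One such worker would run per analysis node; the entrance and exit processes bind the corresponding PUSH and PULL sockets.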
8 from the cradle to the grave: the overall dataflow
(diagram: detectors feed the beamline filesystem via NFS, SMB and 0MQ; data flows on to the core filesystem, to the analysis CPUs (Linux/Windows via NSD, NFS, SMB), to the γ-portal and to tape; labels include "sandbox per beamline", "3 min delay", "~25 min delay", "~30 min delay")
> beamline filesystem: keep copy of data for several days/weeks
> core filesystem: keep copy of data for several months (>10)
9 Beamline Filesystem
> "wild-west" area for the beamline
> only host-based authentication, no ACLs
> access through NFSv3, SMB or ZeroMQ
> optimized for performance
  1 MiB filesystem blocksize
  NFSv3: ~60 MB/s pre-optimized, ~600 MB/s after tuning! SMB: ~ MB/s
  'carve out' low capacity & large # of spindles
> tiered storage
  Tier 0: SSDs from GS1 (~10 TB), migration after a short period of time
  Tier 1: ~80 TB capacity on GL4/6
(diagram: detector via proprietary link to control server and beamline nodes; access through NFS/SMB/0MQ via proxy nodes; beamline filesystem on GS1 NSD (metadata) and GL6/GL4 NSD; migration between GPFS pools via policy run)
10 Core Filesystem
> settled world
> full user authentication
> NFSv4 ACLs (only)
> access through NFSv3, SMB or native GPFS (dominant)
> GPFS policy runs copy the data from the beamline filesystem (see the sketch after this slide)
  single UID/GID (many to one), ACLs inherited from ..., raw data set to immutable
> 8 MiB filesystem blocksize
> fileset per beamtime
(diagram: analysis nodes and proxy nodes; beamline filesystem (GS1 NSD, GL6/GL4 NSD, MD) with migration between GPFS pools; a GPFS policy run creates the copy from the beamline to the core filesystem (GL6/GL4 NSD for data, GS1 NSD for metadata))
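As a rough illustration of how a policy run can drive such a copy, the sketch below builds a LIST rule that selects the files of one beamtime by extended attribute and hands them to an external script via mmapplypolicy. The XATTR name, the helper script path and the fileset layout are assumptions; the actual DESY policy rules are not shown on the slide.

```python
"""Sketch: select one beamtime's files with a GPFS LIST rule and pass
them to an external copy script via mmapplypolicy."""
import subprocess
import tempfile

POLICY = """
RULE EXTERNAL LIST 'tocopy' EXEC '/usr/local/sbin/copy_to_core.sh'
RULE 'beamtime_files' LIST 'tocopy'
     WHERE XATTR('user.beamtime') = '{beamtime_id}'
"""

def copy_beamtime(beamtime_id, device="beamlinefs"):
    with tempfile.NamedTemporaryFile("w", suffix=".pol", delete=False) as f:
        f.write(POLICY.format(beamtime_id=beamtime_id))
        policy_file = f.name
    # the external script receives the matched file list and does the copy,
    # ownership squash and immutability setting on the core filesystem side
    subprocess.run(["mmapplypolicy", device, "-P", policy_file, "-I", "yes"],
                   check=True)

if __name__ == "__main__":
    copy_beamtime("11001234")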
11 boxes linked together
(diagram: detectors connect via proprietary links to control servers and beamline nodes in the PETRA III hall; proxy nodes and a 10GE access layer link to the analysis cluster and the DESY network; the P3 hall and the datacenter (~1 km apart, RTT ~0.24 ms) are connected via InfiniBand (56 Gbit FDR, 2 switches, 2 fabrics); ESS GS1 and GL4/GL6 NSD servers sit in the datacenter)
> up to 24 detectors, total 160 Gb/s (320 max.)
> initially 7 proxy nodes
12 structured view: key components
> 4x GPFS clusters
  - looks like one cluster from the user point of view
  - provides separation for admin purposes
> 2x GPFS file systems
  - provide isolation between data taking (beamline file system) and long-term analysis (core file system)
> 2x IBM ESS GS1
  - metadata for the core file system
  - metadata for the beamline file system
  - Tier 0 of the beamline file system
  - having two ESS improves high availability for data taking
> 2x IBM ESS GL4/6
  - Tier 1 of the beamline file system
  - data for the core file system
> additional nodes in the access cluster and beamline cluster; the analysis cluster already exists
> all nodes run GPFS and are connected via InfiniBand (FDR)
13 monitoring
> data tapped from the ZIMon collector -> ElasticSearch (see the sketch below)
> ESS GUI (used supplementary to the others)
> three areas to cover
  dashboards (view only): targeted at beamline, operating and public areas; data from ElasticSearch presented with Kibana
  expert: ElasticSearch data accessed through Kibana, hunt for hidden correlations, logfile analysis (Kibana)
  operating: alert system, Nagios/Icinga and RZ-Monitor (home-grown) checks
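A minimal sketch of feeding one throughput sample into ElasticSearch so it can appear in a Kibana dashboard; the index name, field names and endpoint URL are assumptions, and the production pipeline taps the ZIMon collector rather than posting ad hoc samples like this.

```python
"""Ship a single filesystem throughput sample to ElasticSearch (REST API)."""
import datetime
import json
import urllib.request

ES_URL = "http://elasticsearch.example.org:9200/gpfs-metrics/_doc"  # assumed

def ship_sample(filesystem, read_mbs, write_mbs):
    doc = {
        "@timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "filesystem": filesystem,
        "read_MBs": read_mbs,
        "write_MBs": write_mbs,
    }
    req = urllib.request.Request(
        ES_URL,
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 201 when the document was indexed

if __name__ == "__main__":
    ship_sample("beamlinefs", read_mbs=480.0, write_mbs=610.0)
```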
14 primary civil duty: tuning, tuning, tuning
> several tuning sessions (multiple days of invested effort) got us impressive speed-ups
> getting the right expert on hand (for days) is the key; the know-how is there
> changes with the biggest impact (see the sketch below):
  GPFS: blocksize (1M, 8M), prefetch aggressiveness (NFS prefetch strategy), log buffer size (journal), pagepool (prefetch percentage)
  NFS: r/w size (max)
> continues to be an important, permanent topic!
> finding the real protein is possible, but not always easy
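For orientation, a hedged sketch of applying some of these knobs with mmchconfig. The parameter names pagepool, prefetchPct and nfsPrefetchStrategy are standard GPFS configuration options that appear to correspond to the slide's list; the values shown are placeholders, not DESY's settings, and the filesystem blocksize itself is fixed when the filesystem is created.

```python
"""Apply a few GPFS tuning parameters via mmchconfig (values are placeholders)."""
import subprocess

TUNING = {
    "pagepool": "32G",           # larger pagepool leaves more room for prefetch
    "prefetchPct": "40",         # share of the pagepool used for prefetch/write-behind
    "nfsPrefetchStrategy": "1",  # help sequential detection for NFS-style I/O
}

def apply_tuning(settings=TUNING):
    for key, value in settings.items():
        # -i applies the change immediately and makes it permanent
        subprocess.run(["mmchconfig", f"{key}={value}", "-i"], check=True)
    print("note: blocksize (1 MiB beamline fs, 8 MiB core fs) is set with 'mmcrfs -B'")

if __name__ == "__main__":
    apply_tuning()
```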
15 things we hope(d) would work, but ...
> the current architecture is the result of a process over the last months
> detectors as native GPFS clients
  old operating systems (RHEL 4, SuSE 10, etc.)
  inhomogeneous network for GPFS: InfiniBand and 10G Ethernet
> Windows as native GPFS client: more or less working, but a source of pain
> Active File Management (AFM) as the copy process
  no control during file transfer
  not supported with the native GPFS Windows client
  cache behavior not optimal for this use case
> self-made SSD burst buffer: the majority of SSDs died very fast; the others (NVMe devices) survived but are by far too expensive
> remote cluster UID mapping
16 usage
17 preparing for ...
> next gen.: 6 GB/sec (expect 4-6 next year)
> next 'customer': XFEL (free-electron laser), start in 2016/2017
  this setup as a blueprint, but larger (PB per year) and faster (GB/sec); initially three experiments
18 summary
> key points for going with GPFS as the core component
  the creator knows the business
  well-hung (mature) technology; storage is conservative
  fast & reliable & scalable (to a large extent ;-)
  GNR usage a must
  XATTR, NFSv4 ACLs (no more POSIX ACLs!)
  federated (concurrent) access by NFS, SMB and the native GPFS protocol
  policy runs drive all the automatic data flow (ILC)
  distinguished data management
  scale
  integration into (our weird) environments/infrastructure (auth, dir services, ...)
> future stuff: burst buffer, CES (incl. Ganesha), QoS, ...
  continuing: expect even higher bandwidth with Ganesha (~800 MB/sec single thread)
  Ganesha running NFSv4.1 (including pNFS ;-), our long-term agenda point