Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. dcache Introduction
1 dcache Introduction
Forschungszentrum Karlsruhe GmbH, Institute for Scientific Computing, P.O. Box 3640, D-76021 Karlsruhe, Germany
Dr. Doris Ressmann
2 What is dcache?
- Developed at DESY and FNAL
- Disk pool management with or without a tape backend
- Data may be distributed among a large number of disk servers
- Fine-grained configuration of the pool attraction scheme
- Automatic load balancing via a cost metric and inter-pool transfers
- Data is removed only if space is needed
3 Pool Selection Mechanism
Pool selection is required for all transfers: client to dcache, tape to dcache, dcache to dcache, and dcache to client. It is done in two steps:
1. Query the configuration database: which pools are allowed for the requested operation?
2. Query the allowed pools for their vital functions: find the pool with the lowest cost for the requested operation.
4 Pool Selection Mechanism: Tuning Space vs. Load
For each request, the central cost module generates two cost values for each pool:
- Space: cost based on the available space or the LRU time stamp
- CPU: cost based on the number of active movers (in, out, ...)
The final cost, which is used to determine the best pool, is a linear combination of the space and CPU costs; the coefficients need to be configured (a minimal sketch follows below).
- Space coefficient << CPU coefficient. Pro: movers are nicely distributed among the pools. Con: old files are removed rather than empty pools being filled up.
- Space coefficient >> CPU coefficient. Pro: empty pools are filled up before any old file is removed. Con: 'clumping' of movers on pools with very old files or much free space.
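To make the selection concrete, here is a minimal C sketch of the two-step mechanism described on slides 3 and 4. The struct fields, the coefficient names w_space and w_cpu, and the cost values are all assumptions for illustration; the real cost module lives in the dcache PoolManager and is more elaborate.

    #include <stddef.h>

    /* Hypothetical per-pool cost values, as delivered by the cost module. */
    struct pool {
        const char *name;
        int         allowed;     /* set if the configuration database permits this pool */
        double      space_cost;  /* based on free space or the LRU time stamp */
        double      cpu_cost;    /* based on the number of active movers */
    };

    /* Configurable coefficients of the linear combination (assumed names). */
    static const double w_space = 1.0;
    static const double w_cpu   = 1.0;

    static double total_cost(const struct pool *p)
    {
        return w_space * p->space_cost + w_cpu * p->cpu_cost;
    }

    /* Step 1 filters by the configuration database; step 2 picks the
     * allowed pool with the lowest combined cost. */
    const struct pool *select_pool(const struct pool *pools, size_t n)
    {
        const struct pool *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (!pools[i].allowed)
                continue;
            if (best == NULL || total_cost(&pools[i]) < total_cost(best))
                best = &pools[i];
        }
        return best;
    }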
5 dcache properties
- Automatic migration to tape
- Supports read-ahead buffering and deferred writes
- Uses the standard 'ssh' protocol for the administration interface (see the example below)
- Supports the ssl, kerberos and gsi security mechanisms
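For illustration, logging in to the administration interface of a classic dcache 1.x installation looked roughly like this; the host name is a placeholder, and the port and cipher are the historical defaults, so verify them against the local setup:

    ssh -c blowfish -p 22223 -l admin headnode.example.org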
6 LCG Storage Element
- The DESY dcap library integrates with the CERN GFAL library
- SRM version ~1 (1.7) supported
- gsiftp supported
7 Multiple access of one file
[Diagram: copies of File 1 exist on more than one pool (Pool 1, Pool 2, Pool 3), allowing concurrent access.]
8 dcache environment
[Diagram: compute nodes and the login server access the pools via gridftp or the mountpoint; the head node (srm, gsiftp, srmcp) coordinates file transfers; the pools flush files to a TSM server with tapes.]
9 PNFS (Perfectly Normal File System)
[Diagram: pools and tape hold the real data; the pnfs database holds the file names and metadata.]
10 Databases
- gdbm databases
- Experiment-specific databases with independent access
- Content of the metadata (see the example below):
  - the user file name
  - the file name within dcache
  - information about the tape location (storage class)
  - the pool name where the file is located
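Part of this metadata can be inspected directly through the pnfs mountpoint via its 'magic' dot commands. A small hedged example, assuming a mounted /pnfs tree and a file called testfile (the dot-command syntax is the historical pnfs convention and may differ on other installations):

    cd /pnfs/gridka.de/data/user
    cat '.(id)(testfile)'     # prints the internal pnfs id of testfile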
11 Access to dcache
- Mountpoint: ls, mv, rm, checksum, ...
- dcap: dccp <source> <destination>, or the library calls dc_open(...), dc_read(...) (see the sketch below)
- Gridftp: problematic when the file needs to be staged first
- SRMCP
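The dcap library mirrors the POSIX I/O calls, so reading a file programmatically is a near-literal translation of open/read/close. A minimal sketch, assuming libdcap is installed and using a placeholder pnfs path:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <dcap.h>   /* libdcap: dc_open, dc_read, dc_close */

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        /* Placeholder path below a pnfs mountpoint. */
        int fd = dc_open("/pnfs/gridka.de/data/user/testfile", O_RDONLY);
        if (fd < 0) {
            perror("dc_open");
            return EXIT_FAILURE;
        }
        while ((n = dc_read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);   /* copy the content to stdout */
        dc_close(fd);
        return EXIT_SUCCESS;
    }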
12 dcache Access Protocol (dcap): write a file to dcache
On the login server: dccp -d 2 <source file> <pnfs mountpoint>
- Connection to the head node, which returns an available pool node
- The file is copied directly into that pool node
- On the pool the data is first 'precious' (cannot be deleted); after it has been flushed to tape it becomes 'cached' (can be deleted from the pool)
13 dcache Access Protocol (dcap): copy a file out of dcache
On the login server: dccp -d 2 <pnfs mountpoint> <destination file>
- Connection to the head node: is the file in any pool?
- If not, the data is first staged back from tape
- The file is copied from the pool to the destination
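Taken together, a hedged round-trip example with hypothetical paths (both file names are placeholders):

    dccp -d 2 /tmp/local.dat /pnfs/gridka.de/data/user/local.dat   # write into dcache
    dccp -d 2 /pnfs/gridka.de/data/user/local.dat /tmp/copy.dat    # read back, staging from tape if needed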
14 gsiftp
Only for registered dcache users!
grid-proxy-init
globus-url-copy -dbg \
  file:///grid/fzk.de/mounts/nfs/home/ressmann/testfile \
  gsiftp://dcacheh1.gridka.de/ressmann/file1
15 srmcp
Only for registered dcache users!
grid-proxy-init
srmcp -debug=true -gsissl=true \
  srm://castorgrid.cern.ch:80//castor/cern.ch/grid/dteam/castorfile \
  srm://srm1.fzk.de:8443//pnfs/gridka.de/data/ressmann/file2
srmcp -debug=true \
  srm://srm1.fzk.de:8443//pnfs/gridka.de/data/ressmann/file2 \
  file:////tmp/file2
16 [figure-only slide]
17 [figure-only slide]
18 Tape Management at Forschungszentrum Karlsruhe
- Tivoli Storage Manager (TSM) is used for library management
- TSM was not developed for archiving
- If a TSM archive run is interrupted, there is no control over what has actually been archived
19 dcache tape access
- Convenient HSM connectivity (implemented for Enstore, OSM and TSM; poorly suited to HPSS)
- Creates a separate session for every file
- Transparent access
- Allows transparent maintenance of the HSM
20 dcache tape management
- Precious data is collected separately per 'storage class'
- Each 'storage class' queue has individual parameters steering the tape flush operation:
  - the maximum time a file is allowed to stay 'precious' per 'storage class'
  - the maximum number of precious bytes per 'storage class'
  - the maximum number of precious files per 'storage class'
- The maximum number of simultaneous tape flush operations can be configured (see the sketch below)
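As an illustration, these thresholds were set per pool through the admin interface in classic dcache; the command below follows the historical 'queue define class' syntax from the dcache documentation, with a hypothetical storage class, so treat the exact flags as an assumption to verify locally:

    # flush the storage class 'atlas:raw' to the TSM backend once 500 precious
    # files or 20 GB of precious data have accumulated, or after one hour
    queue define class tsm atlas:raw -pending=500 -total=21474836480 -expire=3600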
21 dcache pool node
[Diagram of a dcache pool node with 800 GB of pool space; the labels '20 GB' and '1h' indicate the flush thresholds.]
22 Tivoli Storage Manager (tsm) [figure-only slide]
23 Tivoli Storage Manager (tsm) after dcache tuning [figure-only slide]
24 [figure-only slide]
25 Problematic Hardware
- 3ware RAID controller with 1.6 TB
- Always in degraded mode
- Rebuild speeds varied between 70 kB/s and 10 MB/s
- Lost data
26 Installation experience
- Little documentation
- RPMs created for Tier 2 centres; e.g. read access from tape was removed
- New versions are in new locations
- Every new installation is getting easier
- Great support from DESY, especially from Patrick Fuhrmann
27 User Problems
- Use of cp on the mountpoint
- Overwriting of existing files
- Third-party transfers with globus-url-copy
28 Other File Management Systems
- CERN Advanced STORage manager (Castor)
- Jefferson Lab Asynchronous Storage Manager (JASMine)
- xrootd (SLAC)
- Cluster file systems (SAN FS, AFS, GPFS)
29 Conclusion
- Low-cost read pools, reliable write pools
- Fast file access with low-cost hardware
- Write once: never change a dcache file
- Single point of failure
- Future work: using cheap disks to pre-stage a whole tape
30 [figure-only slide]