NERSC Archival Storage: Best Practices
1 NERSC Archival Storage: Best Practices
Lisa Gerhardt, NERSC User Services
Nick Balthaser, NERSC Storage Systems
Joint Facilities User Forum on Data Intensive Computing, June 18, 2014
2 Agenda
Introduction to Archival Storage
- Archive vs. backup
- Data lifecycle management and tiered storage
- Features of the NERSC archive
- NERSC user case study
- Access methods (clients)
Optimizing Archival Storage
- Tape IO characteristics
- Storage and retrieval strategies
Data Redundancy and Integrity
- 3-2-1 rule
- Checksums
3 What is an archive?
Long-term data storage
- Data that is no longer modified or regularly accessed
- Often the only copy of the data
- Valuable data to be kept for the long term
An archive is not a backup
- A backup is a copy of production data
- If a backup is lost, production data is still online
- Value and retention of backup data is short-term
- The main purpose of a backup is fast recovery
The NERSC archive has files dating back to the 1970s
- NERSC began using HPSS storage system software in 1998
- Data migrated from previous archive systems including CFS and Unitree
4 Why should I use an archive?
Data growth is exponential; file system space is finite
- 80% of stored data is never accessed after 90 days
- The cost of storing infrequently accessed data on flash or spinning disk is prohibitive
Important, but less frequently accessed, data should be stored in an archive to free higher-performing resources for the processing workload
5 Data Lifecycle Management
Manage data according to access patterns and media cost. At NERSC:
Scratch File System
- High capacity, fast access, high IO throughput
- Temporary storage of large data files and compute output
- Regular purges, not backed up
Project File System
- Medium capacity, medium-term storage of shared data
- Designed for file sharing
HPSS
- Tape-backed archival storage system
NERSC users are responsible for their own data management
6 Features of the NERSC archive
NERSC implements an online or active archive
- Parallel high-speed transfer and fast data access
- Data is transferred over parallel connections on the NERSC internal 10Gb network
- Access to first byte in seconds or minutes, as opposed to hours or days
Tiered internal storage facilitates high-speed data access:
- Initial data ingest to a high-performance disk cache
- Data migrated to an automated enterprise tape system, managed by HSM software (HPSS) based on file age and usage
Indefinite data retention policy
The archive is a shared multi-user system
- No batch system; inefficient use affects others. Session limits are enforced.
7 The Archive is an HSM
The NERSC archive is a Hierarchical Storage Management (HSM) system
- Highest performance requirements and access characteristics at the top level
- Lowest cost, greatest capacity at the lower levels
- Migration between levels is automatic, based on access patterns
- Tiers range from fast disk, through high-capacity disk, down to local or remote disk and tape
HPSS performs differently than a file system
- Metadata is kept in a transactional database (IBM DB2) for integrity
- It takes time to move data between hierarchies
8 NERSC Archive: HPSS
Currently holds about 60 PB; total capacity is 240 PB.
9 HPSS is Heavily Used
10 NERSC User Case Study: Daya Bay
- Neutrino oscillation experiment in China
- High-precision measurement of a neutrino oscillation parameter
- One of Science Magazine's top ten breakthroughs of 2012
- Data analysis and simulation done primarily on PDSF (HEP/NP cluster) at NERSC
11 NERSC User Case Study: Daya Bay
- NERSC is their US Tier 1 facility
- Archive of raw data on HPSS
- Data is copied to HPSS within minutes of a run finishing
- Yearly data analysis
- Constant upload rate of about 220 GB/day
12 Accessing NERSC HPSS
- HSI: fast, parallel transfers; unix shell-like interface
- HTAR: parallel; puts a tar file directly into HPSS; excellent for groups of small files
- FTP/PFTP: standard and high-performance parallel FTP (NERSC platforms)
- GridFTP: Grid (GSI) authentication; enables 3rd-party transfer
- Globus: web-enabled reliable transfer
13 Optimizing Archival Storage
14 Tape IO Characteristics
Ultimately all data in the archive is written to tape
Tape is sequential (linear) media and behaves differently than disk media:
- Data cannot be rewritten in place; it is appended after the end of existing data
- Reading and writing are sequential operations; there is no random access
Tape drives behave differently than disk drives
- They take time to seek to file locations on tape
- They take time to ramp up to full speed
- They pause after reading or writing each file (file sync)
HPSS does not respond like a normal file system
- It presents itself as one, but some operations can have unexpected results
15 Reading from Tape
Loading a tape into a drive and positioning to the beginning of data are the slowest system activities
- Average time to data on current drive technology is 45 s, plus cartridge load time
- Reading a few large files from tape is relatively quick: up to 400 MB/sec
- Reading many small files stored on multiple tapes is slow
Minimize tape mounts and positioning activity for best read performance
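The effect of per-file overhead can be sketched with a rough cost model built from the figures above (45 s time to data, 400 MB/s streaming). The 10 s per-file positioning/sync overhead is an illustrative assumption, not a measured HPSS value:

```shell
#!/bin/sh
# Rough cost model: every file read from tape pays positioning overhead,
# while the transfer itself streams at drive speed.
SEEK=45          # seconds: average time to first data (from the slide)
PERFILE=10       # seconds: assumed per-file positioning + sync pause
RATE=400         # MB/s streaming rate (from the slide)
DATA_MB=100000   # 100 GB of data in total

# Case 1: the data stored as one 100 GB bundle
bundle=$((SEEK + DATA_MB / RATE))

# Case 2: the same data as 1000 files of 100 MB each
# (integer shell arithmetic: the sub-second transfer time of each
# small file rounds to zero, which only strengthens the point)
many=$((SEEK + 1000 * (PERFILE + 100 / RATE)))

echo "one bundle:  ${bundle}s"
echo "1000 files:  ${many}s"
```

Even with generous assumptions, the per-file overhead dominates by more than an order of magnitude, which is why the next slide recommends bundling.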
16 Size Matters
Sweet spot
- Tape drives perform best when operating at full rate ("streaming") for long durations
- Large-file IO is best for tape drive performance
- Small-file IO causes frequent pauses and low-speed operations
- File bundles in the 100s of GB currently provide the best performance
Aggregate (bundle) small files for optimal storage
- Use HTAR, GNU tar, and/or zip to bundle groups of small files together to optimize tape and network performance
There is such a thing as too big
- Long-running transfers can be failure-prone
- Files spanning multiple tapes may incur tape mount delays
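A minimal bundling sketch with GNU tar; the directory and file names are illustrative, and in practice the resulting tar file would then be stored with `hsi put` (or created directly in HPSS with HTAR):

```shell
#!/bin/sh
# Bundle a directory of small files into a single tar archive before
# storing it in the archive, so tape sees one large sequential write.
mkdir -p run042
for i in 1 2 3; do
    echo "event $i" > "run042/event_$i.dat"
done

tar -cf run042.tar run042    # create the bundle
tar -tf run042.tar | sort    # list members to verify the bundle
# then, on a NERSC system:  hsi put run042.tar
```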
17 Tape Ordering for Non-optimal Retrieval Workloads
When retrieving data from the archive:
Minimize tape mounts
- Retrieve all files on a particular tape while it is mounted, before retrieving files on subsequent tapes
Minimize tape positioning
- Retrieve files resident on a particular tape in order of tape position
Contact NERSC Consulting for details of the procedure
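The two rules above amount to a two-key sort: group by volume, then order by position within each volume. A sketch with standard `sort`, using a made-up three-column listing (path, volume, position) as a stand-in for real HSI listing output:

```shell
#!/bin/sh
# Order a retrieval list so each tape is mounted once and read in
# position order. Input columns: path, tape volume, tape position.
cat > files.txt <<'EOF'
/home/u/a.tar   EF2092  3
/home/u/b.tar   EF2091  1
/home/u/c.tar   EF2092  1
/home/u/d.tar   EF2091  2
EOF

# key 2 = volume (string sort), key 3 = position (numeric sort)
sort -k2,2 -k3,3n files.txt > ordered.txt
cat ordered.txt
```

Retrieving in `ordered.txt` order mounts EF2091 once, reads its two files in position order, then does the same for EF2092.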
18 Using a Shared Storage Resource
No exclusive access to the archive
- No batch system
- Inefficient use affects performance for everyone
The archive relies on mechanical devices
- Robots, tape drives, tape cartridges
- Limited number of drives and robots to serve requests
Avoid administrative action
- Bundle small files together / avoid excessive writes
- Order your retrievals
- Avoid excessive transfer failures
19 Data Redundancy
Use the 3-2-1 rule for critical data:
- 3 copies of the data (in different places)
- 2 different formats
- 1 copy off-site
Data loss in the NERSC archive is rare, but it can happen:
- No offsite backup for NERSC archive data
- Archive data is single-copy by default
- No trash can: deleted files are gone!
Make multiple copies of data you care about
- If off-site copies are not possible, at least store multiple archive copies several days apart (different tapes)
- Contact NERSC Consulting if dual-copy is a persistent requirement (data charges apply)
We take great care to preserve archival data, but data loss can still happen!
20 Data Integrity
Corruption: any unintended change to data
- Rare, but it can happen: any point during reading, writing, transfer, or processing is subject to unintended change
- A bad network interface, disk controller, computer memory, etc. at any point in a data lifetime can introduce an error
- Corruption is data loss: make multiple copies
Use checksums for critical data
- Record checksums before storing critical data in the archive; check after retrieving
- Many checksum methods are available, including HSI options
- Checksums incur a performance penalty due to CPU load; check with system administrators before running checksums
Interrupted/failed transfers are not data corruption
- Check transfer return codes!
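A minimal record-then-verify sketch using `sha256sum` (one of many possible checksum methods; HSI also has its own checksum options). The file name is illustrative, and the `hsi` steps are shown only as comments:

```shell
#!/bin/sh
# Record a checksum before archiving, verify it after retrieval.
echo "important results" > results.dat

# 1. Record the checksum before storing the file in the archive
sha256sum results.dat > results.dat.sha256
# ... hsi put results.dat ...

# 2. Later, after the file is retrieved again
# ... hsi get results.dat ...
sha256sum -c results.dat.sha256 && echo "checksum OK"
```

Keep the `.sha256` file somewhere other than the archive copy it protects, so a single loss cannot take both.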
21 Asking Questions, Reporting Problems
Contact NERSC Consulting, toll-free
22 Further Reading
NERSC website
- Archive documentation: http://…-and-file-systems/hpss/getting-started/
- Data management strategy and policies: http://…-and-file-systems/policies/
- Accessing HPSS: http://…-and-file-systems/hpss/getting-started/accessing-hpss/
- HSI and HTAR man pages are installed on NERSC compute platforms
Gleicher Enterprises online documentation (HSI, HTAR)
HSI Best Practices for NERSC Users, Balthaser and Hazen, LBNL Report #LBNL-4745E
23 Thank you
24 Tape Ordering for Non-optimal Workloads
Minimize tape mounts
- Retrieve all files on a particular tape while it is mounted, before retrieving files on subsequent tapes
Minimize tape positioning
- Retrieve files resident on a particular tape in order of tape position
Find the volume name and tape position for every file with the HSI ls -X or ls -P arguments, e.g.:
A:/home/n/nickb-> ls -P z.tar z.tar.idx
Each FILE line of the listing reports the file's size, timestamps, tape position, and volume name.
25 Retrieve Files in Tape and Position Order
Generate per-volume file lists in tape position order
- Using HSI, generate lists of files per tape and sort in ascending position
- Put file path names in an HSI command file using HSI get syntax:
bash$ cat ./ef2092.cmd
get z.tar.idx : /home/n/nickb/z.tar.idx
get x.tar : /home/n/nickb/x.tar
get x.tar.idx : /home/n/nickb/x.tar.idx
quit
Retrieve the files in per-volume lists using the HSI in cmd_file syntax:
hsi -q "in ./ef2092.cmd"
Contact NERSC Consulting for the full procedure
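The per-volume command file above can be generated mechanically from listing output. A sketch using a simplified, made-up listing format (FILE, path, volume, position) as a stand-in for real `hsi ls -P` output, which has more columns:

```shell
#!/bin/sh
# Build an HSI command file for one volume, ordered by tape position.
cat > listing.txt <<'EOF'
FILE /home/n/nickb/z.tar.idx EF2092 2
FILE /home/n/nickb/x.tar     EF2092 1
FILE /home/n/nickb/x.tar.idx EF2092 3
EOF

# Sort by position (field 4), then emit 'get local : hpss_path' lines.
sort -k4,4n listing.txt \
  | awk '{ n = split($2, p, "/"); printf "get %s : %s\n", p[n], $2 }' \
  > ef2092.cmd
echo quit >> ef2092.cmd
cat ef2092.cmd

# The command file would then be run as:  hsi -q "in ./ef2092.cmd"
```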