Large Scale Storage Solutions for Bioinformatics and Genomics Projects
Phillip Smith
Unix System Administrator, Bioinformatics Group
The Center for Genomics and Bioinformatics
Indiana University, Bloomington

Overview
- Environment as it is today
  - Types of data being stored and typical dataset sizes
  - Where and how the data is being stored
  - Current storage capabilities
- Problem areas
  - De-centralized vs. centralized storage
  - Data availability and redundancy
  - Backups
  - Long-term data archiving and future retrieval
- Research and development, and future implementation
  - Evaluate the new technology paradigms (SAN, NAS, etc.)
  - Set up a test bed to try these technologies in our environment
  - Enabling of new software services, like electronic lab notebooks
- Summary
  - Review
  - Related examples (1 TB SAN in CS, Whitehead)
- Questions and comments

Bits and Bytes
A quick overview of computer storage semantics:
- One bit (b) = 0 or 1
- One byte (B) = 8 bits
- One kilobyte (KB) = 1024 bytes (2^10)
- One megabyte (MB) = 1024 KB (2^20)
- One gigabyte (GB) = 1024 MB (2^30)
- One terabyte (TB) = 1024 GB (2^40)
- One petabyte (PB) = 1024 TB (2^50)
- Beyond this are exa (2^60), zetta (2^70), and yotta (2^80)
Relatively few sites are managing more than a couple of petabytes.

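The powers-of-two ladder above can be captured in a few lines. This is an illustrative sketch, not part of the original talk; it uses the binary (1024-based) convention the slide uses.

```python
# Binary storage units: each step up the ladder is a factor of 2**10 = 1024.
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def to_bytes(value, unit):
    """Convert a value in a binary unit (KB = 2**10 bytes, etc.) to bytes."""
    return value * 1024 ** UNITS.index(unit)

def human(n_bytes):
    """Render a byte count in the largest binary unit with a value >= 1."""
    i = 0
    while n_bytes >= 1024 and i < len(UNITS) - 1:
        n_bytes /= 1024
        i += 1
    return f"{n_bytes:g} {UNITS[i]}"

print(to_bytes(1, "TB"))   # 1099511627776, i.e. 2**40
print(human(2 ** 50))      # 1 PB
```

Note that `human(to_bytes(v, u))` round-trips any value on the ladder, which makes the two helpers easy to sanity-check against each other.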
Putting the Numbers into Perspective
Some real-world examples:
- 200 petabytes: all printed material
- 2 petabytes: all U.S. academic research libraries
- 400 terabytes: National Climatic Data Center (NOAA) database
- 20 terabytes: the printed collection of the U.S. Library of Congress
- 2 terabytes: an academic research library
- 1 terabyte: 50,000 trees made into paper and printed upon
- 100 gigabytes: a floor of academic journals
- 4 gigabytes: one movie on a DVD
- 5 megabytes: the complete works of Shakespeare

That's a Whole Lot of Data
- An estimated 12+ exabytes of data had been generated by the year 2000, representing the entire history of humanity.
- In 2002, the estimate stands at 16+ exabytes. By 2005, it will be nearly 24 exabytes.
- Growing at around two exabytes of new data per year, this equates to roughly 250 megabytes for every man, woman, and child on earth.
- Carved from this is roughly 11,285 terabytes of email, 576,000 terabytes worth of phone calls (in the U.S. alone), and over 150,000 terabytes of snail mail per year.
- Nucleotide sequences are being added to databases at a rate of more than 210 million base pairs (210+ MB) per year, with database content doubling in size approximately every 14 months.
- Statistics are from a 2001 paper by researchers at UC Berkeley.

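The "doubling every 14 months" figure implies compound growth, which is easy to project forward. A quick sketch (not from the talk); the 80 GB starting point is GenBank's approximate size as quoted elsewhere in these slides:

```python
# Compound growth with a fixed doubling period: after m months the size has
# doubled m/14 times.

def projected_size(start_gb, months, doubling_months=14):
    """Projected size after `months` of growth doubling every `doubling_months`."""
    return start_gb * 2 ** (months / doubling_months)

for years in (1, 3, 5):
    gb = projected_size(80, years * 12)
    print(f"after {years} yr: ~{gb:.0f} GB")
```

At this rate an 80 GB database grows roughly twenty-fold in five years, which is the heart of the capacity-planning problem the rest of the talk addresses.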
Environment

Data Types and Sizes
Research data comes in all shapes and sizes...
- Flat file and relational DB datasets
  - GenBank (22 billion base pairs, roughly 80 GB)
  - EMBL (31 billion base pairs, roughly 100 GB)
  - SWISS-PROT (43.6 million amino acids, roughly 416 MB)
  - PIR (96 million amino acids, roughly 645 MB)
- Dataset indexes for various applications
  - GenBank converted for GCG usage is today approximately 52 GB
- Microarray and derived numerical/sequence data
  - Microarray images: each TIFF is roughly 5-10 MB, with 2-3 images per hybridization (the DGRC will generate approximately 15 GB of images per year)
  - Associated meta-data for 15,000 genes x 500 hybridizations is roughly 1 TB/yr
- Other types of data, such as video
  - E. Martins' lab currently holds 61 GB of lizard video, which will more than double by project completion. The source data is stored as QuickTime files.

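The microarray figures above can be sanity-checked with a little arithmetic. The inputs below come from the slide; the per-measurement meta-data size is derived here, not stated in the talk:

```python
# Back-of-envelope check of the microarray storage estimates.
GENES = 15_000
HYBRIDIZATIONS = 500
IMAGES_PER_HYB = 2.5      # midpoint of "2-3 images per hybridization"
MB_PER_IMAGE = 7.5        # midpoint of "roughly 5-10 MB" per TIFF

image_gb_per_year = HYBRIDIZATIONS * IMAGES_PER_HYB * MB_PER_IMAGE / 1024
records = GENES * HYBRIDIZATIONS          # 7.5 million gene measurements/yr
kb_per_record = 1 * 1024 ** 3 / records   # 1 TB/yr expressed in KB, per record

print(f"images: ~{image_gb_per_year:.0f} GB/yr at the midpoints")
print(f"meta-data: ~{kb_per_record:.0f} KB per gene measurement")
```

The midpoints give roughly 9 GB/yr of images (the slide's ~15 GB assumes the upper end of the ranges), and the 1 TB/yr meta-data figure works out to about 143 KB per gene per hybridization.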
Data Types and Sizes
We also need to consider other sources...
- Papers and articles
  - The average scientific paper/article as a PDF file is 2 MB, and a typical research scientist stores a good number of these
- Email
  - The average incoming mailbox size is 10 MB, which doesn't include archived email
- Personally generated files
  - MS Word documents (average size is 3 MB, with 10 files per person)
  - MS PowerPoint presentations (average size is 6 MB, with 5 files per person)
  - Miscellaneous images and other files such as .gif, .jpg/.jpeg, text, etc. (average size is 200 KB, with 300 files per person)
Assuming these numbers are on the conservative side, the average researcher will amass between 3 and 5 GB of data, either unique or copied. These figures were generated by randomly sampling 500 user accounts on the sunflower system.

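The per-user categories that come with complete numbers can be tallied directly. A sketch (the per-person paper/PDF count is not given above, so that category is left out, which is why the total falls well short of the 3-5 GB estimate):

```python
# Tallying the fully-specified per-researcher averages, in MB.
per_user_mb = {
    "email_inbox": 10,             # average incoming mailbox
    "word_docs":   3 * 10,         # 3 MB average, 10 files per person
    "powerpoint":  6 * 5,          # 6 MB average, 5 files per person
    "misc_files":  200 * 300 / 1000,  # 200 KB average, 300 files per person
}
total_mb = sum(per_user_mb.values())
print(f"~{total_mb:.0f} MB per researcher from these categories alone")
```

The gap between this ~130 MB and the slide's 3-5 GB per researcher is carried by the paper collections, archived email, and copied data.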
How Does the Data Get Stored?
- On removable media
  - Floppy disks (1.44 MB & 2 MB)
  - Zip disks (100 MB & 250 MB)
  - Compact Discs (650-700 MB)
  - Digital Video Disks (2 GB - 4 GB)
- On hard disk drives
  - IDE (320 GB drives announced and expected in a few months)
  - SCSI (18-180 GB)
- On filesystems
  - FAT, FAT32, NTFS (Windows)
  - UFS, XFS, JFS, EXT2, EXT3, ... (Unix)
  - HFS, HFS+ (MacOS)
- Via file-sharing protocols
  - CIFS (Windows)
  - NFS (Unix, MacOS X)
  - AppleShare (MacOS 9 and earlier)

Where Does the Data Get Stored?
The short answer is: all over the place...

Current CGB/Biology Storage Infrastructure
What we can store today:
- The sunflower system in its current form can store approximately 26 GB of personal data, 10 GB of email, and 175 GB of research databases (e.g., GenBank). Within the next 3 months, we will bring nearly 1 TB of new storage capacity online.
- CGB's new Laboratory Information Management System (LIMS), as configured, can store 175 GB of research data.
- David Kehoe's pondscum project server, as configured, can store 175 GB of research data.
- Other CGB research machines, such as those serving up Flybase, Bio-Mirror, and IUBio, have a combined storage capacity of 1 TB.
- Total remaining Biology research storage (capacity on desktops) is guesstimated at around 6 TB (300 computers x 20 GB drives).

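Summing the capacities quoted above gives a rough departmental total. The numbers come from this inventory; the grouping into a table is mine:

```python
# CGB/Biology storage inventory, in GB (1 TB taken as 1024 GB).
capacity_gb = {
    "sunflower (personal + email + databases)": 26 + 10 + 175,
    "new storage coming online":                1024,
    "LIMS":                                     175,
    "pondscum project server":                  175,
    "other research servers (Flybase, etc.)":   1024,
    "desktops (300 machines x 20 GB)":          300 * 20,
}
total_tb = sum(capacity_gb.values()) / 1024
print(f"~{total_tb:.1f} TB across CGB/Biology")
```

Notably, about 70% of that total sits on individual desktops, which is exactly the de-centralized storage the next slides argue against.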
Current UITS Storage Infrastructure
What UITS can store today:
- The Common File System (CFS) service has a total of 1.5 TB of online (hard disk based) storage, and is tied into the MDSS system (which is used to back that data up).
  - CFS is meant for small to medium storage requirements. For instance, you might use it to store presentation files you'll want to access from a conference.
  - By default you are given a 100 MB quota, but researchers can request up to a few GB depending on specific needs.
- The Massive Data Storage System (MDSS) is based on robotic tape libraries, with a combined storage capacity over 500 TB (120 TB located at IUB, 360 TB located at IUPUI).
  - MDSS is meant for large-scale, long-term archival storage. Faculty, staff, and graduate students are given default quotas of 500 GB.
  - If your project demands more, UITS will negotiate a higher quota on a per-project/cost-share basis.

De-centralized vs. Centralized Storage
What are the differences?
- De-centralized storage
  - Direct Attached Storage (DAS), where the storage device(s) connect to an individual machine
  - Hard to manage, because management must be done directly from the machine to which the storage is attached
  - Doesn't scale well (you can only attach so many devices to one machine)
  - Hard to share this storage with other machines
  - Examples include a desktop's hard drive, or several servers with a disk array attached to each one
- Centralized storage
  - All the storage is connected to one machine or group of machines, and/or to some type of network fabric
  - Easier to manage in the long run, but more complex to implement initially
  - Examples include a dedicated file-sharing server

Data Availability and Redundancy
We must make sure that the data is always available, and fault-tolerant.
- Availability
  - Murphy's law is always in full effect. Machines and storage media will ultimately fail at some point, and we can't always predict when problems will occur.
  - Since the CGB provides services to the IU Bloomington community (e.g. GeneTraffic, BioWeb) and to the world at large (e.g. Bio-Mirror, Flybase, etc.), we must ensure that the data we store is available 24x7x365.
  - There is an expectation that research and personal data should also be available 24x7x365. People get ANGRY when they can't get their email!
- Redundancy
  - The answer is to plan for data redundancy, which generally means we mirror data across two or more drives.
  - This doubles the cost and halves our available storage capacity, but gives us peace of mind and our customers more reliable service.
  - We're relying on disk drive redundancy, and less so on system redundancy, to protect the data (i.e., if an important server dies, we don't have drop-in replacements yet).
  - We have no off-site redundancy. If Jordan or Myers is hit by flood or fire, everything is gone.

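The "doubles the cost, halves the capacity" trade-off of mirroring is simple to quantify. A sketch; the drive size and price below are illustrative assumptions, not figures from the talk:

```python
import math

# RAID-1 mirroring: every data drive gets a twin, so usable capacity is half
# of raw capacity and drive count (hence cost) doubles.

def mirrored(usable_gb, drive_gb, price_per_drive):
    """Drives and cost needed to provide `usable_gb` of mirrored storage."""
    drives = math.ceil(usable_gb / drive_gb) * 2
    return drives, drives * price_per_drive

# Hypothetical example: 1 TB usable on 180 GB drives at $900 apiece.
drives, cost = mirrored(usable_gb=1024, drive_gb=180, price_per_drive=900)
print(f"1 TB mirrored: {drives} drives, ${cost}")
```

The same arithmetic explains the slide's point: whatever raw capacity is purchased, only half of it is available to users once redundancy is in place.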
Backups
You are responsible for backing up your own data.
- CGB and Biology core/research servers
  - We currently offer no guarantee, implied or otherwise, that up-to-date tape backups will be available for all data.
  - We DO make a best effort to back up critical data on our servers, but currently rely on disk redundancy for most data.
  - Quite frankly, our existing backup infrastructure is completely inadequate, and we need to do (and will do) better.
- UITS services
  - UITS makes backups of all its core servers, but only provides recovery for up to one month.
  - The data you store in your CFS account has a one-day backup by default. You can request that UITS restore files from up to one month back, but it will cost you $15 per incident.
  - UITS cannot restore files from the MDSS.
- Laptops, desktops, and workstations
  - You are responsible for backing up data on your laptop, desktop, or workstation.
  - There should be a campus-wide or departmental backup system, but it doesn't exist yet.

Long-Term Data Archiving and Future Retrieval
We can't get rid of anything.
- We have enough storage capacity to handle existing data, and new data that is being generated today. But we need to address the long-term storage issues; that is, we must be able to archive today's data tomorrow, while providing enough capacity for tomorrow's new data.
- To illustrate part of the problem, there are requirements from federal funding agencies, and various laws such as the Freedom of Information Act (FOIA), which require data dissemination for an indefinite period of time.
- We have online storage and offline storage. Online means that the data is instantly available, while offline means it must first be retrieved from tape or other media, such as CD.
- Offline storage makes future data retrieval difficult, so it would be better to have plenty of online storage.

Research and Development

Evaluating New Technologies
What other people are using to solve these problems:
- Storage Area Network (SAN)
  - Centralized data storage model.
  - All servers connect to the storage devices via a network, similar to the way your computer connects to the campus ethernet.
  - Generally this network can move data at extremely high speeds, upwards of 200 MB per second. As a comparison, you can copy files between machines over the campus network at 1 MB to 10 MB per second (theoretical maximum).
  - Highly scalable: we can easily add more storage as we need it, without worrying about how many devices can attach to one machine.
- Network Attached Storage (NAS)
  - Provides the ability for desktops and workstations to access data on the SAN natively and transparently.
- Disk-to-disk backups
  - SAN and NAS technologies enable us to easily back up large amounts of data quickly.
  - Archived data can remain online at all times.
  - Future retrieval could be as simple as going to a web page and selecting your files.

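The speed comparison above is easiest to feel as transfer time for a real dataset. The speeds are from this slide; the 52 GB figure is the GCG-formatted GenBank size quoted earlier in the talk:

```python
# Time to copy a dataset at various sustained transfer rates (GB = 1024 MB).

def hours_to_copy(size_gb, mb_per_sec):
    """Hours to move `size_gb` gigabytes at a sustained `mb_per_sec` rate."""
    return size_gb * 1024 / mb_per_sec / 3600

for label, speed in [("campus network, worst case (1 MB/s)", 1),
                     ("campus network, best case (10 MB/s)", 10),
                     ("SAN fabric (200 MB/s)", 200)]:
    print(f"52 GB over {label}: {hours_to_copy(52, speed):.2f} h")
```

Copying the GCG dataset drops from the better part of a workday on the campus network to a few minutes over the SAN fabric, which is what makes disk-to-disk backup of large datasets practical.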
Implementation
Where do we go from here?
- Set up a test bed
  - Convince a few SAN/NAS vendors to loan us the hardware, so that we can test this out in our existing environment.
  - Try out some of our routine day-to-day storage tasks, and purposefully try to break things.
  - Repeat the process until we have a workable solution.
- Identify projects and funding sources
  - We need feedback from everyone regarding anticipated project storage needs.
  - Instead of allocating funds for direct attached storage (such as an extra hard drive in a desktop or workstation), start including money for a chunk of the SAN.
- Put it into production
  - Move existing servers and data into the SAN fabric.
- Create and offer new services
  - Department-wide desktop/workstation file sharing and backup.
  - Electronic lab notebooks.
  - What else would you like?

Summary
Simply put, we can NEVER have enough storage!
- It may seem like we have enough storage now to last for several years, but there will always be more data to store. If the storage exists, you can be guaranteed that someone or something will find a way to fill it up.
- Research databases continue to grow at an amazing rate. It's not enough to cope with that alone; we still have to come up with enough temporary storage in which to copy, index, and store multiple versions of these multi-gigabyte datasets. Using GCG as an example again, today we have to manipulate over half a terabyte of new and existing data every three months or so. And that's just for one application!
- With the increased focus on high-throughput sequencing, microarrays, LIMS, electronic lab notebooks, etc., a new ripple has formed in bioinformatics and genomics storage patterns. Pretty soon we'll have crashing waves... and the water will need to go somewhere.
- In short, we can only see the tip of the iceberg, but it goes much deeper.