
## Transcription

1 Storage management: talk roadmap
- Why disk arrays?
  - Failures
  - Redundancy
- RAID
- Performance considerations
  - normal and degraded modes
- Disk array designs and implementations
- Case study: HP AutoRAID

StAndrews-arrays, 1

2 Why disk arrays?
- Because stuff happens

3 Failures
- Things break -- in a moderately predictable way in aggregate
  [figure: relative mortality rate vs. time -- the "bathtub curve"]
- Metrics:
  - MTTF: mean time to failure -- a rate, not a period
  - AFR: annual failure rate (better -- but still just the middle of the bathtub)
  - MTTR: mean time to repair
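The MTTF/AFR distinction above can be made concrete. A minimal sketch, assuming a constant (exponential) failure rate -- i.e., the flat middle of the bathtub curve -- with a hypothetical `afr_from_mttf` helper:

```python
import math

def afr_from_mttf(mttf_hours: float) -> float:
    """Annual failure rate implied by an MTTF, assuming a constant
    (exponential) failure rate: AFR = 1 - exp(-hours_per_year / MTTF)."""
    hours_per_year = 24 * 365
    return 1.0 - math.exp(-hours_per_year / mttf_hours)

# A disk quoted at 1,000,000 hours MTTF still fails just under 0.9%
# of the time per year -- MTTF is a rate, not a promised lifetime.
print(f"{afr_from_mttf(1_000_000):.3%}")
```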

4 Definitions
- Reliability R(t) = likelihood the system is up continuously from time 0 to time t
- Availability A(t) = likelihood the system will be up at time t
- Performability P(t,p) = likelihood the system will be providing performance p at time t
  [figure: performance vs. time -- normal performance, then a failure, degraded mode, rebuilding, and back to normal]

5 Solution: introduce redundancy
- Complete copies: replication, mirroring
- Partial redundancy:
  - Hamming codes/ECC: tolerates mangling of elements; unnecessarily strong -- we know when disks are broken
  - Parity: XOR sets (stripes) of data blocks to calculate a parity block; any data block in a stripe can be reconstructed from the others + parity
  [figure: mirror copies; a stripe of data units plus a parity unit (XOR of the rest of the stripe units in the same stripe)]
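A minimal sketch of the XOR-parity idea, assuming fixed-size blocks and a hypothetical `parity` helper -- the same function both computes the parity block and reconstructs a lost data block:

```python
from functools import reduce

def parity(blocks):
    """Parity block = byte-wise XOR of all the given blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byts) for byts in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(stripe)

# "Lose" block 1, then rebuild it from the survivors plus parity:
# A ^ C ^ (A ^ B ^ C) == B.
rebuilt = parity([stripe[0], stripe[2], p])
assert rebuilt == stripe[1]
```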

6 Redundancy helps
- For disks:
  - originally (mid-1980s), these were the most unreliable components
  - nowadays, they're one of the more reliable ones (AFR of 1-2%)
  - but failure rates are proportional to numbers
- Assume: independent failures (warning! danger! caution!)
- With no redundancy: AFR_disks ~= N_disks * AFR_disk
- With one degree of redundancy: AFR_raid ~= AFR_disks(N_disks) * MTTR_disk * AFR_disks(N_disks - 1)
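The two approximations above can be sketched as code -- a hypothetical `afr_array` helper, valid only under the slide's assumptions (independent failures, small rates, MTTR much shorter than a year):

```python
def afr_array(n_disks: int, afr_disk: float, mttr_hours: float) -> float:
    """Rough chance per year that a one-redundant array loses data:
    some first disk fails, and a second of the remaining n-1 disks
    fails during the repair window."""
    afr_all = n_disks * afr_disk                  # AFR_disks(N)
    mttr_years = mttr_hours / (24 * 365)          # MTTR_disk, in years
    afr_rest = (n_disks - 1) * afr_disk           # AFR_disks(N-1)
    return afr_all * mttr_years * afr_rest

# 8 disks at 2% AFR each, 24-hour repair: ~16%/year some disk fails,
# but only ~0.006%/year that a second one dies before the rebuild ends.
print(f"{afr_array(8, 0.02, 24):.6f}")
```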

7 Redundancy hurts, too
- Cost: replicating everything costs 2x as much storage
  - solution: partial redundancy
- Slower updates: 2x as many copies to write to
  - even worse with partial redundancy
- Greater complexity: 80-90% of disk array firmware is error handling
  - lots and lots of configuration choices

8 Storage management: talk roadmap
- Why disk arrays?
  - Failures
  - Redundancy
- RAID
- Performance considerations
  - normal and degraded modes
- Disk array designs and implementations
- Case study: HP AutoRAID

9 RAID
- Originally (like everything else?) invented by IBM
  - striping explored at Univ. Maryland
  - catchy terminology popularized by UC Berkeley: Patterson, Gibson & Katz, "A Case for Redundant Arrays of Inexpensive Disks (RAID)", ACM SIGMOD, 1988
  - comparison point: the single large expensive disk (SLED)
  - goal was to compete with IBM mainframe disks using cheap, unreliable PC drives
- Now: RAID = Redundant Arrays of Independent Disks

10 RAID levels 0, 1, 10
- RAID 0: striping (no redundancy)
  - striping balances the load and allows large transfers to happen in parallel
- RAID 1: aka mirroring (full redundancy)
  - mirroring gives 2x the read bandwidth per disk, but writes have to go to both copies
- RAID 10: striped mirroring (full redundancy)
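A toy sketch of the striping and striped-mirroring address maps -- `raid0_map` and `raid10_map` are hypothetical helpers, with one-block stripe units by default:

```python
def raid0_map(block: int, n_disks: int, stripe_unit: int = 1):
    """Map a logical block to (disk, offset) under plain striping:
    consecutive stripe units round-robin across the disks."""
    unit, within = divmod(block, stripe_unit)
    return (unit % n_disks, (unit // n_disks) * stripe_unit + within)

def raid10_map(block: int, n_mirror_pairs: int):
    """Striped mirroring: stripe across mirror pairs, and every write
    must land on both halves of its pair."""
    disk, off = raid0_map(block, n_mirror_pairs)
    return [(2 * disk, off), (2 * disk + 1, off)]  # primary + mirror

# Logical block 5 over 4 disks lands on disk 1, offset 1.
print(raid0_map(5, 4), raid10_map(5, 2))
```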

11 RAID level 3 -- parity-protected striping
- RAID 3: byte-interleaved
  - all disks read/written in lock step
  - parity on a dedicated disk (ok: it sees the same load as the remainder)
  - great for high-bandwidth, large transfers; otherwise poor
- XOR parity is a single-bit ECC that can correct single-bit erasures
  - if a disk is missing, XORing the others will give you its contents
  [figure: a stripe of data units plus a parity unit (XOR of the rest of the stripe units in the same stripe)]

12 RAID level 4 -- block-interleaved stripes
- RAID 4: block-interleaved
  - independent reads/writes possible
  - parity on a dedicated disk (hot spot!)
- Updating parity is expensive for small writes
  - one effect: write-caching becomes especially important
  - 1. Read old data & parity; 2. Compute new parity (XOR); 3. Write new data + parity
  - => 4x I/O operations per small write
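The small-write update follows directly from the XOR algebra: new parity = old parity XOR old data XOR new data, so only the old data and old parity need to be read, not the whole stripe. A minimal sketch with a hypothetical `small_write` helper:

```python
def small_write(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """RAID 4/5 small-write parity update.
    XORing out the old data and XORing in the new data gives the same
    parity as re-XORing the whole stripe.
    Cost: 2 reads (old data, old parity) + 2 writes = 4 I/Os."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))
```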

13 RAID level 5
- RAID 5: rotated-parity-protected striping, to balance the load
  [figure: parity units (XOR of the rest of the stripe units in the same stripe) rotated across the disks]
- Rotating the parity balances the parity load across all the disks; striping allows fast large transfers
- RAID 5 is the configuration of choice for all but performance-intensive loads
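One possible parity-rotation rule, as a sketch -- this is just one layout (rotating the parity disk backwards each stripe), not the only scheme arrays use in practice:

```python
def raid5_parity_disk(stripe: int, n_disks: int) -> int:
    """Which disk holds the parity for a given stripe: shift the
    parity one disk to the left per stripe, wrapping around, so no
    single disk becomes a parity hot spot."""
    return (n_disks - 1 - stripe) % n_disks

# With 4 disks, stripes 0..4 put parity on disks 3, 2, 1, 0, 3, ...
print([raid5_parity_disk(s, 4) for s in range(5)])
```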

14 RAID levels
- Currently accepted RAID levels (note: not really levels, just a list):
  - 0: no redundancy
  - 1: full copy (mirrors)
  - 10: striped mirrors
  - 2: Hamming-code/ECC (not used)
  - 3: byte-interleaved parity
  - 4: block-interleaved parity (more useful variant of RAID 3)
  - 5: rotated block-interleaved parity
  - 6: double parity ("P+Q parity" -- rare)

15 RAID: some tricky points
- Updates in flight at the time of a power failure can corrupt the parity
  - either: an expensive parity rebuild on power-up
  - or: keep a non-volatile intentions log
- Reliability calculations based on disks alone are bogus
  - the power source is the single largest problem
  - then controller failures, cooling, backplane, connections, ...
  - redundancy helps here, too
  - nobody likes to talk about software

16 Storage management: talk roadmap
- Why disk arrays?
  - Failures
  - Redundancy
- RAID
- Performance considerations
  - normal and degraded modes
- Disk array designs and implementations
- Case study: HP AutoRAID

17 Improving performance: RAID 4/5
- Floating parity [Menon92]
  - write parity anywhere -- saves one revolution
  - 1. Read old data & parity (claim: old data is already cached); 2. Compute new parity (XOR); 3. Write new data + parity (the parity write is free)
  - => ~2x I/O operations per small write

18 Improving performance: RAID 4/5
- Parity logging [Stodolsky94]
  - aggregate parity updates into an append-only log
  - propagate the log in the background
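A toy sketch of the parity-logging idea -- `ParityLog` is a hypothetical class, and a real implementation would append to NVRAM or a log disk rather than a Python list:

```python
class ParityLog:
    """Instead of updating parity in place on every small write, append
    the XOR delta (old data ^ new data) to a log; a background pass
    later folds the accumulated deltas into the parity blocks."""

    def __init__(self):
        self.log = []  # append-only (parity_block_id, xor_delta) records

    def record_write(self, parity_id, old_data: bytes, new_data: bytes):
        delta = bytes(o ^ n for o, n in zip(old_data, new_data))
        self.log.append((parity_id, delta))

    def propagate(self, parity_blocks: dict):
        """Background pass: apply every logged delta, then empty the log."""
        for pid, delta in self.log:
            parity_blocks[pid] = bytes(
                p ^ d for p, d in zip(parity_blocks[pid], delta))
        self.log.clear()
```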

19 Improving performance: AFRAID
- A Frequently Redundant Array of Independent Disks [Savage&Wilkes96]
  - live (a little) dangerously...
  - update parity opportunistically in the background
  - gives a smooth tradeoff between availability and performance

20 Improving performance: choice of stripe size
- Optimum size depends on:
  - read:write mix
  - data layout (RAID 1 vs RAID 5)
  - concurrency level
  - back-end disk characteristics (e.g., track size)
- Choices are a daunting problem for sysadmins: [Chen90] for RAID 1, [Chen95] for RAID 5

21 Degraded-mode performance
- Reading RAID 4/5 when a disk is broken is expensive
  - 1. Read all surviving data & parity; 2. Compute the missing data (XOR)
  - => all surviving disks are involved

22 Improving RAID 5 degraded-mode performance
- Chained declustering [Hsiao+DeWitt90]
  - spread the second copy out over other disks
  - when the primary copy breaks, each secondary disk takes up only a portion of the slack
  [figure: primary copy; secondary copy spread over the other drives]

23 Improving RAID 5 degraded-mode performance
- Declustering [Muntz90]
  - make stripes narrower than the whole array: only stripes that include the broken disk pay the performance penalty
  - each stripe uses a different set of disks
  - some complexity in the mappings that do this nicely, but "close enough" works just fine
  - better degraded-mode performance, at the cost of more disks; stripes are smaller => more parity
- Improvements: approximate block designs; Prime/Relpr [Alvarez98] -- better-spread large-transfer load
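A toy declustered layout, assuming the array is small enough to enumerate every disk subset -- real designs use balanced block designs rather than brute-force combinations, but as the slide notes, "close enough" works fine:

```python
from collections import Counter
from itertools import combinations

def declustered_layout(n_disks: int, stripe_width: int):
    """Toy declustering: cycle through every stripe_width-sized subset
    of the disks, so each stripe uses a different set and one failed
    disk's rebuild load spreads over the whole array."""
    return list(combinations(range(n_disks), stripe_width))

layout = declustered_layout(5, 3)
counts = Counter(d for stripe in layout for d in stripe)
hit = sum(1 for stripe in layout if 0 in stripe)

# 10 stripes; every disk appears in exactly 6 of them; losing disk 0
# degrades only those 6 stripes, not every stripe in the array.
print(len(layout), dict(counts), hit)
```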

24 Recovery/rebuild after a disk failure
- Reduce MTTR: keep an online spare
  - e.g., XP256: up to 4 spares per rack of 64 drives
- Distributed sparing [Menon92] makes the spare useful
  - spread its contents across all the disks
  - effectively adds an extra disk's performance to the array

25 Recovery/rebuild after a disk failure
- Reconstruction after failure: sweep across the data -- read every stripe, rewrite parity/missing data
  - poor performance if done too simply: data transfers are too small; too much blocking
  - better: disk-oriented reconstruction [Holland93] -- keep >= one outstanding read for each disk
  - can also piggyback updates on foreground activity; requires keeping a map of reconstructed stripes
  - big tradeoff: faster recovery or slower foreground activity?

26 Storage management: talk roadmap
- Why disk arrays?
  - Failures
  - Redundancy
- RAID
- Performance considerations
  - normal and degraded modes
- Disk array designs and implementations
- Case study: HP AutoRAID

27 Array implementations
- In software
  - cheapest, but consumes memory (and CPU) cycles
  - usually mirroring, in an OS Logical Volume Manager
- In a host bus adapter
  - common in PC servers
  - the big win is from the read/write cache
  - fault handling is very limited
  [figure: host interface (PCI) -> array controller with cache]

28 Array implementations
- Mid-range array (e.g., HP FC60)
  - sometimes separate controller and disk boxes
  - up to 1-2TB disk, 0.5GB cache RAM
  - can saturate a 100MB/s FibreChannel link; O(10,000 I/Os per second)
  - packaging: the whole array is in a single box, or the array controller is in a separate box
  [figure: host interfaces -> dual array controllers with caches, redundant power]

29 Array implementations
- High-end array: integrated box (e.g., HP XP256; EMC Symmetrix)
  - up to a few TB of disk, up to a few GB of cache, up to a few $million
- What you pay for:
  - lots of caching (vital to performance)
  - multiple host interface types -- e.g., HP XP256: SCSI, FibreChannel, and ESCON
  - quality power distribution, cabling, cooling, vibration isolation
  - phone-home, remote management, support infrastructure
  - can saturate a few 100MB/s FibreChannel links; O(50,000+ I/Os per second)

30 Disk array architecture -- high end
- Packaging ranges from the array controller in a separate box (disks in rack-mountable trays) to one of several 6' racks
  [figure: host interfaces -> front-end controllers -> caches with parity XOR engines -> back-end controllers, all with redundant power]

31 Disk arrays: Logical UNits
- Most arrays provide multiple LUNs (SCSI Logical UNits)
  - one or more disk drives bound together into a common layout
  - different LUNs can have different sizes, different layouts
  - LUN 0 is often special (used for controlling the array as a whole)
  - at the low end: 8-32 LUNs; at the high end: thousands of LUNs
  - SCSI limit: 4096 LUNs, from a 12-bit LUN id
- A few common variations (there are many more):
  - parts of disks instead of whole disks
  - LUNs may be named relative to ports, not uniquely
  - LUNs can have different caching behavior

32 Summary
- Disk arrays use redundancy to protect against disk (and other storage component) failures
  - rest easy: the storage system is no longer the main problem!
- They can also provide performance benefits
  - caching can easily provide x performance boost
- But at the cost of lots of complexity
  - algorithms
  - configuration choices
  - implementation options

33 Storage management: talk roadmap
- Why disk arrays?
  - Failures
  - Redundancy
- RAID
- Performance considerations
  - normal and degraded modes
- Disk array designs and implementations
- Case study: HP AutoRAID


### Why disk arrays? CPUs speeds increase faster than disks. - Time won t really help workloads where disk in bottleneck

1/19 Why disk arrays? CPUs speeds increase faster than disks - Time won t really help workloads where disk in bottleneck Some applications (audio/video) require big files Disk arrays - make one logical

### Module: Storage Systems

Module: Storage Systems Sudhakar Yalamanchili, Georgia Institute of Technology (except as indicated) Storage technologies and trends Section 6.2 Reading http://www.storagereview.com/map/lm.cgi/areal_de

### COMP 7970 Storage Systems

COMP 797 Storage Systems Dr. Xiao Qin Department of Computer Science and Software Engineering Auburn University http://www.eng.auburn.edu/~xqin xqin@auburn.edu COMP 797, Auburn University Slide 3b- Problems

### Non-Redundant (RAID Level 0)

There are many types of RAID and some of the important ones are introduced below: Non-Redundant (RAID Level 0) A non-redundant disk array, or RAID level 0, has the lowest cost of any RAID organization

### an analysis of RAID 5DP

an analysis of RAID 5DP a qualitative and quantitative comparison of RAID levels and data protection hp white paper for information about the va 7000 series and periodic updates to this white paper see

### Storage. The text highlighted in green in these slides contain external hyperlinks. 1 / 14

Storage Compared to the performance parameters of the other components we have been studying, storage systems are much slower devices. Typical access times to rotating disk storage devices are in the millisecond

### EMC CLARiiON RAID 6 Technology A Detailed Review

A Detailed Review Abstract This white paper discusses the EMC CLARiiON RAID 6 implementation available in FLARE 26 and later, including an overview of RAID 6 and the CLARiiON-specific implementation, when

### Lenovo RAID Introduction Reference Information

Lenovo RAID Introduction Reference Information Using a Redundant Array of Independent Disks (RAID) to store data remains one of the most common and cost-efficient methods to increase server's storage performance,

### Computer Organization II. CMSC 3833 Lecture 61

RAID, an acronym for Redundant Array of Independent Disks was invented to address problems of disk reliability, cost, and performance. In RAID, data is stored across many disks, with extra disks added

### Tandem. Tandem. Tandem. Background Tandem NonStop Systems - Cyclone. Commercial Database Systems with very long MTTF Modularity. from.

from to HP 2013 A.W. Krings 1 Background NonStop Systems - Cyclone Commercial Database Systems with very long MTTF Modularity» units of service, failure, diagnosis, repair, growth» fault containment regions»

### A Tutorial on RAID Storage Systems

A Tutorial on RAID Storage Systems Sameshan Perumal and Pieter Kritzinger CS04-05-00 May 6, 2004 Data Network Architectures Group Department of Computer Science University of Cape Town Private Bag, RONDEBOSCH

### CHAPTER 4 RAID. Section Goals. Upon completion of this section you should be able to:

HPTER 4 RI s it was originally proposed, the acronym RI stood for Redundant rray of Inexpensive isks. However, it has since come to be known as Redundant rray of Independent isks. RI was originally described

### Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations A Dell Technical White Paper Database Solutions Engineering By Sudhansu Sekhar and Raghunatha

### Computer Club Talk Series. Storage Systems. Sponsored by Green Hills Software

Computer Club Talk Series Storage Systems Sponsored by Green Hills Software Green Hills make the world's highest performing compilers, most secure real time operating systems, revolutionary debuggers,

### Storage Technologies - 2

Antoniu Pop 1 The University of Manchester COMP 25212 Storage Technologies - 2 Antoniu Pop antoniu.pop@manchester.ac.uk 16 April 2015 Antoniu Pop 2 The University of Manchester COMP 25212 Storage Technologies

### DELL RAID PRIMER DELL PERC RAID CONTROLLERS. Joe H. Trickey III. Dell Storage RAID Product Marketing. John Seward. Dell Storage RAID Engineering

DELL RAID PRIMER DELL PERC RAID CONTROLLERS Joe H. Trickey III Dell Storage RAID Product Marketing John Seward Dell Storage RAID Engineering http://www.dell.com/content/topics/topic.aspx/global/products/pvaul/top