RAID Basics Training Guide




Discover a Higher Level of Performance. RAID matters. Rely on Intel RAID.

Table of Contents
1. What is RAID?
2. RAID Levels: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 0+1, RAID 1E, RAID 50, RAID 60, JBOD
3. Management Console
4. Summary

What is RAID? RAID stands for Redundant Array of Independent Drives (or Disks). RAID gathers individual disk drives into a cohesive set, often called a RAID group or RAID array, that can be operated in unison. RAID offers several benefits. First, higher levels of data protection can be achieved through redundancy: with mirroring, the contents of one drive are duplicated on another. Second, drive capacity is aggregated, so larger storage volumes can be created. Third, performance improves through disk striping, in which the contents of a file are written to and read from several drives in the RAID group at once. Over the years, RAID variations have been identified by their RAID level; there are multiple levels of RAID, and each has specific cost, performance, and fault-tolerance characteristics. Each level is designated by a number. It is important to note that this numbering is not intuitive and does not indicate which RAID level is better, so it comes back to understanding which RAID level will work best for your customer's environment. Let's look closer at some of these RAID levels.

RAID 0 RAID Level 0 is simply striping. Striping takes a series of drives, groups them so they are presented as a single device to the host, and stripes the data across all of the drives to improve performance. It delivers a high data transfer rate at relatively low cost. The disadvantage is that there is no redundancy or high availability. This does not mean that this RAID level is not used; in fact, it is used with many applications where performance, not data redundancy, matters to your customer. But remember: if one drive fails, the entire array fails, because part of the data is missing with no way to recover it other than restoring from a backup. Disk striping enhances performance because multiple drives are accessed simultaneously, but disk striping does not provide data redundancy. It is recommended that you keep stripe sizes the same across RAID arrays. For example, in a three-disk system using only disk striping, Segment 1 is written to Disk 1, Segment 2 is written to Disk 2, and so on. Data transfer rates can approach three times that of a single disk or JBOD, because no redundancy is required and reads and writes can be handled simultaneously by each disk.
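The round-robin segment-to-disk mapping described above can be sketched in Python. This is a hypothetical illustration of the layout, not controller firmware; the function name and counts are invented for the example:

```python
def raid0_layout(num_segments, num_disks):
    """Map each data segment to the disk that holds it (round-robin striping)."""
    layout = {disk: [] for disk in range(num_disks)}
    for segment in range(num_segments):
        # Segment 0 goes to Disk 0, Segment 1 to Disk 1, and so on, wrapping around.
        layout[segment % num_disks].append(segment)
    return layout

# Six segments striped across three disks: each disk holds two segments,
# so all three disks can service one large read or write in parallel.
print(raid0_layout(6, 3))  # {0: [0, 3], 1: [1, 4], 2: [2, 5]}
```

Because consecutive segments land on different disks, a sequential transfer keeps every spindle busy at once, which is where the roughly n-fold throughput gain comes from.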

RAID 1 With the mirroring used in RAID 1, data written to one disk is simultaneously written to another; if one fails, the contents of the other can be used to run the system and reconstruct the failed disk. The primary advantage of disk mirroring is that it provides 100% data redundancy, but it is expensive because each drive in the system must be duplicated. Because the contents of the disk are completely written to a second disk, it does not matter if one of them fails: both drives contain the same data at all times, and either drive can act as the operational drive. The RAID 1 illustration shows data being written to two drives at the same time, creating an exact duplicate, or mirror, of the data. If a drive fails, the controller switches to the mirror drive with no lapse in user accessibility. Extensions to RAID 1: One advantage of the LSI* MegaRAID RAID 1 algorithms utilized by Intel RAID is a technology called tier reads. During heavy read loads, the controller load-balances read requests between both mirrored drives, enhancing overall read performance. Traditional RAID 1 allows configuration of only two hard drives, and RAID 10 configurations of up to 16 drives. With MegaRAID extensions to RAID 1, bandwidth-intensive applications can now benefit from much larger disk configurations: new enhancements allow up to 32 drives per RAID 1 volume, improving overall system capacity and storage performance.
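The two behaviors described above, duplicated writes and load-balanced reads, can be sketched as a toy model. This is a simplified, hypothetical stand-in (a Python dict per member disk, simple round-robin reads), not the controller's actual tier-reads algorithm:

```python
class Raid1Mirror:
    """Toy RAID 1 volume: every write lands on both members,
    and reads alternate between them to spread heavy read loads."""

    def __init__(self):
        self.members = [{}, {}]   # two disks, modeled as LBA -> data maps
        self._turn = 0

    def write(self, lba, data):
        for disk in self.members:   # the write is duplicated to both drives
            disk[lba] = data

    def read(self, lba):
        # Round-robin between mirrors: either copy is valid, so alternating
        # requests roughly doubles aggregate read throughput.
        data = self.members[self._turn][lba]
        self._turn ^= 1
        return data

    def survive_failure(self, failed_index):
        # Either member alone holds a complete copy of the volume.
        return dict(self.members[1 - failed_index])

vol = Raid1Mirror()
vol.write(0, b"boot")
vol.write(1, b"data")
assert vol.read(0) == vol.read(0) == b"boot"              # both members agree
assert vol.survive_failure(0) == {0: b"boot", 1: b"data"}  # full copy survives
```

The 50% capacity cost is visible directly: the same data is stored twice, once per member.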

RAID 5 RAID 5 consists of block-level striping with parity data distributed across all of the member disks. This gives a more balanced access load across the drives. The array capacity is the sum of all the disks minus one, with a minimum of three drives required. The parity information is used to recover data if one drive fails, so only one disk's worth of capacity in the array is used to achieve data redundancy. This is the main reason this method is the most popular; also, read performance in a RAID 5 configuration is virtually as good as RAID 0.

RAID 5, continued. The disadvantage, however, is a relatively slow write cycle: two reads and two writes are required for each block written (one read and one write for the data block, and another pair for the parity block). The RAID 5 illustration includes six physical disks, where five data blocks are written to five physical disks and parity data is written on the sixth. Parity is rotated and eventually written to every disk, enabling the controller to recreate lost data onto a replacement disk without system interruption.
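The parity described above is a byte-wise XOR across the data blocks, and recovery uses the same operation. A minimal sketch, assuming equal-sized blocks (the function name and sample data are invented for the example):

```python
def xor_blocks(blocks):
    """Byte-wise XOR across equal-sized blocks: this is RAID 5 parity."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Five data blocks plus one parity block, as in the six-disk illustration.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD", b"EEEE"]
parity = xor_blocks(data)

# If the disk holding the third block fails, XOR-ing the survivors with
# the parity block rebuilds the lost data exactly.
survivors = data[:2] + data[3:]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == b"CCCC"
```

This also shows the write penalty: changing one data block means the parity block must be read, recomputed, and rewritten as well.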

RAID 6 In a RAID 6 configuration, multiple parity operations are spread across the disk group, which can survive the loss of two drives, or the loss of a drive while another disk is being rebuilt. Of all the standard RAID levels, RAID 6 provides the highest level of protection against drive failures. Read performance is similar to that of RAID 5; there is a performance penalty on write operations due to the overhead associated with the additional parity calculations, and performance is further reduced during a drive rebuild. It is not well suited to write-heavy tasks or environments with few drives. Remember that two complete disk failures in a single array are uncommon; occasional bad blocks vary in frequency, but the chance increases proportionally with capacity and the number of disk drives. Some calculations show that arrays using the largest-capacity disk drives are vulnerable to media errors in up to 1 in 4 rebuilds. In this illustration, parity P+Q is rotated to every disk and written twice. The term P+Q indicates that two algorithms are in use, enabling the controller to recreate lost data from multiple disk failures onto replacement disks without system interruption. Disk spanning allows multiple physical disk arrays to function like one big drive. Spanning alone does not provide reliability or performance enhancements.
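The two algorithms behind P+Q can be sketched as follows. P is the plain XOR already shown for RAID 5; Q is commonly a Reed-Solomon-style weighted sum over the Galois field GF(2^8). This is a simplified, hypothetical sketch (single bytes instead of full blocks, the common 0x11d field polynomial, brute-force inversion), not the controller's actual implementation:

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d)."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return product

def gf_inv(a):
    """Multiplicative inverse in GF(2^8), found by brute force for clarity."""
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def pq_parity(data):
    """P is the XOR of the data; Q weights byte i by 2**i (valid here since
    2**i stays below 256 for these few data disks)."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(2 ** i, d)   # coefficients 1, 2, 4, 8, ...
    return p, q

data = [0x11, 0x22, 0x33, 0x44]
p, q = pq_parity(data)

# Lose data disk 2 AND the disk holding P: Q alone still rebuilds the data,
# which is why RAID 6 survives two simultaneous failures.
partial = q
for i, d in enumerate(data):
    if i != 2:
        partial ^= gf_mul(2 ** i, d)   # strip the surviving terms out of Q
rebuilt = gf_mul(partial, gf_inv(2 ** 2))
assert rebuilt == 0x33
```

The extra field multiplications per write are exactly the "additional parity calculations" that make RAID 6 writes slower than RAID 5.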

RAID 10 RAID 10 is the spanning of two or more RAID 1 mirrors. The advantages of RAID 10 are faster data access, like RAID 0, and single-drive fault tolerance within each mirror, like RAID 1. RAID 10 still requires twice the number of disks, like RAID 1; so it offers performance improvements through striping, but usable capacity is low, since mirroring requires a duplicate set of drives.

RAID 10 Current Intel RAID controllers support up to eight mirror groups in a RAID 10 configuration. Also note that spanned virtual disks must have the same stripe size and must be contiguous.
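The capacity trade-offs among the levels covered so far can be summarized in a small calculator. This is a hypothetical helper assuming equal-sized drives and minimum drive counts already satisfied; the function name is invented for the example:

```python
def usable_tb(level, drives, tb_per_drive):
    """Usable capacity for common RAID levels, assuming equal-sized drives."""
    if level == "0":
        return drives * tb_per_drive            # striping: no capacity lost
    if level == "1":
        return tb_per_drive                     # classic two-drive mirror
    if level == "5":
        return (drives - 1) * tb_per_drive      # one drive's worth of parity
    if level == "6":
        return (drives - 2) * tb_per_drive      # two drives' worth of parity
    if level in ("10", "1E"):
        return drives * tb_per_drive // 2       # mirroring halves capacity
    raise ValueError(f"unknown level {level}")

# Eight 4 TB drives: RAID 0 yields 32 TB, RAID 6 yields 24 TB, RAID 10 only 16 TB.
assert usable_tb("0", 8, 4) == 32
assert usable_tb("6", 8, 4) == 24
assert usable_tb("10", 8, 4) == 16
```

These formulas make the trade-off concrete: mirroring levels pay half the raw capacity for redundancy, while parity levels pay only one or two drives' worth.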

RAID 0+1 In RAID 0+1, data is striped across multiple drives and mirrored to a duplicate set of drives. RAID 0+1 is similar to RAID 10, with the exception that it cannot tolerate two simultaneous disk failures unless the second failed drive is from the same stripe set as the first; that is, once a single drive fails, each of the disks in the other stripe set is a single point of failure. Also, once the failed drive is replaced, all the disks in the array must participate in the rebuild. The illustration of RAID 0+1 shows two groups of striped disks that are mirrors of each other for redundancy.

RAID 1E RAID 1E is also a combination of mirroring and data striping. This RAID level stripes data and copies of the data across all the drives in the array. The first set of stripes is the data, and the second set of stripes is the mirror of the first data stripe, shifted onto the next drive. As with standard RAID 1, the data is mirrored, so the capacity of the logical drive is 50% of the total physical drive capacity of the array. RAID 1E requires a minimum of three drives. The illustration shows an example of a RAID 1E logical drive: each disk is logically divided in half, and mirrored data is written to the adjacent disk.
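The alternating data/mirror stripe layout can be sketched row by row. This is a hypothetical, 0-indexed illustration (uppercase D for data segments, lowercase d for their mirror copies); the exact rotation varies by implementation:

```python
def raid1e_layout(stripes, disks):
    """Rows of a RAID 1E layout: a data stripe, then its mirror shifted one disk."""
    rows = []
    for s in range(stripes):
        data = [f"D{s * disks + d}" for d in range(disks)]
        # Each segment's mirror copy lands on the adjacent disk (wrapping around),
        # so no disk ever holds both copies of the same segment.
        mirror = [f"d{s * disks + (d - 1) % disks}" for d in range(disks)]
        rows.extend([data, mirror])
    return rows

# Three disks, one stripe pair: every segment exists twice, hence 50% capacity.
for row in raid1e_layout(1, 3):
    print(row)
# ['D0', 'D1', 'D2']
# ['d2', 'd0', 'd1']
```

Because a segment and its mirror never share a disk, any single drive can fail and every segment still has one surviving copy.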

RAID 50 Another example of RAID spanning is RAID 50. Like RAID 10, data is striped across multiple drive groups; however, RAID 50 provides features of both RAID 0 and RAID 5. RAID 50 provides high throughput, redundancy, and performance, but requires twice as many parity drives as a single RAID 5. Configure RAID 50 by spanning two contiguous RAID 5 virtual disks; as with RAID 10, the RAID 5 virtual disks must have the same stripe size. RAID 50 is commonly used in large disk groups. As the number of drives in a RAID set increases, fault recovery time, the interval for rebuilding the RAID set, increases. Instead of configuring one large RAID 5 array, users can span multiple smaller RAID 5 groups. The main advantage is reduced rebuild time, which reduces the likelihood of another disk failure while an array is in degraded mode. RAID 50 also improves on the performance of RAID 5, particularly during writes. This level is recommended for applications that require high fault tolerance, capacity, and random I/O performance. As you can see in the RAID 50 illustration, data is striped across multiple drive groups, and data redundancy is achieved via rotated parity data.
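The capacity cost and rebuild-domain argument above can be made concrete with two small helpers. These are hypothetical illustrations of the arithmetic, assuming equal-sized drives:

```python
def raid5_usable(disks):
    """Usable drive count in one RAID 5 group: one disk's worth goes to parity."""
    return disks - 1

def raid50_usable(spans, disks_per_span):
    """RAID 50 pays one parity disk per span, so capacity is spans * (n - 1)."""
    return spans * raid5_usable(disks_per_span)

# Twelve drives: one wide RAID 5 keeps 11 drives of capacity but a rebuild
# must read all 12 drives; two spanned RAID 5 groups keep only 10 drives of
# capacity, but a rebuild touches just the 6 drives in the affected span.
assert raid5_usable(12) == 11
assert raid50_usable(2, 6) == 10
```

The one-drive difference in capacity buys a rebuild window half as long, and only one span runs degraded while it rebuilds.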

RAID 60 RAID 60 combines the data striping of RAID 0 with the distributed double parity of RAID 6; that is, a RAID 0 array striped across RAID 6 elements. It typically requires at least six to eight disks. RAID 60 has improved fault tolerance: any two disks in each of the RAID 6 sets can fail without data loss. Also, drive failures or unrecoverable media errors occurring while a single disk is rebuilding in one RAID 6 set will not lead to data loss. Striping helps to increase capacity and performance without adding disks to each RAID 6 set, which would decrease data availability and could impact performance. RAID 60 improves upon the performance of RAID 6. RAID 60 is slightly slower than RAID 50 in terms of writes, due to the added overhead of more parity calculations, but when data security is the concern, this performance drop may be negligible.

RAID JBOD Concatenation, or spanning, of disks is not one of the numbered RAID levels, but it is a popular method for combining multiple physical disk drives into a single virtual disk. It provides no data redundancy; as the name implies, disks are merely concatenated together so they appear to be a single large disk. This mode is sometimes called JBOD, or Just a Bunch Of Disks. Performance is lower than striping because the drives are not used concurrently. It is most commonly used when you have odd-sized drives that need to be combined into a single virtual disk.
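Concatenation is simply address translation: walk the disks in order until the logical offset falls inside one of them. A minimal sketch (the function name and sizes are invented for the example):

```python
def jbod_locate(offset, disk_sizes):
    """Find which concatenated disk a logical byte offset lands on."""
    for disk, size in enumerate(disk_sizes):
        if offset < size:
            return disk, offset      # (disk index, offset within that disk)
        offset -= size               # skip past this disk's capacity
    raise ValueError("offset beyond the concatenated capacity")

# Three odd-sized drives concatenated into one 350-unit virtual disk.
assert jbod_locate(50,  [100, 200, 50]) == (0, 50)
assert jbod_locate(150, [100, 200, 50]) == (1, 50)
assert jbod_locate(320, [100, 200, 50]) == (2, 20)
```

Since each logical offset maps to exactly one physical drive, only one drive services any given request, which is why concatenation adds capacity but not performance.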

Management Console A RAID management console provides a simple way to manage and optimize storage application performance and data protection. It brings critical storage operations and reporting to the administrator's fingertips, allowing for easy deployment of storage functions. In addition to a graphical user interface, a command-line management tool also exists to provide additional flexibility, control, and scripting capability. It enables administrators to easily deploy all critical system storage functions, including creating and managing virtual drives, adding a drive to a RAID virtual drive, and on-the-fly RAID migration. Upgrading RAID levels is easy through a simple user interface; depending on the console, it can migrate to RAID 0, 1, 5, and 6 configurations and their associated spans (10, 50, and 60), while allowing end users to define specific properties for drive reads and writes. Server operations are also supported and can include creating a virtual drive, loading a configuration, updating firmware, silencing the alarm, and unlocking advanced software options, among other things.

Summary To summarize the different RAID levels discussed: RAID 0 is the fastest and most efficient but offers no fault tolerance. RAID 1 is ideal for highly fault-tolerant environments but requires twice the storage. RAID 5 is the most cost-efficient choice for server environments that are not write-performance sensitive. RAID 10 is ideal for environments that require 100% redundancy along with improved performance and capacity. RAID 50 is ideal for high-capacity RAID 5 environments that need additional reliability and performance. RAID 0+1 is optimal in systems requiring both fault tolerance and high performance but requires additional disk capacity investment. RAID 1E is a great choice for limited-capacity environments, such as small databases, that need fault tolerance. RAID 6 is the choice for organizations that require high capacity along with high data redundancy where read performance is critical. RAID 60 is great for high-capacity RAID 6 environments where additional data protection and performance are desired. Concatenation, or JBOD mode, is commonly used when combining odd-sized drives into a single virtual disk. Intel RAID, or Redundant Array of Inexpensive (Independent) Disks, is a storage technology that combines multiple disk drive components into a logical unit, which provides data redundancy and improves performance. RAID levels are the different ways data is distributed across the drives, chosen according to the level of redundancy and performance required. For more information visit: www.intel.com/go/raid

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT.
EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Intel may make changes to specifications and product descriptions at any time, without notice. The information here is subject to change without notice. Do not finalize a design with this information. Intel, the Intel logo, Intel Inside, Xeon and Xeon Inside are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Copyright 2014 Intel Corporation. All rights reserved. 0414/SJ/EM/PDF Please Recycle