Using the Software RAID Functionality of Linux
1 Using the Software RAID Functionality of Linux
Kevin Carson, Solution Architect, Hewlett-Packard (Canada) Co.
Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
2 What this presentation IS
- Description of Linux Software RAID functionality
- Examination of booting from Software RAID
- Examination of swapping failed disks
- How to layer RAID levels
- How to mirror to Network Block Devices
- Issues in choosing between software and hardware RAID
Configurations based upon Debian and 2.6.6/7 kernels.
3 What this presentation IS NOT
- Introduction to RAID concepts
- RAID kernel code discussion
- A discussion of Linux Logical Volume Management
- Impromptu troubleshooting of any particular Linux Software RAID implementation
4 Linux Software RAID Description
5 md Driver
md = multiple devices. Various configurations are called personalities, so each supported RAID level has a personality, as do special groupings of devices such as logical volume management and concatenation of devices. md is not loaded as a driver module itself; it is pulled in by each personality, so one does not need to load every possible personality. The kernel, some filesystems, and partition tables are able to recognize some/all personalities to leverage them for boot, file layout, and auto-detection.
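As a minimal illustration (not from the slides), loading a single personality pulls in the md core along with it:
  ~# modprobe raid1
  ~# cat /proc/mdstat
  Personalities : [raid1]
  unused devices: <none>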
6 md personalities
- Linear: concatenation of devices into one large device.
- RAID 0: striped set of devices.
- RAID 1: mirrored set of devices.
- RAID 5: distributed parity with multiple choices for the distribution. Also implements RAID 4 (all parity on a single disk).
- RAID 6: dual parity information. Kernel 2.6 only.
- Multipath: failover between different connections to the same device.
- HSM: Hierarchical Storage Management. Unused in the kernel tree; a placeholder for external features?
- Translucent: meant for union/overlay filesystems but unused in the kernel. Separate efforts are trying to provide this facility.
7 Software RAID Concepts & Concerns
- chunksize is more commonly known as a stripe. It is the basic division of data spread out over the disks within an array. Size ranges from 1 page to 4 MB in powers of 2.
- md devices must be explicitly stopped before shutdown.
- If a spare device is assigned to the array, it will take the place of a failed device. Later replacements of the failed physical device will become the new spare if they are added after the rebuild has finished.
- Spares are NOT scrubbed while idle. They are only known to be good when a rebuild action commences.
- Most RAID personalities base their per-drive disk usage on the smallest partition/drive in the set.
- An md device will survive the physical shuffling of drives.
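To illustrate the spare behaviour (a sketch; device names are hypothetical, not from the slides), a mirror with one assigned spare can be created like this:
  ~# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
       --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1
If /dev/sda1 later fails, /dev/sdc1 is pulled in automatically and the rebuild starts.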
8 General kernel support of RAID
The kernel will automatically start existing md arrays during boot only under the following conditions:
- The RAID array consists of partitions whose type is linux raid autodetect.
- The partitions contain a persistent RAID superblock.
- The partition table type is recognized by the kernel.
- The appropriate RAID personality is not a dynamic module but compiled into the kernel.
- The driver for the IO channels to the disks is compiled in as well.
Arrays not properly shut down will need to be rebuilt upon activation, but this will proceed in the background. When multiple arrays rely on the same physical device, only one rebuild action will be active.
9 General kernel support of RAID, cont'd
Status through /proc/mdstat:
  ~# cat /proc/mdstat
  Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6]
  md3 : active raid5 sdh[2] sdf[1] sde[0]
        blocks level 4, 64k chunk, algorithm 0 [3/3] [UUU]
  md2 : active raid1 md1[1] md0[0]
        blocks [2/2] [_U]
        [==================>..] resync = 94.5% ( / ) finish=4.0min speed=3996k/sec
  unused devices: <none>
10 General kernel support of RAID, cont'd
Status through /proc/mdstat with a failed device, marked (f), and a recovery under way:
  Personalities : [linear] [raid0] [raid1] [raid5] [multipath]
  md3 : active raid5 sdf[1] sdh[3](f) sde[0]
        blocks level 4, 64k chunk, algorithm 0 [3/2] [UU_]
  md1 : active raid1 sdd[2] sdc[0]
        blocks [2/1] [U_]
        [>...] recovery = 0.4% (43392/ ) finish=37.3min speed=3944k/sec
  unused devices: <none>
11 General kernel support of RAID, cont'd
Control of rebuild speed through /proc/sys/dev/raid/ or sysctl. speed_limit_min defines the floor (minimum rebuild speed) and speed_limit_max defines the ceiling. Usually the floor has to be raised, as any activity on any disk will slow the rebuild dramatically.
  /proc/sys/dev/raid/speed_limit_min (default is 100 KiB/s)
  ~# echo 10000 >speed_limit_min
  ~# sysctl -w dev.raid.speed_limit_min=10000
  /proc/sys/dev/raid/speed_limit_max (default is 10 MB/s)
  ~# echo >speed_limit_max
  ~# sysctl -w dev.raid.speed_limit_max=
12 RAID 0 Personality
- Very little error checking on device state, so the system can hang with retries.
- Maximum IO transaction to a device will be chunksize. Attempts to merge adjacent IOs up to a limit of chunksize.
- If devices within the RAID 0 set are of varying size, the remaining chunks will land on the largest device.
13 RAID 1 Personality
- chunksize is ignored; transactions are the size of the block of the underlying device (typically 512 bytes). The lower block layer would normally merge successive requests.
- Read balancing (as of ) is based upon the device with the nearest last known position with respect to the RAID device. If multiple partitions or mirrors are active on the same device, this strategy fails, as the last known head position is tracked per mirror. For constant sequential reads, this strategy can cause the same device to be hit constantly. For 2.6, a better strategy is implemented for spreading sequential reads and tracking last known head position.
- Resynchronization proceeds in 64 KiB transactions.
- One can specify multiple copies to get n-way mirroring.
- To split off a mirror (e.g. for backup), one would set the device as faulty and then remove the device from the RAID set. Adding it back causes a resynchronization.
- Need a minimum of two devices.
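A minimal sketch of that split-and-rejoin sequence with mdadm (device names hypothetical):
  ~# mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
  (mount /dev/sdc1 read-only elsewhere and take the backup)
  ~# mdadm /dev/md0 --add /dev/sdc1
The final --add triggers a full resynchronization of the rejoined mirror.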
14 RAID 5 Personality
- Upon boot/module load, several different parity algorithms are tried and the best method chosen.
- Four choices for how parity chunks are distributed among the devices of the RAID set: Left Asymmetric, Right Asymmetric, Left Symmetric, Right Symmetric.
- Also handles RAID 4 functionality (dedicated parity disk instead of distributed parity chunks). The last active disk in the RAID set is chosen as the parity disk. RAID 4 shows up as a RAID 5 device in /proc/mdstat but with level 4 noted in the device line.
- Need a minimum of 3 devices.
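For illustration (hypothetical devices, not from the slides), mdadm selects the parity distribution with its --layout option:
  ~# mdadm --create /dev/md3 --level=5 --raid-devices=3 \
       --layout=left-symmetric --chunk=64 /dev/sde1 /dev/sdf1 /dev/sdh1
left-symmetric is the usual default and generally gives the best large sequential read behaviour.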
15 RAID 6 Personality
- Only available with 2.6.x kernels.
- Two sets of parity information, so it will survive the failure of two devices.
- Specialized algorithms for parity calculation are only available for x86 or x86-64 MMX/SSE-capable CPUs, though the generic algorithms perform well on IA64 (only ~24% decrease versus the RAID 5 calculation). For very generic CPUs, results can be as much as 5X slower than RAID 5.
- Choices of parity distribution are identical to RAID 5.
- Can only be manipulated with the mdadm utility.
- Minimum of 4 devices.
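A minimal creation sketch (hypothetical devices):
  ~# mdadm --create /dev/md4 --level=6 --raid-devices=4 \
       /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
With four devices this yields the capacity of two, and any two may fail without data loss.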
16 Tools for Software RAID
17 raidtools2
- Second version of the original set of tools.
- Each different action has a different command associated (e.g. lsraid, mkraid, raidhotadd, raidhotremove, etc.)
- Expects to have a valid configuration file describing the current (or about-to-be-created) md devices; /etc/raidtab by default.
- Use lsraid -R -p to output a formatted raidtab.
Example /etc/raidtab entry:
  # md device [dev 9, 0] /dev/md0 queried online
  raiddev /dev/md0
      raid-level            1
      nr-raid-disks         2
      nr-spare-disks        0
      persistent-superblock 1
      chunk-size            0
      device                /dev/sda1
      raid-disk             0
      device                /dev/sdb1
      raid-disk             1
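With raidtools2, the array described above would then be created along these lines (a sketch, assuming the raidtab entry shown):
  ~# mkraid /dev/md0
  ~# raidstart /dev/md0
raidstart is only needed for arrays that are not autodetected and started by the kernel at boot.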
18 mdadm
- Provides all the same actions as the raidtools2 suite but with different command line options rather than separate commands.
- Most actions don't require a configuration file.
- mdadm can also run in the background, monitoring the running md devices to detect changes. An event can trigger log events, mail notification, and the running of scripts.
- While mdadm is running as a monitor it can float a spare device between arrays to create a global spare. Without this functionality, spares are statically assigned per array only.
- For some operations, the monitoring action of mdadm may prevent changes to the array's state. Temporarily shut off the daemon.
- Use mdadm --detail --scan to output a formatted configuration file.
- Default configuration file is /etc/mdadm/mdadm.conf.
Example /etc/mdadm/mdadm.conf entry:
  ARRAY /dev/md0 level=raid1 num-devices=2
        UUID=40e8c451:71a182ed:23642c05:476cf390
        devices=/dev/sda1,/dev/sdb1
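A sketch of running the monitor with mail notification and a global spare (UUIDs abbreviated, names assumed); the spare-group tag is what lets mdadm float a spare between arrays:
  ~# mdadm --monitor --scan --mail=root --delay=300 --daemonise
In mdadm.conf, arrays that may share the spare carry the same tag:
  ARRAY /dev/md0 UUID=... spare-group=shelf1
  ARRAY /dev/md1 UUID=... spare-group=shelf1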
19 Filesystems support of RAID
EXT2/3 and XFS provide parameters to tune allocation of data to the stripe size of the RAID unit. These parameters are supplied when the filesystem is first created on the md device.
- Ext2/3: use the -R stride=xxx parameter to align data allocation. Stride should be set to chunk size / filesystem block size.
- XFS aligns allocations for both its log and data sections using -d su=xxx or -d sunit=xxx and -l su=xxx or -l sunit=xxx, where su= is specified in bytes and sunit= is specified in 512-byte blocks.
Caveats:
- It's best to move a journalling filesystem's log to a separate RAID 1 device.
- Ext2/3 allocates blocks in fixed-size block groups by default. The start of each block group holds the inode and block allocation bitmaps. If the block group size is an even multiple of the stripe set size (chunk size * #devices) and -R stride=xxx isn't used, then accesses for these bitmaps will all land on the same device. Don't forget the -R stride=xxx option!
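A worked sketch, assuming a 3-disk RAID 5 with 64 KiB chunks and 4 KiB filesystem blocks (values chosen for illustration):
  # ext2/3: stride = 64 KiB chunk / 4 KiB block = 16
  ~# mke2fs -j -b 4096 -R stride=16 /dev/md3
  # XFS: su in bytes; sw = number of data (non-parity) disks
  ~# mkfs.xfs -d su=65536,sw=2 -l version=2,su=65536 /dev/md3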
20 Layering Software RAID Personalities
21 Layering md Devices
As simple as using other md devices as the raid-disks. Here is an example creating a RAID 10 (striped mirrors) array:
  ~# mdadm -C /dev/md0 -l 1 -n 2 /dev/sda1 /dev/sdb1
  ~# mdadm -C /dev/md1 -l 1 -n 2 /dev/sdc1 /dev/sdd1
  ~# mdadm -C /dev/md2 -l 0 -n 2 /dev/md0 /dev/md1
  Personalities : [raid0] [raid1]
  md2 : active raid0 md1[1] md0[0]
        blocks 64k chunks
  md1 : active raid1 sdd1[1] sdc1[0]
        blocks [2/2] [UU]
        [>...] resync = 1.3% (120704/ ) finish=54.4min speed=2678k/sec
  md0 : active raid1 sdb1[1] sda1[0]
        blocks [2/2] [UU]
        [>...] resync = 1.5% (134336/ ) finish=97.1min speed=1498k/sec
  unused devices: <none>
raidtab order is important: the /dev/md0 and /dev/md1 entries must appear before /dev/md2's entry. One could do something similar with nested RAID 5 sets to get RAID 6-like protection on a 2.4 kernel. Using layers increases processing overhead, which is a reason for implementing true RAID 6 in the kernel.
22 Booting from Software RAID
23 Boot Process Conceptualized
Most platforms implement these steps with varying divisions of which firmware/bootloader does what:
1. Firmware checks system integrity.
2. Initial System Code determines the list of bootable devices.
3. Initial System Code attempts to hand off execution to the Boot Management presented by the first bootable device.
4. Boot Management hands off to an Operating System.
24 Firmware Checks System Integrity
I'm ALIVE!!!! Or not.
25 Initial System Code Finds Boot Devices
Two extremes of implementation:
- Can be a very restricted list of device types the ISC will recognize.
- EFI allows an adapter to install a driver from the adapter option ROM.
Different ways of making the list:
- A BIOS expansion ROM presented by an adapter adds an entry if the adapter has been told of/detected a candidate.
- Firmware is configured with an explicit list of devices.
Depending on the Initial System Code's flexibility, one may have to mix device types (e.g. a boot floppy and a hard drive) to get redundancy, with identical setups of later-stage software on each.
26 ISC to Boot Management Hand-off
Initial System Code attempts to hand off execution to the Boot Management presented by the first bootable device. There are multiple methods of determining that a device has a boot loader installed:
- Load sector 0 and look for magic bytes.
- The device contains a partition marked as bootable with a specific filesystem.
Boot Management may be multi-stage. For redundancy, there must be boot management on multiple devices. If there is a detectable failure, the ISC may be able to try the next candidate. If the failure is not detectable, or Boot Management is misconfigured/corrupted, the boot process stops.
27 Boot Management to OS Hand-off
- The boot partition contains the last stages of boot management.
- The boot partition has to contain a kernel that will be able to recognize RAID partitions.
- The boot partition must be present on each redundant device.
- For root devices (mounted by the kernel from the boot partition), other RAID levels can be used.
28 BIOS Enhanced RAID of HP ProLiant BL30p
- If enabled, the BIOS will take special steps with respect to any Linux Software RAID 1 boot partitions.
- The blade server has two ATA drives on a single ATA channel.
- If the primary ATA disk has failed, it will boot off the secondary ATA disk if it also has a bootable Linux Software RAID 1 partition.
- If the software RAID superblocks are out of sync on the two disks, it will choose to boot from the one with the most recently updated superblock.
29 Lilo Support of RAID Boot
- Lilo (Linux Loader) only supports being installed on RAID 1 partitions for redundant boot.
- Using either the -x command line option or the raid-extra-boot= lilo.conf entry, the boot management configuration is replicated on multiple devices.
- Variants from other platforms (palo on PA-RISC, elilo for EFI) don't support the above handy feature.
- Lilo and variants are the only choice on several platforms.
- Partitions used for boot should be primary instead of extended. The Master Boot Record loader expects to find a primary partition marked as active/bootable.
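A minimal lilo.conf sketch for replicated boot records (device names assumed for illustration):
  boot=/dev/md0
  raid-extra-boot=/dev/sda,/dev/sdb
  image=/vmlinuz
      label=linux
      root=/dev/md1
      read-only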
30 Grub Support of RAID Boot
- Create the boot partition as RAID 1 with type Linux raid autodetect (0xfd) on both disks. The boot partition should be marked active (or bootable).
- Create the RAID root partitions also with the 0xfd type.
- Install the 1st stage bootloader in the MBR on both disks with GRUB's setup command (x86).
- Make two boot entries in menu.lst. The two entries point to the same kernel and root partition (/dev/md1) but specify different boot devices:
  # First entry is default entry to boot
  default 0
  # If the default is not available boot second entry
  fallback 1
  # Don't wait for manual selection after 10 seconds
  timeout 10
  # Default boot entry
  title Default
  kernel (hd0,0)/vmlinuz root=/dev/md1
  # Fallback boot entry
  title Fallback
  kernel (hd1,0)/vmlinuz root=/dev/md1
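The setup step itself might look like this from the grub shell (a sketch; disk names assumed):
  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)
Mapping each physical disk to (hd0) in turn means either disk can boot as the first BIOS disk after the other has failed.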
31 Retrofit of existing system
1. Set up the first system disk as usual on a Linux partition.
2. Set up a partition of type Linux raid autodetect on the second drive.
3. Configure a RAID device where the second disk is active and the first disk is specified but failed (see the sketch after this list).
4. Copy the contents of the system on the first disk to the second drive.
5. Set boot management (lilo/grub) to expect to boot the md device. This may include a regeneration of the initial ram disk image via mkinitrd.
6. Reboot the system so it is now booting off the new md device.
7. Repartition the first disk to match the second disk and then add it to the md device. Resync will begin.
8. Add the first drive as another boot device in boot management.
9. Wait until the resync has finished before the next reboot, and voila!
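For step 3, mdadm can create the degraded mirror directly, with the keyword missing standing in for the not-yet-migrated first disk (hypothetical names):
  ~# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
Step 7 then becomes a simple mdadm /dev/md0 --add /dev/sda1 once the first disk has been repartitioned.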
32 Swapping Failed Disks
33 Issues for Device Hot Replacement
- The channel & device have to support electrical isolation (avoiding electrostatic discharges and controlling inrush current).
- The device has to be rescanned by the kernel.
- Partitions must be transferred from the old device to the new device.
- The old device has to be removed from the md device; the new device has to be added back.
- The device will be immediately accessible but not highly available until the resync is finished.
34 SCSI Swap Example
Capture the disk partitions:
  ~# sfdisk -d /dev/sdc >/tmp/sdc.parts
Remove the disk (or partition) from the md device:
  ~# raidsetfaulty /dev/md0 /dev/sdc
  ~# raidhotremove /dev/md0 /dev/sdc
  OR
  ~# mdadm /dev/md0 -f /dev/sdc -r /dev/sdc
Gather host adapter, channel, target, lun information:
  ~# cat /proc/scsi/scsi
  OR
  ~# sginfo -l
Notify the kernel's scsi subsystem of the removal of the device via the /proc pseudo filesystem interface, using the host adapter, channel, target and lun numbers (shown here as H C T L):
  ~# echo "scsi remove-single-device H C T L" >/proc/scsi/scsi
Physically swap the drive with an identical replacement.
Notify the kernel's scsi subsystem of the addition of the device:
  ~# echo "scsi add-single-device H C T L" >/proc/scsi/scsi
Re-establish the partitioning:
  ~# sfdisk /dev/sdc </tmp/sdc.parts
Add the disk (or partition) back into the md device:
  ~# raidhotadd /dev/md0 /dev/sdc
  OR
  ~# mdadm /dev/md0 -a /dev/sdc
35 Swapping on Software RAID
36 How to do it
Just like any other block device:
  ~# mkswap /dev/md0
  ~# swapon /dev/md0
But this is a so-so idea. Swap partitions on a RAID device should work, and limited testing has been done. However, no one is ready to commit that the kernel code paths have been thoroughly checked after every update.
37 Workarounds
If RAID 0 is of interest (i.e. higher performance is required), one can get it with just the regular swap driver. In the /etc/fstab file:
  /dev/sda none swap sw,pri=8 0 0
  /dev/sdb none swap sw,pri=8 0 0
  /dev/sdc none swap sw,pri=8 0 0
The matching values for priority in the option field (pri=8) mean that pages will be striped between the three SCSI disks. This yields performance without loading another kernel driver. Priorities range from 0 to 32767, with 32767 being the highest. Higher priority devices will be used before lower priority devices.
38 Workarounds, cont'd
- Don't use swap. Swap is a side effect of installation but often not used in production. Out-of-control processes are better handled by the OOM (Out Of Memory) killer, which often picks the errant process correctly.
- Use filesystem swap, which also allows for better dynamic control of swap/disk resources. It uses slightly different code paths, but the extra layers make it slower.
- Hardware RAID based swap.
39 Mirroring to Network Block Devices
40 Network Block Device with RAID
- NBD is standard in the kernel. It presents a file or block device on a server to a different client computer across a TCP connection.
- The client can include the local block device entry (e.g. /dev/nbd0) in the creation of a RAID entry.
- The device can be formatted with a regular filesystem. Local caching is a side effect of the normal block cache mechanism (not specially designed like NFS).
- References are 64-bit, but the underlying platform may restrict devices to 2GB. Use multiple NBD devices to overcome this restriction.
- Implementations of the nbd server are available for Windows and BSD as well.
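A sketch of mirroring a local disk to an NBD device (hostname, port, and device names assumed for illustration):
  # on the server, export a file or device on TCP port 2000
  server:~# nbd-server 2000 /exports/mirror.img
  # on the client, attach it and build the mirror over it
  client:~# nbd-client server 2000 /dev/nbd0
  client:~# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
       /dev/sda1 /dev/nbd0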
41 What one might use NBD for
- A Disaster Recovery site like commercial solutions (e.g. CA, SRDF, etc.), but any temporary network loss would cause a FULL resync. The Enhanced NBD project (which ISN'T in the kernel) allows for delta synchronization.
- Have a highly available ram disk by mirroring a local /dev/ramXX device to /dev/nbdXX. The nbdXX backing could be a file resident in cache or another ramdisk to yield good performance.
- Checkpointing in High Performance Computing. When checkpointing is all writes, it makes less sense to use a protocol with a lot of caching algorithms like NFS.
- Diskless systems. NBD can't provide boot capability but can provide other partitions and swap devices. The copy-on-write option for the NBD server allows sharing of a read-only block device among many clients.
- Stretch a full-image backup window by mirroring over the network but using software RAID's speed_limit_max throttling to control the amount of bandwidth used during working hours.
42 Choosing Between Hardware and Software RAID
43 The Differences: Simplicity
Software RAID:
- Exposes the management of each individual device. Requires understanding of the entire chain of reliances.
- Setup can invariably be scripted, usually generically over many configurations.
Hardware RAID:
- Most interaction is with the logical block device representation.
- Usually has an Express setup capability.
- If the vendor doesn't provide an OS-neutral or Linux-compatible command line configuration, manual intervention is required. Generic configuration is often harder.
44 The Differences: Flexibility
Software RAID:
- Can use many controllers and hard disks, including ones that have never been qualified together.
- Allows use or mixing of more block device types: ATA, ESDI, /dev/ram, NBD, etc.
- Layering of RAID levels allows for many permutations. Better match to specific workloads.
- Tricky RAID level migration in place.
- RAID 6 only available with 2.6 kernels.
Hardware RAID:
- Tries to lock down the configuration to only supported disk types. May even load disk firmware or do a low-level reformat.
- Usually constrained to one disk type (e.g. SCSI). Mixed types and higher capabilities available in Enterprise gear.
- Invariably a subset of RAID levels supported.
- Easy RAID level migration in place.
- Higher availability choices available now on all platforms.
45 The Differences: Cost
Software RAID:
- Cheaper hardware due to a huge range of choices.
- More administrative cost due to more exposed complexity, with testing migrated from vendor to user.
Hardware RAID:
- Offload of I/O may allow a lesser SPU to fulfill the same requirements.
- Potentially a price already paid if hardware RAID is standard.
- Less administrative cost for the user, but the cost of vendor testing and support is amortized over the installed base of controllers.
46 The Differences: Supportability
Software RAID:
- Somewhat supported, to the extent the user has done testing of the complete configuration including corner failure cases.
- Device error detection relies on the underlying device and/or communication channels.
- No assumptions are made about components, to match the range of possible configurations.
- Raw disks and partitions are exposed, so server firmware and boot management must be able to utilize these.
Hardware RAID:
- High levels of support, but only on specific configurations of controllers and disks. Tests are exhaustive and include corner cases.
- Can offer active spare, disk surface scrubbing, and parity verification. Error checking goes beyond passive detection.
- Component choices are constrained, so the device's specific capabilities can be used optimally.
- A logical block device is exposed that emulates common device geometry and behaviour.
47 The Differences: Performance
Software RAID:
- On benchmarks dedicated to measuring the storage subsystem only, software RAID is usually faster.
- RAID 1 requires a transaction over every individual IO channel and device that has a copy of the data. Either the devices are assumed to be able to commit remaining writes upon power failure, or the volumes are mounted synchronously.
- Optimal performance means choosing transfer sizes that are best suited to device behaviour first and workload second.
- With respect to capital cost, one can usually configure higher peak performance/$.
Hardware RAID:
- Reduces the number of context switches and interrupts flowing through the system. The Read-Update-Write parity cycle does not occupy system resources.
- RAID 1 only requires a transaction to the controller, which handles propagation to the disks.
- Battery-backed cache allows for quick returns from writes over the PCI bus.
- Onboard cache allows transaction size to suit the workload; the controller handles mapping to device characteristics.
- May yield better results in actual production for less administrative effort/$.
48 Linux Software RAID Resources
49 References
- Linux Software RAID HOWTO
- Boot+Root+RAID+LILO mini-HOWTO
- Kernel Linux-RAID mailing list: linux-raid at vger.kernel.org
- NBD and Enhanced NBD project sites
50 Thanks to Dan Zink, Rick Stern, Karthik Prabhakar, Theodora Carson, Lia Lindell, Christie Melnychuk, and Yizhi Wang.
51 Co-produced by: