WHITE PAPERS  FEBRUARY 2006

California Software Labs, CSWL INC
Realize Your Ideas

Redundant Array of Inexpensive Disks (RAID)

Redundant Array of Inexpensive Disks (RAID) technology combines multiple inexpensive disk drives into a single logical unit to provide fault tolerance and performance beyond what any single drive can offer. The document is meant for the exclusive preview of the management of Calsoft and chosen reviewers from the industry. It reflects Calsoft's continuing commitment to the business community to offer its thoughtful voice on what is newly learned, what is to be taken care of and what truly matters while deciding upon strategic IT solutions.
Table of Contents

History of RAID Systems
RAID Systems Overview
RAID Levels
    RAID Level 0
    RAID Level 1
    RAID Level 2
    RAID Level 3
    RAID Level 4
    RAID Level 5
    RAID Level 6
    RAID Level 10
Compound RAID Levels
Types of RAID Systems
RAID Performance Modeling
Parity & Fault Tolerance
Our Implementation

Introduction

Low cost, high performance, 3.5-inch hard-disk drives dominate storage systems for all practical purposes. But the storage and reliability requirements of high-end enterprise systems exceed the features available with single drives. That is where Redundant Array of Inexpensive Disks (RAID) systems become essential for high-end systems. RAID technology was developed to address the fault-tolerance and performance limitations of conventional disk storage. It can offer fault tolerance and higher throughput levels than a single hard drive or group of independent hard drives. While arrays were once considered complex and relatively specialized storage solutions, today they are easy to use and essential for a broad spectrum of client/server applications.
History of RAID Systems

RAID technology was first defined by a group of computer scientists at the University of California at Berkeley in 1987. The scientists studied the possibility of using two or more disks to appear as a single device to the host system. Although the array's performance was better than that of large, single-disk storage systems, reliability was unacceptably low. To address this, the scientists proposed redundant architectures to provide ways of achieving storage fault tolerance. In addition to defining RAID levels 1 through 5, the scientists also studied data striping -- a non-redundant array configuration that distributes files across multiple disks in an array. Often known as RAID 0, this configuration actually provides no data protection. However, it does offer maximum throughput for some data-intensive applications such as desktop digital video production.

A number of factors are responsible for the growing adoption of arrays for critical network storage. More and more organizations have created enterprise-wide networks to improve productivity and streamline information flow. While the distributed data stored on network servers provides substantial cost benefits, these savings can be quickly offset if information is frequently lost or becomes inaccessible. As today's applications create larger files, network storage needs have increased proportionately. In addition, accelerating CPU speeds have outstripped data transfer rates to storage media, creating bottlenecks in today's systems. RAID storage solutions overcome these challenges by providing a combination of outstanding data availability, extraordinary and highly scalable performance, high capacity, and recovery with no loss of data or interruption of user access. By integrating multiple drives into a single array -- which is viewed by the network operating system as a single disk drive -- organizations can create cost-effective, minicomputer-sized solutions of up to a terabyte or more of storage.

RAID Systems Overview

The dropping cost and rising capacity of high-end 3.5-in. drives, plus the falling cost of RAID controllers, have brought RAID to low-end servers. Motherboards with integrated SCSI and ATA RAID controllers are readily available. RAID controllers support one or more standard RAID configurations specified as RAID 0 through RAID 5. Proprietary or experimental configurations have been given designations above RAID 6, and some configurations like RAID 10 are a combination of RAID 1 and RAID 0. All of the configurations except RAID 0 provide some form of redundancy. RAID controllers with redundancy often support hot-swappable drives, allowing the removal and replacement of a failed drive. Typically, information on a replacement drive must be rebuilt from information on the other drives before the system will again be redundant and able to handle another drive failure.

To understand how a RAID system works, see the figure below. The letters A through I indicate where data is placed on a disk, along with the parity and error-correction code (ECC) support.
The diagrams show the minimum disk configurations for each approach (with the exception of RAID 0, which can operate with only two drives).

[Figure: block-layout diagrams for RAID 0, 1, 2, 3, 4, 5 and 6 controllers, showing how data blocks (A, B, C, ...), parity, and ECC are distributed across the drives in each configuration.]

Also known as striping, RAID 0 uses any number of drives. Data is written sequentially by sector or block across the drives. This provides high read and write performance because many operations can be performed simultaneously, even when a sequential block of information is being processed. The downside is the lack of redundancy. Striping is commonly combined with other RAID architectures to provide high performance and redundancy.

Called mirroring or duplexing, RAID 1 has 100% overhead, requiring two drives to store information normally stored on only one. Although writing speed is the same as with a single drive, it is possible to have twice the read transfer rate because information is duplicated and the drives can be accessed independently. Also, there is no loss of write performance when a drive is lost and no rebuild delay as with other RAID architectures. RAID 1 is the most expensive approach when it comes to disk usage.

The RAID 2 configuration employs Hamming-code ECC to provide redundancy. It has a simple controller design, but no commercial implementations exist, partly due to the high ratio of ECC disks to data disks.

Striped data with the addition of a parity disk is used by the RAID 3 configuration. It has a high read/write transfer rate, but the controller design is complex and difficult to accomplish as software RAID. Also, the transaction rate of the system is the same as that of a single disk drive.

RAID 4 utilizes independent disks with shared parity. This configuration has a very high read transaction rate and a high aggregate read rate. Furthermore, it has a low ratio of ECC (parity) disks to data disks. Unfortunately, it has a poor write transaction rate and a complex controller design. Most implementations prefer RAID 3 to RAID 4.

The RAID 5 configuration is a set of independent disks with distributed parity. It is similar to RAID 3, except that the contents of the parity disk are spread across each disk instead of being concentrated on a single disk. RAID 5 has high read performance with a medium write transaction rate. It has a good aggregate transfer rate, making it one of the most common approaches to RAID, even as a software RAID solution. This configuration, however, has one of the most complex controller designs, and the individual block transfer rate is the same as that of a single disk. Rebuilding a replacement drive is difficult and time consuming, but some controllers implement this feature so that it takes place in the background, with degraded application throughput until rebuilding has been completed.
Essentially, RAID 6 is RAID 5 plus an extra parity drive. One parity is generated across the disks, while a second, independent parity protecting the same data is stored on another disk. The two independent parity calculations provide high fault tolerance, permitting multiple drive failures. The controller design is very complex, even compared to RAID 5. Furthermore, RAID 6 has very poor write performance because two parity changes must be made for each write.

Many RAID controllers place disks on independent channels to allow simultaneous access to multiple drives. Likewise, large memory caches significantly improve controller performance.

RAID Levels

There are several different RAID levels, or redundancy schemes, each with inherent cost, performance, and availability (fault-tolerance) characteristics designed to meet different storage needs. No individual RAID level is inherently superior to any other. Each of the five basic array architectures is well-suited for certain types of applications and computing environments. For client/server applications, storage systems based on RAID levels 1, 0/1, and 5 have been the most widely used.

RAID Level 0 (Non-Redundant)

A non-redundant disk array, or RAID level 0, has the lowest cost of any RAID organization because it does not employ redundancy at all. This scheme offers the best write performance, since it never needs to update redundant information, but surprisingly it does not have the best read performance. Redundancy schemes that duplicate data, such as mirroring, can perform better on reads by selectively scheduling requests on the disk with the shortest expected seek and rotational delays. Without redundancy, any single disk failure will result in data loss. Non-redundant disk arrays are widely used in supercomputing environments where performance and capacity, rather than reliability, are the primary concerns.

Sequential blocks of data are written across multiple disks in stripes. The size of a data block, which is known as the stripe width, varies with the implementation but is always at least as large as a disk's sector size. When it comes time to read back this sequential data, all disks can be read in parallel. In a multi-tasking operating system, there is a high probability that even non-sequential disk accesses will keep all of the disks working in parallel.

Minimum number of drives: 2
Strengths: Highest performance.
Weaknesses: No data protection; if one drive fails, all data is lost.
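To make the striping layout concrete, here is a minimal sketch in Python (illustrative only; the disk count and stripe-unit size are hypothetical parameters, not values from this paper) of how a simple RAID 0 controller might map a logical block to a physical disk and offset:

```python
def raid0_map(logical_block, num_disks, blocks_per_stripe_unit=1):
    """Map a logical block number to (disk index, physical block) for RAID 0.

    With a stripe unit of one block, consecutive logical blocks simply
    rotate round-robin across the disks, which is why sequential reads
    and writes can keep every spindle busy at once.
    """
    stripe_unit = logical_block // blocks_per_stripe_unit   # which stripe unit
    disk = stripe_unit % num_disks                          # round-robin placement
    physical_unit = stripe_unit // num_disks                # depth on that disk
    offset = logical_block % blocks_per_stripe_unit
    return disk, physical_unit * blocks_per_stripe_unit + offset

# Example: 8 logical blocks striped across 4 disks land on
# disks 0,1,2,3,0,1,2,3 -- all four drives serve a sequential read.
for lb in range(8):
    print(lb, raid0_map(lb, num_disks=4))
```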
RAID Level 1 (Mirrored)

The traditional solution, called mirroring or shadowing, uses twice as many disks as a non-redundant disk array. Whenever data is written to a disk, the same data is also written to a redundant disk, so that there are always two copies of the information. When data is read, it can be retrieved from the disk with the shorter queuing, seek and rotational delays. If a disk fails, the other copy is used to service requests. Mirroring is frequently used in database applications where availability and transaction time are more important than storage efficiency.

Minimum number of drives: 2
Strengths: Very high performance; very high data protection; very minimal penalty on write performance.
Weaknesses: High redundancy cost overhead; because all data is duplicated, twice the storage capacity is required.

RAID Level 2 (Memory-Style)

Memory systems have provided recovery from failed components at much less cost than mirroring by using Hamming codes. Hamming codes contain parity for distinct overlapping subsets of components. In one version of this scheme, four data disks require three redundant disks, one less than mirroring. Since the number of redundant disks is proportional to the log of the total number of disks in the system, storage efficiency increases as the number of data disks increases. If a single component fails, several of the parity components will have inconsistent values, and the failed component is the one held in common by each incorrect subset. The lost information is recovered by reading the other components in a subset, including the parity component, and setting the missing bit to 0 or 1 to create the proper parity value for that subset. Thus, multiple redundant disks are needed to identify the failed disk, but only one is needed to recover the lost information.

If you are unaware of parity, you can think of the redundant disk as holding the sum of all the data on the other disks. When a disk fails, you can subtract all the data on the good disks from the parity disk; the remaining information must be the missing information. Parity is simply this sum modulo 2.

A RAID 2 system would normally have as many data disks as the word size of the computer, typically 32. In addition, RAID 2 requires the use of extra disks to store an error-correcting code for redundancy. With 32 data disks, a RAID 2 system would require 7 additional disks for a Hamming-code ECC. For a number of reasons, including the fact that modern disk drives contain their own internal ECC, RAID 2 is not a practical disk array scheme.

Minimum number of drives: Not used in LAN environments.
Strengths: Previously used for RAM error correction (known as Hamming code) and in disk drives before the use of embedded error correction.
Weaknesses: No practical use; the same performance can be achieved by RAID 3 at lower cost.
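The "sum modulo 2" idea above is just a bitwise XOR. As a minimal sketch (Python; the block contents are made up for illustration), here is how a parity block is computed and how a failed disk's contents are recovered by XOR-ing the survivors with the parity -- the same reconstruction that the parity-based levels below rely on:

```python
from functools import reduce

def xor_blocks(blocks):
    """Bitwise XOR of equal-sized byte blocks: the parity 'sum modulo 2'."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical 4-byte data blocks sitting on three data disks.
data = [b"\x12\x34\x56\x78", b"\x9a\xbc\xde\xf0", b"\x11\x22\x33\x44"]
parity = xor_blocks(data)

# Disk 1 fails: recover its block from the survivors plus the parity block.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]   # the missing information falls out of the XOR
```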
RAID Level 3 (Bit-Interleaved Parity)

One can improve upon memory-style ECC disk arrays by noting that, unlike memory component failures, disk controllers can easily identify which disk has failed. Thus, one can use a single parity disk rather than a set of parity disks to recover lost information. In a bit-interleaved, parity disk array, data is conceptually interleaved bit-wise over the data disks, and a single parity disk is added to tolerate any single disk failure. Each read request accesses all data disks, and each write request accesses all data disks and the parity disk. Thus, only one request can be serviced at a time. Because the parity disk contains only parity and no data, it cannot participate in reads, resulting in slightly lower read performance than for redundancy schemes that distribute the parity and data over all disks. Bit-interleaved, parity disk arrays are frequently used in applications that require high bandwidth but not high I/O rates. They are also simpler to implement than RAID levels 4, 5, and 6.

Here, the parity disk is written in the same way as the parity bit in normal Random Access Memory (RAM), where it is the Exclusive Or of the 8, 16 or 32 data bits. In RAM, parity is used to detect single-bit data errors, but it cannot correct them because there is no information available to determine which bit is incorrect. With disk drives, however, we rely on the disk controller to report a data read error. Knowing which disk's data is missing, we can reconstruct it as the Exclusive Or (XOR) of all remaining data disks plus the parity disk.

Minimum number of drives: 3
Strengths: Excellent performance for large, sequential data requests.
Weaknesses: Not well-suited for transaction-oriented network applications; the single parity drive does not support multiple, simultaneous read and write requests.

RAID Level 4 (Block-Interleaved Parity)

The block-interleaved, parity disk array is similar to the bit-interleaved, parity disk array except that data is interleaved across disks in blocks of arbitrary size rather than in bits. The size of these blocks is called the striping unit. Read requests smaller than the striping unit access only a single data disk. Write requests must update the requested data blocks and must also compute and update the parity block. For large writes that touch blocks on all disks, parity is easily computed by exclusive-or'ing the new data for each disk. For small write requests that update only one data disk, parity is computed by noting how the new data differs from the old data and applying those differences to the parity block. Small write requests thus require four disk I/Os: one to write the new data, two to read the old data and old parity for computing the new parity, and one to write the new parity. This is referred to as a read-modify-write procedure.
Because a block-interleaved, parity disk array has only one parity disk, which must be updated on all write operations, the parity disk can easily become a bottleneck. Because of this limitation, the block-interleaved distributed-parity disk array is universally preferred over the block-interleaved, parity disk array.

Minimum number of drives: 3 (not widely used)
Strengths: Data striping supports multiple simultaneous read requests.
Weaknesses: Write requests suffer from the same single parity-drive bottleneck as RAID 3; RAID 5 offers equal data protection and better performance at the same cost.

For small writes, performance decreases considerably. To understand why, consider a one-block write as an example (a sketch of the same procedure in code follows below):

1. A program issues a write request for one block.
2. The RAID software determines which disks contain the data and parity, and which blocks they are in.
3. The disk controller reads the data block from disk.
4. The disk controller reads the corresponding parity block from disk.
5. The data block just read is XORed with the parity block just read.
6. The data block to be written is XORed with the resulting parity block.
7. The new data block and the updated parity block are both written to disk.

It can be seen from the above example that a one-block write results in two blocks being read from disk and two blocks being written to disk. If the data blocks to be read happen to be in a buffer in the RAID controller, the amount of data read from disk could drop to one, or even zero, blocks, thus improving the write performance.
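Below is a sketch of that read-modify-write sequence in Python. It is illustrative only: the dict-of-lists `disks` structure and the `small_write` helper are hypothetical stand-ins for physical drives and controller firmware, not a real interface.

```python
def small_write(disks, data_disk, parity_disk, block_no, new_data):
    """Read-modify-write parity update for a one-block write (RAID 4/5 style).

    Exactly two reads and two writes, as in the worked example above.
    """
    old_data = disks[data_disk][block_no]       # 1st I/O: read old data
    old_parity = disks[parity_disk][block_no]   # 2nd I/O: read old parity
    # Remove the old data's contribution from parity, then add the new data's.
    new_parity = bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))
    disks[data_disk][block_no] = new_data       # 3rd I/O: write new data
    disks[parity_disk][block_no] = new_parity   # 4th I/O: write new parity

# Toy array: disks 0-2 hold data, disk 3 holds parity for this stripe.
disks = {0: [b"\x01"], 1: [b"\x02"], 2: [b"\x04"], 3: [b"\x07"]}  # 1^2^4 = 7
small_write(disks, data_disk=1, parity_disk=3, block_no=0, new_data=b"\x0a")
assert disks[3][0] == bytes([0x01 ^ 0x0a ^ 0x04])  # parity stays consistent
```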
RAID Level 5 (Block-Interleaved Distributed Parity)

The block-interleaved distributed-parity disk array eliminates the parity disk bottleneck present in the block-interleaved parity disk array by distributing the parity uniformly over all of the disks. An additional, frequently overlooked advantage to distributing the parity is that it also distributes data over all of the disks rather than over all but one. This allows all disks to participate in servicing read operations, in contrast to redundancy schemes with dedicated parity disks, in which the parity disk cannot participate in servicing read requests. Block-interleaved distributed-parity disk arrays have the best small read, large write performance of any redundant disk array. Small write requests are somewhat inefficient compared with redundancy schemes such as mirroring, however, due to the need to perform read-modify-write operations to update parity. This is the major performance weakness of RAID level 5 disk arrays.

The exact method used to distribute parity in block-interleaved distributed-parity disk arrays can affect performance. The following figure illustrates the left-symmetric parity distribution.

[Figure: left-symmetric parity distribution, showing parity units P0, P1, P2, ... rotating across the disks.]

Each square corresponds to a stripe unit. Each column of squares corresponds to a disk. P0 computes the parity over stripe units 0, 1, 2 and 3; P1 computes parity over stripe units 4, 5, 6, and 7; and so on. A useful property of the left-symmetric parity distribution is that whenever you traverse the striping units sequentially, you will access each disk once before accessing any disk twice. This property reduces disk conflicts when servicing large requests.

Minimum number of drives: 3
Strengths: Best cost/performance for transaction-oriented networks; very high performance and very high data protection; supports multiple simultaneous reads and writes; can also be optimized for large, sequential requests.
Weaknesses: Write performance is slower than RAID 0 or RAID 1.
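The left-symmetric placement can be captured in a few lines of code. The sketch below (Python, assuming one parity unit per stripe as in the figure; not taken from any specific controller implementation) maps a logical stripe unit to its disk and demonstrates the "each disk once before any disk twice" property:

```python
def left_symmetric(unit, num_disks):
    """Map a logical stripe unit to (data disk, row, parity disk) under
    left-symmetric RAID 5.

    Parity for stripe i sits on disk (n-1-i) mod n, and data units fill
    the following columns, wrapping around.  Traversing units 0,1,2,...
    touches every disk once before touching any disk twice.
    """
    n = num_disks
    stripe = unit // (n - 1)               # row in the figure
    j = unit % (n - 1)                     # position within the stripe
    parity_disk = (n - 1 - stripe) % n
    data_disk = (parity_disk + 1 + j) % n
    return data_disk, stripe, parity_disk

# With 5 disks, units 0..4 land on disks 0,1,2,3,4 -- one visit per disk.
print([left_symmetric(u, 5)[0] for u in range(5)])   # [0, 1, 2, 3, 4]
```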
RAID Level 6 (P+Q Redundancy)

Parity is a redundancy code capable of correcting any single, self-identifying failure. As larger disk arrays are considered, multiple failures are possible and stronger codes are needed. Moreover, when a disk fails in a parity-protected disk array, recovering the contents of the failed disk requires successfully reading the contents of all non-failed disks. The probability of encountering an uncorrectable read error during recovery can be significant. Thus, applications with more stringent reliability requirements require stronger error-correcting codes. One such scheme, called P+Q redundancy, uses Reed-Solomon codes to protect against up to two disk failures using the bare minimum of two redundant disks. P+Q redundant disk arrays are structurally very similar to block-interleaved distributed-parity disk arrays and operate in much the same manner. In particular, P+Q redundant disk arrays also perform small write operations using a read-modify-write procedure, except that instead of four disk accesses per write request, P+Q redundant disk arrays require six disk accesses due to the need to update both the `P' and `Q' information.

RAID Level 10 (Striped Mirrors)

RAID 10 is now used to mean the combination of RAID 0 (striping) and RAID 1 (mirroring). Disks are mirrored in pairs for redundancy and improved performance, and then data is striped across multiple disks for maximum performance. RAID 10 uses more disk space to provide redundant data than RAID 5. However, it also provides a performance advantage by reading from all disks in parallel while eliminating the write penalty of RAID 5. In addition, RAID 10 gives better performance than RAID 5 while a failed drive remains un-replaced. Under RAID 5, each attempted read of the failed drive can be performed only by reading all of the other disks. On RAID 10, a failed disk can be recovered by a single read of its mirrored pair.

Minimum number of drives: 4
Strengths: Highest performance, highest data protection (can tolerate multiple drive failures).
Weaknesses: High redundancy cost overhead; because all data is duplicated, twice the storage capacity is required; requires a minimum of four drives.

Compound RAID Levels

There are times when more than one type of RAID must be combined in order to achieve the desired effect. In general, this consists of RAID-0 combined with another RAID level. The primary reason for combining multiple RAID architectures is to get either a very large, or a very fast, logical disk. The list below contains a few examples; it is not the limit of what can be done.

RAID-1+0

RAID Level 1+0 (also called RAID-10) is the result of RAID-0 applied to multiple RAID-1 arrays. This creates a very fast, stable array. In this array, it is possible to have multiple disk failures without losing any data, and with a minimum performance impact. To recover from a failed disk, it is necessary to replace the failed disk and rebuild that disk from its mirror. For two-drive failures, the probability of survival is 66% for a 4-disk array, and approaches 100% as the number of disks in the array increases.

RAID-0+1

RAID Level 0+1 is the result of RAID-1 applied to multiple RAID-0 arrays. This creates a very fast array. If the RAID-0 controllers (hardware or software) are capable of returning an error for data requests to failed drives, then this array has all the abilities of RAID-10. If an entire RAID-0 array is disabled when one drive fails, this becomes only slightly more reliable than RAID-0. To recover from a failed disk, it is necessary to replace the failed disk and rebuild the entire RAID-0 array from its mirror. This requires much more disk I/O than is required to recover from a disk failure in RAID-10. It should be noted that some enterprise-level RAID controllers are capable of tracking which drives in a RAID-0 array have failed and rebuilding only that drive; these controllers are very expensive. For two-drive failures, the probability of survival is 33% for a 4-disk array, and approaches 50% as the number of disks in the array increases. This RAID level is significantly less reliable than RAID-1+0, because the structure is inherently less likely to survive a multi-disk failure, and because the longer time to reconstruct after a failure (due to the larger amount of data that must be copied) increases the probability of a second disk failing before the first disk has been completely rebuilt.
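The 66% and 33% survival figures quoted above can be checked by enumerating the two-drive failure cases for a hypothetical 4-disk array. The sketch below (Python) models RAID-0+1 with the simple controller behavior described above, where a single drive failure disables its whole RAID-0 side:

```python
from itertools import combinations

def survives_raid10(failed, pairs):
    """RAID-1+0 survives unless both members of some mirrored pair fail."""
    return not any(set(p) <= failed for p in pairs)

def survives_raid01(failed, sides):
    """RAID-0+1 survives only while one whole RAID-0 side remains intact."""
    return any(not (set(side) & failed) for side in sides)

disks = range(4)
pairs = [(0, 1), (2, 3)]   # RAID-10: two mirrored pairs, then striped
sides = [(0, 1), (2, 3)]   # RAID-01: two striped sides, then mirrored

for name, ok in [("RAID-10", lambda f: survives_raid10(f, pairs)),
                 ("RAID-01", lambda f: survives_raid01(f, sides))]:
    outcomes = [ok(set(f)) for f in combinations(disks, 2)]
    print(name, sum(outcomes), "/", len(outcomes))   # 4/6 = 66%, 2/6 = 33%
```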
RAID-3+0

RAID Level 3+0 is the result of RAID-0 applied to multiple RAID-3 arrays. This improves the performance of a RAID-3 array and allows multiple RAID-3 arrays to be dealt with as a single logical device. RAID-3+0 has reliability similar to RAID-3, with improved performance. This type of array is most commonly found when combining multiple hardware RAID devices into a single logical device.

RAID-5+0

RAID Level 5+0 (also called RAID-53, for some unknown reason) is the result of RAID-0 applied to multiple RAID-5 arrays. This improves the performance of a RAID-5 array and allows multiple RAID-5 arrays to be dealt with as a single logical device. The reliability of this type of array is similar to that of a RAID-1+0 array, but it has the performance impacts of RAID-5. This type of array is most commonly found when combining multiple hardware RAID devices into a single logical device.

Types of RAID Systems

There are three primary array implementations: software-based arrays, bus-based array adapters/controllers, and subsystem-based external array controllers. As with the various RAID levels, no one implementation is clearly better than another, although software-based arrays are rapidly losing favor as high-performance, low-cost array adapters become increasingly available. Each array solution meets different server and network requirements, depending on the number of users, applications, and storage requirements.

It is important to note that all RAID code is based on software. The difference among the solutions is where that software code is executed -- on the host CPU (software-based arrays) or offloaded to an on-board processor (bus-based and external array controllers).

Software-based RAID

Primarily used with entry-level servers, software-based arrays rely on a standard host adapter and execute all I/O commands and mathematically intensive RAID algorithms in the host server CPU. This can slow system performance by increasing host PCI bus traffic, CPU utilization, and CPU interrupts. Some NOSs, such as NetWare and Windows NT, include embedded RAID software. The chief advantage of this embedded RAID software has been its lower cost compared to higher-priced RAID alternatives. However, this advantage is disappearing with the advent of lower-cost, bus-based array adapters. The major advantages are low cost and the need for only a standard controller.

Hardware-based RAID

Unlike software-based arrays, bus-based array adapters/controllers plug into a host bus slot (typically a 133 MByte (MB)/sec PCI bus) and offload some or all of the I/O commands and RAID operations to one or more secondary processors. Originally used only with mid- to high-end servers due to cost, lower-cost bus-based array adapters are now available specifically for entry-level server network applications.
In addition to offering the fault-tolerant benefits of RAID, bus-based array adapters/controllers perform connectivity functions that are similar to standard host adapters. By residing directly on a host PCI bus, they provide the highest performance of all array types. Bus-based arrays also deliver more robust fault-tolerant features than embedded NOS RAID software. The advantages are the data protection and performance benefits of RAID, along with more robust fault-tolerant features and increased performance versus software-based RAID.

[Figure: typical architecture of a hardware RAID implementation -- the host CPU connects over the PCI bus to a RAID controller with its own cache, which drives the disks through IDE/SCSI/SATA interfaces and a SCSI card on the SCSI bus.]

External Hardware RAID Card

Intelligent external array controllers bridge between one or more server I/O interfaces and single- or multiple-device channels. These controllers feature an on-board microprocessor, which provides high performance and handles functions such as executing RAID software code and supporting data caching. External array controllers offer complete operating system independence, the highest availability, and the ability to scale storage to extraordinarily large capacities (up to a terabyte and beyond). These controllers are usually installed in networks of stand-alone Intel-based and UNIX-based servers as well as clustered server environments. The advantages are OS independence and the ability to build super high-capacity storage systems for high-end servers.

RAID Performance Modeling

As RAID architectures are being developed, their performance is analyzed using either simulation or analytical methods. Simulators are generally handcrafted, which entails code duplication and potentially unreliable systems. Equally problematic is that RAID architectures, once defined, are difficult to prove correct without extensive simulations.

Analytical Modeling

Analytical models of RAID operations generally utilize drive characteristics to model typical operations. Key factors such as Seek Time (time to move to a specified track), Rotational Latency (time to rotate the disk to the required sector) and Transfer Time (time to read or write to disk) are used to derive mathematical expressions of the time taken for different operations. Such expressions are usually independent of any particular hardware, and are derived from analysis of typical actions performed during a given RAID operation.

Most analytical models of RAID are based on other models of the underlying disks driving the array. Using these drive characteristics, queuing models can be introduced to model the arrival of tasks at each disk in a RAID, as such tasks are handed out by the RAID controller. Combining these queued disks with arrival rates for a single disk in normal operation provides a starting point from which more complicated RAID systems can be modeled.
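As a toy instance of such an analytical model, the sketch below (Python; the drive characteristics are hypothetical, not taken from this paper) combines seek time, rotational latency, and transfer time into an expected per-request service time, the basic quantity on which the queuing models build:

```python
def disk_service_time_ms(seek_ms, rpm, request_kb, transfer_mb_s):
    """Expected service time for one request: seek + rotational latency + transfer.

    Average rotational latency is half a revolution; transfer time scales
    with request size.  Queuing models then layer task arrival rates on
    top of this per-request figure.
    """
    rotational_ms = 0.5 * 60_000.0 / rpm                    # half a revolution
    transfer_ms = request_kb / 1024.0 / transfer_mb_s * 1000.0
    return seek_ms + rotational_ms + transfer_ms

# Hypothetical 7200 RPM drive, 8 ms average seek, 50 MB/s media rate:
print(disk_service_time_ms(seek_ms=8.0, rpm=7200, request_kb=64,
                           transfer_mb_s=50.0))            # ~13.4 ms
```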
Simulation

The most notable simulation tool for RAID systems is the RAIDframe system. It offers multiple ways to test any RAID architecture using a generic directed-acyclic-graph (DAG) representation of RAID operations. The simulator is able to use utilization data gathered from real-world systems as input, as well as accepting real requests from outside systems. It can also be used to generate a Linux software driver to test an architecture in a real-world system. One downside to this approach is that, since all RAID operations are performed in software, the controller software can easily become a bottleneck (up to 60% of total processing due to software) and hence skew the results.

Another simulation tool presented in the literature (and available for academic use) is the RAID-Sim application. It is built on top of a simulator for single disks, known as DiskSim, developed at the University of Michigan. Unlike RAIDframe, RAID-Sim is regarded as very difficult to use, and has no generic interface. One other tool the authors are aware of is Sim-SAN, an environment for simulating and analyzing Storage Area Networks, in which RAID usually serves as the underlying storage.

Parity & Fault Tolerance

The concept behind RAID is relatively simple. The fundamental premise is to be able to recover data on-line in the event of a disk failure by using a form of redundancy called parity. In its simplest form, parity is an addition of all the drives used in an array. Recovery from a drive failure is achieved by reading the remaining good data and checking it against parity data stored by the array. Parity is used by RAID levels 2, 3, 4, and 5. RAID 1 does not use parity because all data is completely duplicated (mirrored). RAID 0, used only to increase performance, offers no data redundancy at all.

RAID technology does not prevent drive failures. However, RAID does provide insurance against disk drive failures by enabling real-time data recovery without data loss. The fault tolerance of arrays can also be significantly enhanced by choosing the right storage enclosure. Enclosures that feature redundant, hot-swappable drives, power supplies, and fans can greatly increase storage subsystem uptime, based on a number of widely accepted measures:
Mean Time to Data Loss (MTDL): The average time before the failure of an array component causes data to be lost or corrupted.

Mean Time between Data Access/Availability (MTDA): The average time before non-redundant components fail, causing data inaccessibility without loss or corruption.

Mean Time To Repair (MTTR): The average time required to bring an array storage subsystem back to full fault tolerance.

Mean Time Between Failure (MTBF): Used to measure computer component average reliability/life expectancy.

MTBF is not as well-suited for measuring the reliability of array storage systems as MTDL, MTTR or MTDA, because it does not account for an array's ability to recover from a drive failure. In addition, enhanced enclosure environments used with arrays to increase uptime can further limit the applicability of MTBF ratings for array solutions.
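A commonly used back-of-the-envelope approximation for MTDL in a single-parity array (this formula comes from the published Berkeley RAID analysis, not from this paper) treats data as lost when a second drive fails during the repair window of the first. A sketch with hypothetical numbers:

```python
def mttdl_hours_raid5(n_disks, mttf_hours, mttr_hours):
    """Approximate mean time to data loss for an N-disk single-parity group.

    Data is lost when a second disk (any of the n-1 survivors) fails
    during the repair window of the first: MTTF^2 / (N * (N-1) * MTTR).
    """
    return mttf_hours ** 2 / (n_disks * (n_disks - 1) * mttr_hours)

# Hypothetical figures: five disks, 500,000-hour drive MTBF, 24-hour rebuild.
print(mttdl_hours_raid5(5, 500_000, 24) / 8760, "years")   # ~59,000 years
```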
Our Implementation

In one of our driver projects related to Storage Area Networks (SAN), we implemented RAID-01 and RAID-10. RAID-01 (or RAID 0+1) is a mirrored pair (RAID-1) made from two stripe sets (RAID-0); hence the name RAID 0+1, because it is created by first creating two RAID-0 sets and then adding RAID-1. If you lose a drive on one side of a RAID-01 array, then lose another drive on the other side of that array before the first side is recovered, you will suffer complete data loss. It is also important to note that all drives in the surviving mirror are involved in rebuilding the entire damaged stripe set, even if only a single drive was damaged. Performance is severely degraded during recovery unless the RAID subsystem allows adjusting the priority of recovery. However, shifting the priority toward production will lengthen recovery time and increase the risk of the kind of catastrophic data loss mentioned earlier.

RAID-10 (or RAID 1+0) is a stripe set made up of N mirrored pairs. Only the loss of both drives in the same mirrored pair can result in any data loss, and the loss of that particular drive is 1/Nth as likely as the loss of some drive on the opposite mirror in RAID-01. Recovery involves only the replacement drive and its mirror, so the rest of the array performs at 100% capacity during recovery. Also, since only the single drive needs recovery, bandwidth requirements during recovery are lower and recovery takes far less time, reducing the risk of catastrophic data loss.

The most appropriate RAID configuration for a specific file system or database table space must be determined based on data access patterns and cost versus performance tradeoffs. RAID-0 offers no increased reliability. It can, however, supply performance acceleration at no increased storage cost. RAID-1 provides the highest performance for redundant storage, because it does not require read-modify-write cycles to update data, and because multiple copies of data may be used to accelerate read-intensive applications. Unfortunately, RAID-1 requires at least double the disk capacity of RAID-0. Also, since more than two copies of the data may exist, RAID-1 arrays may be constructed to endure the loss of multiple disks without interruption. RAID-5 allows redundancy with less total storage cost. The read-modify-write it requires, however, will reduce total throughput in any small write operations (read-only or extremely read-intensive applications are fine). The loss of a single disk will cause read performance to be degraded while the system reads all other disks in the array and re-computes the missing data. Additionally, it does not support losing multiple disks, and cannot be made redundant against such failures.

For More Information

For more information on this project or our related services, contact us through our web site.

CSWL INC, California Software Labs
Realize Your Ideas

West Coast - Head Office: 4637 Chabot Drive, Suite 240, Pleasanton, CA 94588, USA. Tel: Fax: [email protected]
East Coast: 2 Clock Tower Place, Suite 430, Maynard, MA. Tel: Fax: [email protected]

Alameda | Bangalore | Boston | Chennai | Cochin | Dubai | London | Omaha | Singapore | Tokyo

California Software Co Ltd. All Rights Reserved. The contents of this document are not to be reproduced or duplicated in any form or kind, either in part or full, without the written permission of California Software Co Ltd. Product and company names mentioned herein are the trademarks of their respective companies.