Chapter 7. Disk subsystem


Ultimately, all data must be retrieved from and stored to disk. Disk accesses are usually measured in milliseconds, whereas memory and PCI bus operations are measured in nanoseconds or microseconds. Disk operations are typically thousands of times slower than PCI transfers, memory accesses, and LAN transfers. For this reason, the disk subsystem can easily become the major bottleneck for any server configuration. Disk subsystems are also important because the physical orientation of data stored on disk has a dramatic influence on overall server performance. A detailed understanding of disk subsystem operation is critical for effectively solving many server performance bottlenecks.

A disk subsystem consists of the physical hard disk and the controller. A disk is made up of multiple platters coated with magnetic material to store data. The entire platter assembly, mounted on a spindle, revolves around the central axis. A head assembly mounted on an arm moves to and fro (linear motion) to read the data stored on the magnetic coating of the platter. The linear movement of the head is referred to as the seek. The time it takes to move to the exact track where the data is stored is called seek time. The rotational movement of the platter to the correct sector to present the data under the head is called latency. The ability of the disk to transfer the requested data is called the data transfer rate.

The most widely used drive technology today in servers is SCSI (Small Computer System Interface). IBM's flagship SCSI controller is the ServeRAID-4H adapter. Besides SCSI, other storage technologies are available, such as:

- SSA (Serial Storage Architecture)
- FC-AL (Fibre Channel Arbitrated Loop)
- EIDE (Enhanced Integrated Drive Electronics)

Using EIDE in servers: For performance reasons, do not use EIDE disks in your server. The EIDE interface does not handle multiple simultaneous I/O requests very efficiently and so is not suited to a server environment.
The EIDE interface uses more server CPU capacity than SCSI. We recommend you limit EIDE use to CD-ROM and tape devices. Copyright IBM Corp. 1998,
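The three components just described (seek time, rotational latency, transfer rate) can be combined into a rough service-time model for a single random disk I/O. This is an illustrative sketch, not a formula from any drive datasheet; the drive parameters below are hypothetical.

```python
def avg_access_ms(seek_ms, rpm, transfer_mbps, io_kb):
    """Average time for one random I/O: seek, plus rotational latency
    (half a revolution on average), plus the media transfer itself."""
    latency_ms = 0.5 * 60_000.0 / rpm              # half a revolution, in ms
    transfer_ms = io_kb / 1024.0 / transfer_mbps * 1000.0
    return seek_ms + latency_ms + transfer_ms

# Hypothetical 10,000 RPM drive: 5 ms average seek, 20 MBps sustained transfer
print(round(avg_access_ms(5.0, 10_000, 20.0, 8), 2))   # ~8.39 ms
```

Even before any queueing, the result is milliseconds per request; memory and PCI operations complete in a tiny fraction of that time, which is the gap the rest of this chapter is about.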

In this redbook we will focus only on SCSI and Fibre Channel.

7.1 SCSI bus overview

The SCSI bus has evolved into the predominant server disk connection technology. Several different versions of SCSI exist. The table below contains all versions covered by the current SCSI specification.

Table 11. SCSI specifications

  SCSI Standard   Bus Clock   Speed, 50-pin Narrow (8-bit) / 68-pin Wide (16-bit)   Maximum Cable Length
  SCSI            5 MHz       5 MBps                                                6 meters
  SCSI-2 Fast     10 MHz      10 MBps / 20 MBps                                     3 meters
  Ultra SCSI      20 MHz      20 MBps / 40 MBps                                     1.5 meters
  Ultra2 SCSI     40 MHz      40 MBps / 80 MBps                                     12 meters (LVD)
  Ultra3 SCSI     80 MHz      80 MBps / 160 MBps                                    12 meters (LVD)

7.1.1 SCSI

First implemented as an ANSI standard in 1986, the Small Computer System Interface defines an 8-bit interface with a burst-transfer rate of 5 MBps with a 5 MHz clock (that is, 1 byte transferred per clock cycle). SCSI cable lengths are limited to 6 meters.

7.1.2 SCSI-2

The SCSI-2 standard was released by ANSI in 1996 and allowed for better performance than the original SCSI interface. It defines extensions that allow for 16-bit transfers and twice the data transfer rate due to a 10 MHz clock. The 8-bit interface is called SCSI-2 Fast and the 16-bit interface is called SCSI-2 Fast/Wide. In addition to the faster speed, SCSI-2 also introduced new command sets to improve performance when multiple requests are issued from the server. The trade-off with increased speed was shorter cable length: the 10 MHz SCSI-2 interface supported a maximum of 3 meter cable lengths.

120 Tuning Netfinity Servers for Performance Getting the most out of Windows 2000 and Windows NT 4.0
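The burst rates in Table 11 all follow from one bus-width transfer per clock cycle (two per cycle for Ultra3, whose double-transition clocking is described in 7.1.5; the 80 MHz shown for Ultra3 in the table is the effective transfer rate, while the physical clock stays at 40 MHz). A quick sketch of that arithmetic:

```python
def burst_mbps(clock_mhz, width_bits, double_transition=False):
    """Burst rate in MBps: one bus-width transfer per clock edge used."""
    transfers_per_us = clock_mhz * (2 if double_transition else 1)
    return transfers_per_us * (width_bits // 8)

assert burst_mbps(5, 8) == 5                               # SCSI
assert burst_mbps(10, 8) == 10                             # SCSI-2 Fast
assert burst_mbps(10, 16) == 20                            # SCSI-2 Fast/Wide
assert burst_mbps(40, 16) == 80                            # Ultra2 (wide)
assert burst_mbps(40, 16, double_transition=True) == 160   # Ultra3 160/m
```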

7.1.3 Ultra SCSI

Ultra SCSI is an update to the SCSI-2 interface offering faster data transfer rates. It is a subset of the SCSI-3 parallel interface (SPI) standard, at the time of writing under development within the X3T10 SCSI committee. The clock speed was doubled again to 20 MHz, providing a data transfer speed of up to 40 MBps with a 16-bit data width, while maintaining backward compatibility with SCSI and SCSI-2. Although data transfer can be done at 20 MHz (that is, 40 MBps wide), all SCSI commands are issued at 10 MHz to maintain compatibility. This means that the maximum bandwidth is less than 31 MBps, even with 64 KB blocks. Once again, with the increased speed, cable lengths were halved, to 1.5 meters maximum.

7.1.4 Ultra2 SCSI

Ultra2 SCSI uses Low Voltage Differential (LVD) signalling, which is designed to improve SCSI bus signal quality, enabling faster transfer rates and longer cable lengths. Ultra2 SCSI doubles the clock speed to 40 MHz. It employs the same concept as the older Differential SCSI specification, where two signal lines are used to transmit each of the 8 or 16 bits, one signal the negative of the other. At the receiver, one signal (A-) is subtracted from the other (A+), that is, the differential is taken, which effectively removes spikes and other noise from the original signal, as shown in Figure 34.

Figure 34. Differential SCSI

Differential components tend to be more expensive than similar single-ended SCSI components, and differential termination requires a lot of power, generating significant heat levels. Because of the large voltage swings (20 Volts) and high power requirements, current differential transceivers cannot be integrated onto the SCSI chip, but must be additional external components. LVD has differential's advantages of long cables and noise immunity without the power and integration problems. Because LVD uses a small (1.1 Volt) voltage swing, LVD transceivers can be implemented in CMOS, allowing them to be built into the SCSI chip, reducing cost, board area, power requirements, and heat. The use of LVD allows cable lengths of up to 12 meters.

7.1.5 Ultra3 SCSI

The maximum theoretical throughput of Ultra3 160/m SCSI can reach 160 MBps on each SCSI channel. Ultra3 160/m uses the same clock frequency as Ultra2 SCSI, but data transfers occur on both rising and falling edges of the clock signal, effectively doubling the throughput. This feature is called double-transition clocking. Note: double-transition clocking requires LVD signalling; on a single-ended SCSI bus, clocking will revert to single-transition mode. If you use a mixture of Ultra3 and Ultra2 devices on an LVD-enabled SCSI bus, there is no need for all devices to run at Ultra2 speed: the Ultra3 SCSI devices will still operate at the Ultra3 (160 MBps) speed.

Additionally, Ultra3 160/m SCSI can use CRC to ensure data integrity and is therefore far more reliable than older SCSI implementations, which only support parity checking. Domain validation is another feature of Ultra3 160/m SCSI. It is performed during SCSI bus initialization, and its intent is to ensure that the devices on the SCSI bus (the domain) can reliably transfer data at the negotiated speed. Only Ultra3-capable devices can use domain validation.

Note: Ultra3 160/m is a subset of Ultra3 SCSI. It supports double-transition clocking, CRC and domain validation, but does not include all Ultra3 SCSI features, such as packetization or quick arbitration.

7.1.6 SCSI controllers and devices

There are two basic types of SCSI controller designs: array and non-array. A standard non-array SCSI controller allows connection of SCSI disk drives to the PCI bus.
Each drive is presented to the operating system as an individual, physical drive.
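The differential (LVD) subtraction described above can be illustrated numerically: noise that lands equally on both wires of a pair disappears when the receiver subtracts them. A toy sketch, where the 1.1 V swing echoes the LVD figure in the text and the noise values are invented:

```python
def receive(pairs):
    """Recover each bit by subtracting A- from A+ (taking the differential)."""
    return [1 if (a_pos - a_neg) > 0 else 0 for a_pos, a_neg in pairs]

bits = [1, 0, 1, 1]
noise = [0.0, 0.4, -0.3, 0.4]        # common-mode spikes picked up on the cable
pairs = []
for bit, spike in zip(bits, noise):
    v = 1.1 if bit else -1.1         # LVD-style small voltage swing
    pairs.append((v + spike, -v + spike))   # a spike hits both lines equally

print(receive(pairs))   # [1, 0, 1, 1] -- the spikes cancel out
```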

Figure 35 shows a typical non-array controller. The SCSI bus (typically an internal cable) is terminated on both ends. The SCSI controller (or host adapter) normally has one of the end terminators integrated within its electronics, so only one physical terminator is required. The SCSI bus can contain different device types, such as disk, CD-ROM and tape, all on the same bus. However, most non-disk devices conform to the slower SCSI and SCSI-2 Fast standards. So, if I/O to a CD-ROM or tape drive is required, the entire SCSI bus has to switch to the slower speed during that access, which dramatically affects performance. This would not be much of a problem if the CD-ROM is not used for production purposes (that is, the CD-ROM is not a LAN resource available to users) and the tape drive is only accessed after hours, when performance is not critical. If at all possible, we recommend you do not attach CD-ROMs or tape drives to the same SCSI bus as disk devices. Fortunately, most Netfinity servers have the standard CD-ROM on the EIDE bus.

Figure 35. Non-array SCSI configuration

The array controller, a more advanced design, contains hardware designed to combine multiple SCSI disk drives into a larger single logical drive. Combining multiple SCSI drives into a larger logical drive greatly improves I/O performance compared to single-drive performance. Most array controllers employ fault-tolerant techniques to protect valuable data in the event of a disk drive failure. Array controllers are installed in almost all servers because of these advantages.

Note: Although there are many array controller technologies, each possessing unique characteristics, this redbook includes details and tuning information specific to the IBM ServeRAID array controller.

7.2 SCSI IDs

With the introduction of SCSI-2, a total of 16 devices can be connected to a single SCSI bus. To uniquely identify each device, each is assigned a SCSI ID from 0 to 15. One of these is the SCSI controller itself, which is assigned ID 7. Because the 16 devices share a single data channel, only one device can use the bus at a time. When two SCSI devices attempt to control the bus, the SCSI IDs determine which wins according to a priority scheme, as shown in Figure 36. The highest priority ID is that of the controller (ID 7). Next are the low-order IDs from 6 down to 0, and then the high-order IDs from 15 down to 8.

Figure 36. SCSI ID priority (highest: 7, the controller; then 6 through 0; then 15 through 8)

Although this priority scheme allows backward compatibility, it can result in negative system performance if your devices are configured incorrectly. Narrow (8-bit) devices with lower IDs will automatically preempt use of the bus by the faster F/W devices with addresses greater than 7. This is especially important when CD-ROMs and tape drives are placed on the same SCSI bus as F/W disk drives.

Note: With the use of hot-swap drives, the SCSI ID is automatically set by the hot-swap backplane. Typically, the only change is whether the backplane assigns high-order IDs or low-order IDs.

7.3 Disk array controller architecture

Almost all server disk controllers implement SCSI communication between the disk controller and disk drives. SCSI is an intelligent interface that allows simultaneous processing of multiple I/O requests. This is the single most important advantage of using SCSI controllers on servers. Servers must process multiple independent requests for I/O, and SCSI's ability to concurrently process many different I/O operations makes it the optimal choice for servers. SCSI array controllers consist of the following primary components:

- PCI bus interface/controller

- SCSI bus controller(s) and SCSI bus(es)
- Microprocessor
- Memory (microprocessor code and cache buffers)
- Internal bus (connects PCI interface, microprocessor, and SCSI controllers)

Figure 37. Architecture of a disk array controller

7.4 Disk array controller operation

The SCSI-based disk array controller is a PCI busmaster initiator with the capability to master the PCI bus to gain direct access to server main memory. The following sequence outlines the fundamental operations that occur when a disk-read operation is performed:

1. The server operating system generates a disk I/O read operation by building an I/O control block command in memory. The I/O control block contains the READ command, a disk address called a Logical Block Address (LBA), a block count or length, and the main memory address where the read data from disk is to be placed (destination address).

2. The operating system generates an interrupt to tell the disk array controller that it has an I/O operation to perform. This interrupt initiates execution of the disk device driver. The disk device driver (executing on the server's CPU) addresses the disk array controller and sends it the address of the I/O control block and a command instructing the disk array controller to fetch the I/O control block from memory.

3. The disk array controller initiates a PCI bus transfer to copy the I/O control block from server memory into its local adapter memory. The on-board microprocessor executes instructions to decode the I/O control block

command, to allocate buffer space in adapter memory to temporarily store the read data, and to program the SCSI controller chip to initiate access to the SCSI disks containing the read data. The SCSI controller chip is also given the address of the adapter memory buffer that will be used to temporarily store the read data.

4. At this point, the SCSI controller arbitrates for the SCSI bus, and when bus access is granted, a read command, along with the length of data to be read, is sent to the SCSI drives that contain the read data. The SCSI controller disconnects from the SCSI bus and waits for the next request.

5. The target SCSI drive begins processing the read command by initiating the disk head to move to the track containing the read data (called a seek operation). The average seek time for current high-performance SCSI drives is about 5 to 7 milliseconds. This time is derived by measuring the average amount of time it takes to position the head randomly from any track to any other track on the drive. The actual seek time for each operation can be significantly longer or shorter than the average. In practice, the seek time depends upon the distance the disk head must move to reach the track containing the read data.

6. After the seek time elapses and the head reaches its destination track, the head begins to read a servo track (adjacent to the data track). A servo track is used to direct the disk head to accurately follow the minute variations of the data signal encoded within the disk surface. The disk head also begins to read the sector address information to identify the rotational position of the disk surface. This allows the head to know when the requested data is about to rotate underneath the head. The time that elapses between the point when the head settles and is able to read the data track, and the point when the read data arrives, is called the rotational latency.
Most disk drives have a specified average rotational latency, which is half the time it takes to traverse one complete revolution. It is half the rotational time because, on average, the head will have to wait half a revolution to access any block of data on a track. The average rotational latency of a 7200 RPM drive is about 4 milliseconds, whereas the average rotational latency of a 10,000 RPM drive is about 3 milliseconds. The actual latency depends upon the angular distance to the read data when the seek operation completes and the head can begin reading the requested data track.

7. When the read data becomes available to the read head, it is transferred from the head into a buffer contained on the disk drive. Usually this buffer is large enough to contain a complete track of data.
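The average-latency figures in step 6 follow directly from the spindle speed: half of one revolution time. A one-line check of the numbers quoted above:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half of one full revolution."""
    return 0.5 * 60_000.0 / rpm     # 60,000 ms per minute

print(round(avg_rotational_latency_ms(7_200), 2))    # 4.17 -> "about 4 ms"
print(avg_rotational_latency_ms(10_000))             # 3.0  -> "about 3 ms"
```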

8. The disk drive has the ability to be a SCSI bus initiator or a SCSI bus target (similar terminology is used for PCI). Now the controller logic in the disk drive arbitrates to gain access to the SCSI bus as an initiator. When the bus becomes available, the disk drive begins to burst the read data into buffers on the adapter SCSI controller chip. The adapter SCSI controller chip then initiates a DMA (direct memory access) operation to move the read data into a cache buffer in array controller memory.

9. When the transfer of read data into disk array cache memory is complete, the disk controller becomes an initiator and arbitrates to gain access to the PCI bus. Using the destination address that was supplied in the original I/O control block as the target address, the disk array controller performs a PCI data transfer (memory write operation) of the read data into server main memory.

10. When the entire read transfer to server memory has completed, the disk array controller generates an interrupt to communicate completion status to the disk device driver. This interrupt informs the operating system that the read operation has completed.

7.5 RAID summary

Most of us have heard of RAID (redundant array of independent disks) technology. Unfortunately, there is still significant confusion about how RAID actually works and about the performance implications of each RAID strategy. Therefore, this section presents a brief overview of RAID and the performance issues as they relate to commercial server environments. RAID was created by computer scientists at the University of California at Berkeley to address the huge gap between computer I/O requirements and single disk drive latency and throughput. RAID is a collection of techniques that treat multiple, inexpensive disk drives as a unit, with the object of improving performance and/or reliability. IBM and the IT industry have also

introduced more RAID levels to meet industry demand. The following RAID strategies are defined by the Berkeley scientists, IBM and the IT industry:

Table 12. RAID summary

  RAID level   Fault tolerant?   Description
  RAID-0       No                All data evenly distributed across all drives.
  RAID-1       Yes               A mirrored copy of one drive to another drive (2 disks).
  RAID-1E      Yes               Mirrored copies of each drive.
  RAID-3       Yes               Single checksum drive. Bits of data are striped across N-1 drives.
  RAID-4       Yes               Single checksum drive. Blocks of data are striped across N-1 drives.
  RAID-5       Yes               Distributed checksum. Both data and parity are striped across all drives.
  RAID-5E      Yes               Distributed checksum and hot-spare. Data, parity and hot-spare space are striped across all drives.
  RAID-10      Yes               A striped (RAID-0) set of mirrored (RAID-1) arrays.

RAID-3 is useful for scientific applications that require increased byte throughput. It has very poor random access characteristics and is not generally used in commercial applications. RAID-4 uses a single checksum drive that becomes a significant bottleneck in random commercial applications. It is not likely to be used by a significant number of customers because of its slow performance.

The RAID strategies supported by the IBM ServeRAID adapter are:

- RAID-0
- RAID-1
- RAID-1E
- RAID-5
- RAID-5E
- Composite RAID levels, such as RAID-10 and RAID-50

7.5.1 RAID-0

RAID-0 is a technique that stripes data evenly across all disk drives in the array. Strictly, it is not a RAID level, as no redundancy is provided. On average, accesses will be random, thus keeping each drive equally busy.

SCSI has the ability to process multiple, simultaneous I/O requests, and I/O performance is improved because all drives can contribute to system I/O throughput. Since RAID-0 has no fault tolerance, when a single drive fails, the entire array becomes unavailable. RAID-0 offers the fastest performance of any RAID strategy for random commercial workloads. RAID-0 also has the lowest cost of implementation because no redundant drives are required.

Figure 38. RAID-0: All data evenly distributed across all drives, but there is no fault tolerance

7.5.2 RAID-1

RAID-1 provides fault tolerance by mirroring one drive to another drive. The mirror drive ensures access to data should a drive fail. RAID-1 also has good I/O throughput performance compared to single-drive configurations because read operations can be performed on any data record on any drive contained within the array. Most array controllers (including the ServeRAID family) do not attempt to optimize read latency by issuing the same read request to both drives in the mirrored pair. Instead, the drive in the pair that is least busy is issued the read command, leaving the other drive free to perform another read operation. This technique ensures maximum read throughput. Write performance is somewhat reduced because both drives in the mirrored pair must complete the write operation; that is, two physical write operations must occur for each write command generated by the operating system.
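The least-busy read scheduling described for mirrored pairs can be sketched in a few lines. This is a simplification of what a controller such as ServeRAID does internally, with invented queue depths:

```python
def pick_mirror(queue_depths):
    """Send the read to whichever drive of the mirrored pair is least busy
    (fewest outstanding commands); ties go to the first drive."""
    return min(range(len(queue_depths)), key=lambda d: queue_depths[d])

# Outstanding commands per drive in the pair (invented numbers)
assert pick_mirror([3, 1]) == 1   # drive 1 is the less busy, so it reads
assert pick_mirror([0, 0]) == 0   # tie: either would do; we take drive 0
```

The design point is throughput, not latency: issuing the same read to both drives would return the data marginally sooner but would waste the second spindle, which could instead be servicing a different request.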

RAID-1 offers significantly better I/O throughput performance than RAID-5. However, RAID-1 is somewhat slower than RAID-0.

Figure 39. RAID-1: Fault-tolerant. A mirrored copy of one drive to another drive

7.5.3 RAID-1E

RAID-1 Enhanced (referred to as RAID-1E throughout the rest of this document) is implemented only by the IBM ServeRAID adapter and allows a RAID-1 array to consist of three or more disk drives; regular RAID-1 consists of exactly two drives. The data stripe is spread across all disks in the array to maximize the number of spindles involved in an I/O request and so achieve maximum performance. RAID-1E is also called mirrored stripe, as a complete stripe of data is mirrored to another stripe within the set of disks. Like RAID-1, only half of the total disk space is usable; the other half is used by the mirror.

Figure 40. RAID-1E: Mirrored copies of each drive

Because you can have more than two drives (up to 16), RAID-1E will outperform RAID-1. The only situation where RAID-1 will perform better than RAID-1E is the reading of sequential data. When RAID-1E reads sequential data, the data is striped across multiple drives: because RAID-1E interleaves data on different drives, seek operations occur more frequently during sequential I/O. In RAID-1, data is not interleaved, so fewer seek operations occur for sequential I/O.

7.5.4 RAID-5

RAID-5 offers an optimal balance between price and performance for most commercial server workloads. RAID-5 provides single-drive fault tolerance by implementing a technique called single equation, single unknown: if any single term in an equation is unknown, the equation can be solved for exactly one solution. The RAID-5 controller calculates a checksum (the parity stripe in Figure 41) using a logic function known as an exclusive-OR (XOR) operation. The checksum is the XOR of all data elements in a row. The XOR can be performed quickly by the RAID controller hardware and is used to solve for the unknown data element. In Figure 41, addition is used instead of XOR to illustrate the technique: stripe 1 + stripe 2 + stripe 3 = parity stripe 1-3. Should drive one fail, stripe 1 becomes unknown and the equation becomes X + stripe 2 + stripe 3 = parity stripe 1-3. The controller solves for X and returns stripe 1 as the result.

A significant benefit of RAID-5 is the low cost of implementation, especially for configurations requiring a large number of disk drives: to achieve fault tolerance, only one additional disk is required. The checksum information is evenly distributed over all drives, and checksum update operations are evenly balanced within the array.

Figure 41. RAID-5: Both data and parity are striped across all drives

However, RAID-5 yields lower I/O throughput than RAID-0 and RAID-1. This is due to the additional checksum calculation and write operations required. In general, I/O throughput with RAID-5 is 30-50% lower than with RAID-1. (The actual result depends upon the percentage of write operations: a workload with a greater percentage of write requests generally has lower RAID-5 throughput.) RAID-5 provides I/O throughput performance similar to RAID-0 when the workload does not require write operations (read only).
For more information on RAID-5 performance, see 7.6, ServeRAID RAID-5 algorithms on page 136.
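The "single equation, single unknown" recovery described above works the same way with the real XOR operation as with the addition used in Figure 41: the parity row XORed with the surviving stripes yields the missing one. A minimal sketch with invented stripe contents:

```python
# Parity is the XOR of all data stripes in a row (stripe contents invented).
stripe1, stripe2, stripe3 = b"\x0fAB", b"\x33BC", b"\x55CD"
parity = bytes(a ^ b ^ c for a, b, c in zip(stripe1, stripe2, stripe3))

# The drive holding stripe1 fails: XOR parity with the survivors to solve
# for the single unknown, exactly as X is solved for in the text.
recovered = bytes(p ^ b ^ c for p, b, c in zip(parity, stripe2, stripe3))
assert recovered == stripe1
```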

7.5.5 RAID-5E

RAID-5E was invented by IBM Research and is a technique that distributes the hot-spare drive space over the N+1 drives comprising the RAID-5 array plus the standard hot-spare drive. It was first implemented in ServeRAID firmware V3.5.

Figure 42. RAID-5E: The hot spare is integrated into all disks, instead of being a separate disk

Adding a hot-spare drive to a server protects data by reducing the time spent in the critical state. However, this technique does not make maximum use of the hot-spare drive, because it sits idle until a failure occurs; often many years can elapse before the hot-spare drive is ever used. IBM invented a method to utilize the hot-spare drive to increase performance of the RAID-5 array during typical processing while preserving the hot-spare recovery technique. This method of incorporating the hot spare into the RAID array is called RAID-5E.

RAID-5E is designed to increase the normal operating performance of a RAID-5 array in two ways:

- The hot-spare drive contains data that can be accessed during normal operation. The RAID-5 array now has an extra drive to contribute to the throughput of read and write operations. Standard 10,000 RPM drives can perform more than 100 I/O operations per second, so the RAID-5 array throughput is increased with this extra I/O capability.
- The data in RAID-5E is distributed over N+1 drives instead of the N used by RAID-5. As a result, the data occupies fewer tracks on each drive. This has the effect of physically utilizing less space on each drive, keeping head movement more localized and reducing seek times.

Together, these improvements yield a typical system-level performance gain of about 10-20%. Another benefit of RAID-5E is the faster rebuild time needed to reconstruct a failed drive.
In a standard RAID-5 hot-spare configuration, the rebuild of a failed drive requires serialized write operations to the single hot-spare drive. Using RAID-5E, the hot-spare drive space is evenly distributed across all

drives, so the rebuild operations are evenly distributed to all remaining drives in the array. Rebuild times with RAID-5E can be dramatically faster than rebuild times using a standard hot-spare configuration.

The only downside of RAID-5E is that the hot-spare drive cannot be shared across multiple physical arrays, as can be done with standard RAID-5 plus hot-spare. The standard RAID-5 technique is more cost efficient for multiple arrays because it allows a single hot-spare drive to provide coverage for multiple physical arrays. This reduces the cost of using a hot-spare drive, but the sacrifice is the inability to handle separate drive failures within different arrays. IBM ServeRAID adapters offer increased flexibility by providing the choice to use either standard RAID-5 with hot-spare or the newer integrated hot-spare provided with RAID-5E.

While RAID-5E provides a performance improvement for most operating environments, there is a special case where its performance can be slower than RAID-5. Consider a three-drive RAID-5 with hot-spare configuration as shown in Figure 43. This configuration employs a total of four drives, but the hot-spare drive is idle, so for a performance comparison it can be ignored. A four-drive RAID-5E configuration would have data and checksum on four separate drives.

Figure 43. Writing a 16 KB block to a RAID-5 array with an 8 KB stripe size

Referring to Figure 43, whenever a write operation is issued to the controller that is two times the stripe size (for example, a 16 KB I/O request to an array with an 8 KB stripe size), a three-drive RAID-5 configuration would not require any reads, because the write operation would contain all the data needed for each of the two drives. The checksum would be generated by the

array controller (step 2) and immediately written to the corresponding drive (step 4) without the need to read any existing data or checksum. This entire series of events requires two data writes, one to each of the drives storing the data stripe (step 3), and one write to the drive storing the checksum (step 4), for a total of three write operations.

Contrast these events with the operation of a comparable RAID-5E array, which contains four drives, as shown in Figure 44. In this case, in order to calculate the checksum, a read must be performed of the data stripe on the extra drive (step 2). This extra read was not performed with the three-drive RAID-5 configuration, and it slows the RAID-5E array for write operations that are twice the stripe size.

Figure 44. Writing a 16 KB block to a RAID-5E array with an 8 KB stripe size

This problem with RAID-5E can be avoided with proper stripe size selection. By monitoring the average I/O size in bytes, or knowing the I/O size generated by the application, a large enough stripe size can be selected so that this performance degradation rarely occurs.

7.5.6 Composite RAID levels

The ServeRAID-4 adapter family supports composite RAID levels. This means that it supports RAID arrays that are joined together to form larger RAID arrays. For example, RAID-10 is the result of forming a RAID-0 array from two or more RAID-1 arrays. With four SCSI channels each supporting 15 drives, this

means you can theoretically have up to 60 drives in one array. With the EXP200 the limit is 40 disks, and with the EXP300 the limit is 56 disks. A ServeRAID RAID-10 array is shown in Figure 45.

Figure 45. RAID-10: A striped set of RAID-1 arrays

Likewise, a striped set of RAID-5 arrays is shown in Figure 46.

Figure 46. RAID-50: A striped set of RAID-5 arrays
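A composite level simply layers the two mappings: RAID-10 stripes (RAID-0) across mirrored pairs (RAID-1). A simplified sketch of the address mapping, assuming a plain round-robin layout (the actual ServeRAID layout details may differ):

```python
def raid10_locate(lba, stripe_blocks, n_pairs):
    """Map a logical block to (the mirrored pair of drives, block on drive).
    Striping picks the pair; mirroring puts the block on both drives."""
    stripe_no, offset = divmod(lba, stripe_blocks)
    pair = stripe_no % n_pairs                        # RAID-0 step
    block = (stripe_no // n_pairs) * stripe_blocks + offset
    drives = (2 * pair, 2 * pair + 1)                 # RAID-1 step: drive + mirror
    return drives, block

# Six drives arranged as three mirrored pairs, 8-block stripes
assert raid10_locate(0, 8, 3) == ((0, 1), 0)    # stripe 0 -> pair 0
assert raid10_locate(8, 8, 3) == ((2, 3), 0)    # stripe 1 -> pair 1
assert raid10_locate(24, 8, 3) == ((0, 1), 8)   # stripe 3 wraps to pair 0
```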

The ServeRAID-4 family supports the following combinations:

Table 13. Composite RAID levels supported by ServeRAID-4 adapters

  RAID level   The sub-logical array is   and the spanned array is
  RAID-00      RAID-0                     RAID-0
  RAID-10      RAID-1                     RAID-0
  RAID-1E0     RAID-1E                    RAID-0
  RAID-50      RAID-5                     RAID-0

Table 14 shows a summary of the characteristics of the RAID levels commonly used in array controllers.

Table 14. Summary of RAID characteristics

  RAID level    Data capacity (1)   Data availability, with / without hot spare
  Single disk   n                   Not applicable
  RAID-0        n                   Not applicable
  RAID-1        n/2
  RAID-1E       n/2
  RAID-5        n-1
  RAID-5E       n-2                 N/A (the hot spare is integrated)
  RAID-10       n/2

Notes:
1. In the data capacity column, n refers to the number of equally sized disks in the array.
2. For performance ratings, 1 = worst and higher values are better; compare values only within each column, as comparison between columns is not valid.
3. With the write-back setting enabled.

7.6 ServeRAID RAID-5 algorithms

The IBM ServeRAID adapter uses one of two algorithms for the calculation of RAID-5 parity. These algorithms ensure the best performance of RAID-5 write operations in array configurations, regardless of the number of drives in the array:

Use read/modify write for RAID-5 arrays of five drives or more.
Use full XOR for RAID-5 arrays of three or four drives.

This section compares these two algorithms.

7.6.1 Read/modify write algorithm

The read/modify write algorithm is optimized for configurations that use more than four drives. It is described in Figure 47 and always requires four disk operations for each write command, regardless of the number of drives in the RAID-5 array. As per Figure 47, the steps that occur are:

1. Read the old data (data1)
2. Read the old checksum (CS6)
3. Calculate the new checksum from the old data, the new data, and the old checksum
4. Write the new data (data4)
5. Write the new checksum (CS9)

Figure 47. Read/modify write algorithm: four I/O operations for every write command (command: update data1 to data4)

Regardless of the number of drives, the read/modify write algorithm always requires four I/O operations per write command: two reads and two writes. The algorithm is called read/modify write because it reads the checksum, modifies the checksum, then writes the checksum.
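The parity arithmetic behind steps 1-5 can be sketched in a few lines. This is an illustration of the XOR math only, not adapter firmware; `rmw_parity` is a hypothetical helper name.

```python
# Sketch of the read/modify write parity update. XOR is done bytewise;
# "checksum" here is the RAID-5 parity block.

def rmw_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """New parity = old parity XOR old data XOR new data.

    Only the changed data block and the parity block are read (steps 1-2);
    the other data drives are never touched, so the cost stays fixed at
    two reads plus two writes no matter how wide the array is.
    """
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Verify against a full XOR over all members of a 5-drive array (4 data + parity).
data = [bytes([i] * 4) for i in (1, 2, 3, 4)]
parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*data))
new0 = bytes([9] * 4)
new_parity = rmw_parity(data[0], new0, parity)
full = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(new0, data[1], data[2], data[3]))
assert new_parity == full  # both routes give the same parity
```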

7.6.2 Full XOR algorithm

A different method can be used to generate RAID-5 checksum information for a write operation that modifies data1 to data4. This method is called the full exclusive-OR algorithm (full XOR algorithm). It involves disk read operations of data2 and data3. The full XOR algorithm then creates a new checksum from data4 + data2 + data3, writes the modified data (data4), and overwrites the old checksum (CS6) with the new checksum (CS9). The following operations (as per Figure 48) show the steps involved:

1. Read data2
2. Read data3
3. Calculate the new checksum (CS9) from the new data (data4), data2 and data3
4. Write data4
5. Write the new checksum (CS9)

Figure 48. Full XOR algorithm (command: update data1 to data4)

In this case, four disk operations are performed: two reads and two writes. If the number of disks in the array increases, the number of read operations also increases:

Five disks: five I/O operations (three reads and two writes)

Six disks: six I/O operations (four reads and two writes)
n disks: n I/O operations (n-2 reads and two writes)

The extra read operations required by this algorithm cause the performance of write commands to degrade as the number of drives increases. The algorithm is called full XOR because of the way the checksum is calculated: the checksum is computed from all the data and then written to disk; the original checksum is not used in the calculation.

However, for three disks, only three I/O operations are required: one read and two writes. Thus the following conclusions can be reached:

For 3-drive RAID-5 arrays, full XOR is faster.
For 4-drive RAID-5 arrays, the algorithms are the same.
For 5+ drive RAID-5 arrays, read/modify write is faster.

For a four-drive configuration, the full XOR algorithm requires the same number of disk operations as the read/modify write algorithm. A RAID-5 configuration using five drives requires four disk operations with read/modify write, but five with full XOR, and the gap widens with each drive added to the array.

To take advantage of this, Version 2.3 of the ServeRAID firmware introduced a technique that uses the better of these two algorithms depending on the number of drives in the array: full XOR when the adapter is configured with three or four drives in a RAID-5 array, and read/modify write when it is configured with five or more drives.

7.6.3 Sequential write commands

The behavior of these two algorithms also affects sequential write commands.
When the ServeRAID adapter is configured for RAID-5 and the server I/O consists of sequential write operations (for example, when copying files to the server or when building a database), additional performance benefits can be achieved by using the full XOR algorithm together with a write-back cache policy. (The benefits of write-back caching are discussed in 7.7.7, Disk cache write-back versus write-through, on page 153.)
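The disk-operation counts given in 7.6.1 and 7.6.2, and the firmware's choice between the two algorithms, can be modeled with a short sketch. The function names are illustrative assumptions, not actual ServeRAID firmware logic.

```python
# Sketch of the I/O-operation counts behind the firmware's algorithm choice.

def rmw_ops(n_drives: int) -> int:
    # Read old data + old parity, write new data + new parity: always 4.
    return 4

def full_xor_ops(n_drives: int) -> int:
    # Read the n - 2 unchanged data drives, write new data + new parity.
    return (n_drives - 2) + 2

def pick_algorithm(n_drives: int) -> str:
    # The V2.3 firmware heuristic described in the text: full XOR for
    # 3- or 4-drive RAID-5 arrays, read/modify write for 5 or more.
    return "full XOR" if n_drives <= 4 else "read/modify write"

for n in (3, 4, 5, 6):
    print(n, rmw_ops(n), full_xor_ops(n), pick_algorithm(n))
# At 3 drives full XOR needs 3 operations versus 4; at 4 drives they tie;
# from 5 drives up, read/modify write's constant 4 operations wins.
```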

The ServeRAID firmware V2.7 has the intelligence to detect this type of I/O and switches to full XOR. Each data element (data1, data2, data3) and the checksum are then held in the ServeRAID adapter cache after the first operation updates data1 to data4. In write-back mode, the updates to data2 and data3 and the successive updates to the checksum can all be accomplished in cache memory. After the entire group of stripe elements has been sequentially updated in cache memory, only three disk operations are required to store the updated data2, data3, and checksum information on disk. This feature of the ServeRAID can improve database load times in RAID-5 mode by up to eight times over earlier ServeRAID firmware levels.

7.7 Factors affecting disk array controller performance

Many factors affect array controller performance. The most important considerations (in order of importance) for configuring the IBM ServeRAID adapter are:

RAID strategy
Number of drives
Drive performance
Logical drive configuration
Stripe size
SCSI bus organization and speed
Disk cache write-back versus write-through
RAID adapter cache size
Device drivers
Firmware

7.7.1 RAID strategy

Your RAID strategy should be carefully selected because it significantly affects disk subsystem performance. Figure 49 illustrates the performance differences between RAID-0, RAID-1E and RAID-5 for a server configured with 10,000 RPM Fast/Wide SCSI-2 drives and the IBM ServeRAID-3HB adapter with v3.6 code. The chart shows the RAID-0 configuration delivering about 97% greater throughput than RAID-5 and 35% greater throughput than RAID-1E. RAID-0 has no fault tolerance and is therefore best utilized for read-only environments where downtime for possible backup recovery is acceptable. RAID-1E or RAID-5 should be selected for applications requiring fault

tolerance. RAID-1E is usually selected when the number of drives is low (fewer than six) and the price of purchasing additional drives is acceptable. RAID-1E offers about 45% more throughput than RAID-5. These performance considerations should be understood before selecting a fault-tolerant RAID strategy.

Figure 49. Comparing RAID levels (I/O operations per second versus number of drives for RAID-0, RAID-1E and RAID-5; configuration: Windows NT Server 4.0, ServeRAID-3HB firmware/driver v3.6, maximum number of drives, 10,000 RPM, 8 KB I/O size, random I/O mix 67/33 read/write)

In many cases, RAID-5 is the best choice because it provides the best price/performance combination for configurations requiring five or more disk drives. RAID-5 performance approaches RAID-0 performance for workloads where the frequency of write operations is low. Servers executing applications that require fast read access to data and high availability in the event of a drive failure should employ RAID-5. For more information about RAID-5 performance, see 7.6, ServeRAID RAID-5 algorithms, on page 136.

7.7.2 Number of drives

The number of disk drives significantly affects performance because each drive contributes to total system throughput. Capacity requirements are often the only consideration used to determine the number of disk drives configured in a server. Throughput requirements are usually not well understood or are completely ignored. Capacity is used because it is easily estimated and is often the only information available.

The result is a server configured with sufficient disk space, but insufficient disk performance to keep users working efficiently. High-capacity drives have the lowest price per byte of available storage and are usually selected to reduce total system price. This often results in disappointing performance, particularly if the total number of drives is insufficient.

It is difficult to accurately specify server application throughput requirements when attempting to determine the disk subsystem configuration. Disk subsystem throughput measurements are complex. Expressing a user requirement in terms of bytes per second would be meaningless because the disk subsystem's byte throughput changes as the database grows and becomes fragmented, and as new applications are added.

The best way to understand disk I/O and users' throughput requirements is to monitor an existing server. Tools such as the Windows 2000 Performance console can be used to examine the logical drive queue depth and disk transfer rate (as described in Chapter 11, Windows 2000 Performance console, on page 221). Logical drives that have an average queue depth much greater than the number of drives in the array are very busy. This indicates that performance would be improved by adding drives to the array.

Adding drives

In general, adding drives is one of the most effective changes that can be made to improve server performance. Measurements show that server throughput for most server application workloads increases as the number of drives configured in the server is increased. As the number of drives is increased, performance is usually improved for all RAID strategies. Server throughput continues to increase each time drives are added to the server. This can be seen in Figure 50.

Figure 50. Improving performance by adding drives to arrays (transactions per second for RAID-0 arrays of 4, 6 and 8 drives; configuration: Windows NT 4.0, SQL Server 6.5, ServeRAID II, 4.5 GB 7200 RPM drives)

This trend will continue until another server component becomes the bottleneck. In general, most servers are configured with an insufficient number of disk drives, so performance increases as drives are added. Similar gains can be expected for all I/O-intensive server applications such as office-application file serving, Lotus Notes, Oracle, DB2 and Microsoft SQL Server.

Rule of thumb

For most server workloads, when the number of drives in the active logical array is doubled, server throughput will improve by about 50% until other bottlenecks occur.

If you are using one of the IBM ServeRAID family of RAID adapters, you can use the logical drive migration feature to add drives to existing arrays without disrupting users or losing data.

7.7.3 Drive performance

Drive performance contributes to overall server throughput because faster drives perform disk I/O in less time. There are four major components to the time it takes a disk drive to execute and complete a user request:

Command overhead
This is the time it takes for the drive's electronics to process the I/O request. The time depends on whether it is a read or write request and whether the command can be satisfied from the drive's buffer. This value is of the order of 0.1 ms for a buffer hit to 0.5 ms for a buffer miss.

Seek time
This is the time it takes to move the drive head from its current cylinder location to the target cylinder. As the radius of the platters has decreased, and drive components have become smaller and lighter, seek times have also been decreasing. Average seek time is usually 5-7 ms for most current SCSI-2 drives used in servers today.

Rotational latency
Once the head is at the target cylinder, the time it takes for the target sector to rotate under the head is called the rotational latency. Average latency is half the time it takes the drive to complete one rotation, so it is inversely proportional to the RPM value of the drive:
- 5400 RPM drives have a 5.6 ms latency
- 7200 RPM drives have a 4.2 ms latency
- 10,000 RPM drives have a 3.0 ms latency

Data transfer time
This value depends on the media data rate, which is how fast data can be transferred from the magnetic recording media, and the interface data rate, which is how fast data can be transferred between the disk drive and disk controller (that is, the SCSI transfer rate). The media data rate improves as a result of greater recording density and faster rotational speeds; a typical value is 0.8 ms. The interface data rate for SCSI-2 F/W is 20 MBps. With 4 KB I/O transfers (typical for Windows NT Server and Windows 2000), the interface data transfer time is 0.2 ms. Hence the data transfer time is approximately 1 ms.

As you can see, the values that most affect performance are the seek time and the rotational latency. For random I/O (which is normal for a multi-user server) this is true, and seek times will continue to improve as drive components become smaller and lighter.
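These four components add up to the average service time for a single random I/O. The following sketch combines the typical values quoted above; the default overhead and transfer figures are assumptions taken from the text, not measurements of any particular drive.

```python
# Rough service-time model built from the four components above.

def avg_service_time_ms(rpm: int, seek_ms: float,
                        overhead_ms: float = 0.5,
                        transfer_ms: float = 1.0) -> float:
    """Average time to satisfy one random I/O request, in milliseconds."""
    # Average rotational latency is half of one full revolution.
    latency_ms = 0.5 * 60_000 / rpm
    return overhead_ms + seek_ms + latency_ms + transfer_ms

# The latency term alone reproduces the figures in the text.
for rpm in (5400, 7200, 10_000):
    print(rpm, round(0.5 * 60_000 / rpm, 1))  # 5.6, 4.2, 3.0 ms

# A 10,000 RPM drive with a 5 ms average seek: 0.5 + 5 + 3 + 1 ms.
print(avg_service_time_ms(10_000, seek_ms=5.0))  # 9.5
```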
For sequential I/O (such as servers with small numbers of users requesting large amounts of data) or for I/O requests of large block sizes (for example, 64 KB), the data transfer time does become important when compared to seek and latency, so the use of Ultra SCSI, Ultra2 SCSI or Ultra3 SCSI can have a significant positive effect on overall subsystem performance.

Likewise, when caching and read-ahead are employed on the drives themselves, the time taken to perform the seek and rotation is eliminated, so the data transfer time becomes very significant.

The easiest way to improve disk performance is to increase the number of accesses that can be made simultaneously. This is achieved by using many drives in a RAID array and spreading the data requests across all drives, as described in 7.7.2, Number of drives, on page 141.

Table 15 shows the seek and latency values and buffer sizes for three of IBM's high-end drives.

Table 15. Comparing 7200 RPM and 10,000 RPM drives

Disk             Capacity    RPM      Seek      Latency    Buffer size    Media data rate (MBps)
Ultrastar 36LP   18.3 GB     7200     ...       4.17 ms    4 MB           ...
Ultrastar 36LZ   18.3 GB     10,000   4.9 ms    2.99 ms    4 MB           ...
Ultrastar 72ZX   73.4 GB     10,000   5.3 ms    2.99 ms    16 MB          ...

7.7.4 Logical drive configuration

Using multiple logical drives on a single physical array is convenient for managing the location of different file types. However, depending on the configuration, it can significantly reduce server performance. When you use multiple logical drives, you are physically spreading the data across different sections of the array's disks. If I/O is performed to each of the logical drives, the disk heads have to seek farther across the disk surface than when the data is stored on one logical drive. Using multiple logical drives greatly increases seek time and can slow performance by as much as 25%.

An example of this is creating two logical drives in one RAID array and putting a database on one logical drive and the transaction log on the other. Because heavy I/O is performed on both, performance will be poor. If the two logical drives are instead configured with the operating system on one and data on the other, there should be little I/O to the operating system code once the server has booted, so that type of configuration is acceptable. It is best to put the page file on the same drive as the data when using one large physical array.
This is counterintuitive: most people think the page file should be put on the operating system drive, since the operating system will not see much I/O during runtime. However, that placement causes long seek operations as the

head swings over the two partitions. Putting the data and page file on the data array keeps the I/O localized and reduces seek time. Of course, this is not the most optimal case, especially for applications with heavy paging. Ideally, the page file will be on a separate device that can be formatted with the correct stripe size to match paging. In general, most applications will not page when given sufficient RAM, so usually this is not a problem.

The fastest configuration is a single logical drive for each physical RAID array. Instead of using logical drives to manage files, you should create directories and store each type of file in a different directory. This will significantly improve disk performance by reducing seek times, because the data will be as physically close together as possible.

If you really want or need to partition your data and you have a sufficient number of disks, you should configure multiple RAID arrays instead of multiple logical drives in one RAID array. This will improve disk performance; seek time will be reduced because the data will be physically closer together on each drive.

Note: If you plan to use RAID-5E arrays, you can only have one logical drive per array.

7.7.5 Stripe size

With RAID technology, data is striped across an array of hard disk drives. Striping is the process of storing data across all the disk drives that are grouped in an array. The granularity at which data from one file is stored on one drive of the array before subsequent data is stored on the next drive of the array is called the stripe unit (also referred to as interleave depth). For the ServeRAID adapter family, the stripe unit can be set to 8 KB, 16 KB, 32 KB, or 64 KB. With Netfinity Fibre Channel, a stripe unit is called a segment, and segment sizes can also be 8 KB, 16 KB, 32 KB, or 64 KB. The collection of these stripe units, from the first drive of the array to the last drive of the array, is called a stripe.
The stripe and stripe unit are shown in Figure 51.

146 Tuning Netfinity Servers for Performance: Getting the most out of Windows 2000 and Windows NT 4.0
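The mapping from a byte offset to a drive and stripe can be sketched as follows. This is illustrative only: it assumes simple round-robin placement and ignores where the parity stripe unit rotates in a RAID-5 layout.

```python
# Sketch: which drive and stripe a byte offset falls in, for a given
# stripe unit size and array width.

def locate(offset: int, stripe_unit: int, num_drives: int):
    unit = offset // stripe_unit      # which stripe unit overall
    drive = unit % num_drives         # units go round-robin across drives
    stripe = unit // num_drives       # one stripe = one unit per drive
    return drive, stripe

# 8 KB stripe unit on a 4-drive array: the first 32 KB fills stripe 0.
print(locate(0, 8192, 4))        # (0, 0)
print(locate(24_576, 8192, 4))   # (3, 0) - last unit of the first stripe
print(locate(32_768, 8192, 4))   # (0, 1) - wraps to the next stripe
```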


More information

Assessing RAID ADG vs. RAID 5 vs. RAID 1+0

Assessing RAID ADG vs. RAID 5 vs. RAID 1+0 White Paper October 2001 Prepared by Industry Standard Storage Group Compaq Computer Corporation Contents Overview...3 Defining RAID levels...3 Evaluating RAID levels...3 Choosing a RAID level...4 Assessing

More information

Storage Technologies for Video Surveillance

Storage Technologies for Video Surveillance The surveillance industry continues to transition from analog to digital. This transition is taking place on two fronts how the images are captured and how they are stored. The way surveillance images

More information

Click on the diagram to see RAID 0 in action

Click on the diagram to see RAID 0 in action Click on the diagram to see RAID 0 in action RAID Level 0 requires a minimum of 2 drives to implement RAID 0 implements a striped disk array, the data is broken down into blocks and each block is written

More information

SCSI vs. Fibre Channel White Paper

SCSI vs. Fibre Channel White Paper SCSI vs. Fibre Channel White Paper 08/27/99 SCSI vs. Fibre Channel Over the past decades, computer s industry has seen radical change in key components. Limitations in speed, bandwidth, and distance have

More information

UK HQ RAID Chunk Size T F www.xyratex.com ISO 14001

UK HQ RAID Chunk Size T F www.xyratex.com ISO 14001 RAID Chunk Size Notices The information in this document is subject to change without notice. While every effort has been made to ensure that all information in this document is accurate, Xyratex accepts

More information

Intel RAID Software User s Guide:

Intel RAID Software User s Guide: Intel RAID Software User s Guide: Intel Embedded Server RAID Technology II Intel Integrated Server RAID Intel RAID Controllers using the Intel RAID Software Stack 3 Revision 9.0 December, 2008 Intel Order

More information

Communicating with devices

Communicating with devices Introduction to I/O Where does the data for our CPU and memory come from or go to? Computers communicate with the outside world via I/O devices. Input devices supply computers with data to operate on.

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

Optimizing LTO Backup Performance

Optimizing LTO Backup Performance Optimizing LTO Backup Performance July 19, 2011 Written by: Ash McCarty Contributors: Cedrick Burton Bob Dawson Vang Nguyen Richard Snook Table of Contents 1.0 Introduction... 3 2.0 Host System Configuration...

More information

HARDWARE GUIDE. MegaRAID SCSI 320-2 RAID Controller

HARDWARE GUIDE. MegaRAID SCSI 320-2 RAID Controller HARDWARE GUIDE MegaRAID SCSI 320-2 RAID Controller November 2002 This document contains proprietary information of LSI Logic Corporation. The information contained herein is not to be used by or disclosed

More information

RAID Performance Analysis

RAID Performance Analysis RAID Performance Analysis We have six 500 GB disks with 8 ms average seek time. They rotate at 7200 RPM and have a transfer rate of 20 MB/sec. The minimum unit of transfer to each disk is a 512 byte sector.

More information

QuickSpecs. HP Smart Array 5312 Controller. Overview

QuickSpecs. HP Smart Array 5312 Controller. Overview Overview Models 238633-B21 238633-291 (Japan) Feature List: High Performance PCI-X Architecture High Capacity Two Ultra 3 SCSI channels support up to 28 drives Modular battery-backed cache design 128 MB

More information

Distribution One Server Requirements

Distribution One Server Requirements Distribution One Server Requirements Introduction Welcome to the Hardware Configuration Guide. The goal of this guide is to provide a practical approach to sizing your Distribution One application and

More information

COMPUTER HARDWARE. Input- Output and Communication Memory Systems

COMPUTER HARDWARE. Input- Output and Communication Memory Systems COMPUTER HARDWARE Input- Output and Communication Memory Systems Computer I/O I/O devices commonly found in Computer systems Keyboards Displays Printers Magnetic Drives Compact disk read only memory (CD-ROM)

More information

Technical White Paper. Symantec Backup Exec 10d System Sizing. Best Practices For Optimizing Performance of the Continuous Protection Server

Technical White Paper. Symantec Backup Exec 10d System Sizing. Best Practices For Optimizing Performance of the Continuous Protection Server Symantec Backup Exec 10d System Sizing Best Practices For Optimizing Performance of the Continuous Protection Server Table of Contents Table of Contents...2 Executive Summary...3 System Sizing and Performance

More information

Maximizing Server Storage Performance with PCI Express and Serial Attached SCSI. Article for InfoStor November 2003 Paul Griffith Adaptec, Inc.

Maximizing Server Storage Performance with PCI Express and Serial Attached SCSI. Article for InfoStor November 2003 Paul Griffith Adaptec, Inc. Filename: SAS - PCI Express Bandwidth - Infostor v5.doc Maximizing Server Storage Performance with PCI Express and Serial Attached SCSI Article for InfoStor November 2003 Paul Griffith Adaptec, Inc. Server

More information

Chapter 13 Selected Storage Systems and Interface

Chapter 13 Selected Storage Systems and Interface Chapter 13 Selected Storage Systems and Interface Chapter 13 Objectives Appreciate the role of enterprise storage as a distinct architectural entity. Expand upon basic I/O concepts to include storage protocols.

More information

Chapter 9: Peripheral Devices: Magnetic Disks

Chapter 9: Peripheral Devices: Magnetic Disks Chapter 9: Peripheral Devices: Magnetic Disks Basic Disk Operation Performance Parameters and History of Improvement Example disks RAID (Redundant Arrays of Inexpensive Disks) Improving Reliability Improving

More information

Secondary Storage Devices 4

Secondary Storage Devices 4 4 Secondary Storage Devices Copyright 2004, Binnur Kurt Content Secondary storage devices Organization of disks Organizing tracks by sector Organizing tracks by blocks Non-data overhead The cost of a disk

More information

Understanding and Installing Hard Drives Chapter #8

Understanding and Installing Hard Drives Chapter #8 Understanding and Installing Hard Drives Chapter #8 Amy Hissom Key Terms 80 conductor IDE cable An IDE cable that has 40 pins but uses 80 wires, 40 of which are ground wires designed to reduce crosstalk

More information

HARDWARE GUIDE. MegaRAID SCSI 320-1 RAID Controller

HARDWARE GUIDE. MegaRAID SCSI 320-1 RAID Controller HARDWARE GUIDE MegaRAID SCSI 320-1 RAID Controller September 2002 This document contains proprietary information of LSI Logic Corporation. The information contained herein is not to be used by or disclosed

More information

StorTrends RAID Considerations

StorTrends RAID Considerations StorTrends RAID Considerations MAN-RAID 04/29/2011 Copyright 1985-2011 American Megatrends, Inc. All rights reserved. American Megatrends, Inc. 5555 Oakbrook Parkway, Building 200 Norcross, GA 30093 Revision

More information

COS 318: Operating Systems. Storage Devices. Kai Li Computer Science Department Princeton University. (http://www.cs.princeton.edu/courses/cos318/)

COS 318: Operating Systems. Storage Devices. Kai Li Computer Science Department Princeton University. (http://www.cs.princeton.edu/courses/cos318/) COS 318: Operating Systems Storage Devices Kai Li Computer Science Department Princeton University (http://www.cs.princeton.edu/courses/cos318/) Today s Topics Magnetic disks Magnetic disk performance

More information

Redundant Array of Independent Disks (RAID) Technology Overview

Redundant Array of Independent Disks (RAID) Technology Overview Redundant Array of Independent Disks (RAID) Technology Overview What is RAID? The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or

More information

Using RAID6 for Advanced Data Protection

Using RAID6 for Advanced Data Protection Using RAI6 for Advanced ata Protection 2006 Infortrend Corporation. All rights reserved. Table of Contents The Challenge of Fault Tolerance... 3 A Compelling Technology: RAI6... 3 Parity... 4 Why Use RAI6...

More information

The Bus (PCI and PCI-Express)

The Bus (PCI and PCI-Express) 4 Jan, 2008 The Bus (PCI and PCI-Express) The CPU, memory, disks, and all the other devices in a computer have to be able to communicate and exchange data. The technology that connects them is called the

More information

HARDWARE GUIDE. MegaRAID SCSI 320-0 Zero-Channel RAID Controller

HARDWARE GUIDE. MegaRAID SCSI 320-0 Zero-Channel RAID Controller HARDWARE GUIDE MegaRAID SCSI 320-0 Zero-Channel RAID Controller September 2002 This document contains proprietary information of LSI Logic Corporation. The information contained herein is not to be used

More information

PIONEER RESEARCH & DEVELOPMENT GROUP

PIONEER RESEARCH & DEVELOPMENT GROUP SURVEY ON RAID Aishwarya Airen 1, Aarsh Pandit 2, Anshul Sogani 3 1,2,3 A.I.T.R, Indore. Abstract RAID stands for Redundant Array of Independent Disk that is a concept which provides an efficient way for

More information

COMP303 Computer Architecture Lecture 17. Storage

COMP303 Computer Architecture Lecture 17. Storage COMP303 Computer Architecture Lecture 17 Storage Review: Major Components of a Computer Processor Devices Control Memory Output Datapath Input Secondary Memory (Disk) Main Memory Cache Magnetic Disk Purpose

More information

Introduction Disks RAID Tertiary storage. Mass Storage. CMSC 412, University of Maryland. Guest lecturer: David Hovemeyer.

Introduction Disks RAID Tertiary storage. Mass Storage. CMSC 412, University of Maryland. Guest lecturer: David Hovemeyer. Guest lecturer: David Hovemeyer November 15, 2004 The memory hierarchy Red = Level Access time Capacity Features Registers nanoseconds 100s of bytes fixed Cache nanoseconds 1-2 MB fixed RAM nanoseconds

More information

Introduction. What is RAID? The Array and RAID Controller Concept. Click here to print this article. Re-Printed From SLCentral

Introduction. What is RAID? The Array and RAID Controller Concept. Click here to print this article. Re-Printed From SLCentral Click here to print this article. Re-Printed From SLCentral RAID: An In-Depth Guide To RAID Technology Author: Tom Solinap Date Posted: January 24th, 2001 URL: http://www.slcentral.com/articles/01/1/raid

More information

RAID. RAID 0 No redundancy ( AID?) Just stripe data over multiple disks But it does improve performance. Chapter 6 Storage and Other I/O Topics 29

RAID. RAID 0 No redundancy ( AID?) Just stripe data over multiple disks But it does improve performance. Chapter 6 Storage and Other I/O Topics 29 RAID Redundant Array of Inexpensive (Independent) Disks Use multiple smaller disks (c.f. one large disk) Parallelism improves performance Plus extra disk(s) for redundant data storage Provides fault tolerant

More information

RAID Technology Overview

RAID Technology Overview RAID Technology Overview HP Smart Array RAID Controllers HP Part Number: J6369-90050 Published: September 2007 Edition: 1 Copyright 2007 Hewlett-Packard Development Company L.P. Legal Notices Copyright

More information

RAID 6 with HP Advanced Data Guarding technology:

RAID 6 with HP Advanced Data Guarding technology: RAID 6 with HP Advanced Data Guarding technology: a cost-effective, fault-tolerant solution technology brief Abstract... 2 Introduction... 2 Functions and limitations of RAID schemes... 3 Fault tolerance

More information

Disks and RAID. Profs. Bracy and Van Renesse. based on slides by Prof. Sirer

Disks and RAID. Profs. Bracy and Van Renesse. based on slides by Prof. Sirer Disks and RAID Profs. Bracy and Van Renesse based on slides by Prof. Sirer 50 Years Old! 13th September 1956 The IBM RAMAC 350 Stored less than 5 MByte Reading from a Disk Must specify: cylinder # (distance

More information

Chapter 12: Mass-Storage Systems

Chapter 12: Mass-Storage Systems Chapter 12: Mass-Storage Systems Chapter 12: Mass-Storage Systems Overview of Mass Storage Structure Disk Structure Disk Attachment Disk Scheduling Disk Management Swap-Space Management RAID Structure

More information

Models Smart Array 6402A/128 Controller 3X-KZPEC-BF Smart Array 6404A/256 two 2 channel Controllers

Models Smart Array 6402A/128 Controller 3X-KZPEC-BF Smart Array 6404A/256 two 2 channel Controllers Overview The SA6400A is a high-performance Ultra320, PCI-X array controller. It provides maximum performance, flexibility, and reliable data protection for HP OpenVMS AlphaServers through its unique modular

More information

Best practices for Implementing Lotus Domino in a Storage Area Network (SAN) Environment

Best practices for Implementing Lotus Domino in a Storage Area Network (SAN) Environment Best practices for Implementing Lotus Domino in a Storage Area Network (SAN) Environment With the implementation of storage area networks (SAN) becoming more of a standard configuration, this paper describes

More information

BrightStor ARCserve Backup for Windows

BrightStor ARCserve Backup for Windows BrightStor ARCserve Backup for Windows Tape RAID Option Guide r11.5 D01183-1E This documentation and related computer software program (hereinafter referred to as the "Documentation") is for the end user's

More information

IncidentMonitor Server Specification Datasheet

IncidentMonitor Server Specification Datasheet IncidentMonitor Server Specification Datasheet Prepared by Monitor 24-7 Inc October 1, 2015 Contact details: sales@monitor24-7.com North America: +1 416 410.2716 / +1 866 364.2757 Europe: +31 088 008.4600

More information

Non-Redundant (RAID Level 0)

Non-Redundant (RAID Level 0) There are many types of RAID and some of the important ones are introduced below: Non-Redundant (RAID Level 0) A non-redundant disk array, or RAID level 0, has the lowest cost of any RAID organization

More information

GENERAL INFORMATION COPYRIGHT... 3 NOTICES... 3 XD5 PRECAUTIONS... 3 INTRODUCTION... 4 FEATURES... 4 SYSTEM REQUIREMENT... 4

GENERAL INFORMATION COPYRIGHT... 3 NOTICES... 3 XD5 PRECAUTIONS... 3 INTRODUCTION... 4 FEATURES... 4 SYSTEM REQUIREMENT... 4 1 Table of Contents GENERAL INFORMATION COPYRIGHT... 3 NOTICES... 3 XD5 PRECAUTIONS... 3 INTRODUCTION... 4 FEATURES... 4 SYSTEM REQUIREMENT... 4 XD5 FAMILULARIZATION... 5 PACKAGE CONTENTS... 5 HARDWARE

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

Getting Started With RAID

Getting Started With RAID Dell Systems Getting Started With RAID www.dell.com support.dell.com Notes, Notices, and Cautions NOTE: A NOTE indicates important information that helps you make better use of your computer. NOTICE: A

More information

Deploying and Optimizing SQL Server for Virtual Machines

Deploying and Optimizing SQL Server for Virtual Machines Deploying and Optimizing SQL Server for Virtual Machines Deploying and Optimizing SQL Server for Virtual Machines Much has been written over the years regarding best practices for deploying Microsoft SQL

More information

Best Practices RAID Implementations for Snap Servers and JBOD Expansion

Best Practices RAID Implementations for Snap Servers and JBOD Expansion STORAGE SOLUTIONS WHITE PAPER Best Practices RAID Implementations for Snap Servers and JBOD Expansion Contents Introduction...1 Planning for the End Result...1 Availability Considerations...1 Drive Reliability...2

More information

Topics. Hamming Algorithm

Topics. Hamming Algorithm Topics Hamming algorithm Magnetic disks RAID Hamming Algorithm In a Hamming code r parity bits added to m-bit word Forms codeword with length (m + r) bits Bit numbering Starts at with leftmost (high-order)

More information

3PAR Fast RAID: High Performance Without Compromise

3PAR Fast RAID: High Performance Without Compromise 3PAR Fast RAID: High Performance Without Compromise Karl L. Swartz Document Abstract: 3PAR Fast RAID allows the 3PAR InServ Storage Server to deliver higher performance with less hardware, reducing storage

More information

Database Management Systems

Database Management Systems 4411 Database Management Systems Acknowledgements and copyrights: these slides are a result of combination of notes and slides with contributions from: Michael Kiffer, Arthur Bernstein, Philip Lewis, Anestis

More information

RAID: Redundant Arrays of Inexpensive Disks this discussion is based on the paper: on Management of Data (Chicago, IL), pp.109--116, 1988.

RAID: Redundant Arrays of Inexpensive Disks this discussion is based on the paper: on Management of Data (Chicago, IL), pp.109--116, 1988. : Redundant Arrays of Inexpensive Disks this discussion is based on the paper:» A Case for Redundant Arrays of Inexpensive Disks (),» David A Patterson, Garth Gibson, and Randy H Katz,» In Proceedings

More information

HP Smart Array 5i Plus Controller and Battery Backed Write Cache (BBWC) Enabler

HP Smart Array 5i Plus Controller and Battery Backed Write Cache (BBWC) Enabler Overview HP Smart Array 5i Plus Controller and Battery Backed Write Cache (BBWC) Enabler Models Smart Array 5i Plus Controller and BBWC Enabler bundled Option Kit (for ProLiant DL380 G2, ProLiant DL380

More information

Chapter 2: Computer-System Structures. Computer System Operation Storage Structure Storage Hierarchy Hardware Protection General System Architecture

Chapter 2: Computer-System Structures. Computer System Operation Storage Structure Storage Hierarchy Hardware Protection General System Architecture Chapter 2: Computer-System Structures Computer System Operation Storage Structure Storage Hierarchy Hardware Protection General System Architecture Operating System Concepts 2.1 Computer-System Architecture

More information

Data Storage and Backup. Sanjay Goel School of Business University at Albany, SUNY

Data Storage and Backup. Sanjay Goel School of Business University at Albany, SUNY Data Storage and Backup Sanjay Goel School of Business University at Albany, SUNY Data Backup 2 Data Backup Why? Files can be accidentally deleted Mission-critical data can become corrupt. Natural disasters

More information

Chapter 13 Disk Storage, Basic File Structures, and Hashing.

Chapter 13 Disk Storage, Basic File Structures, and Hashing. Chapter 13 Disk Storage, Basic File Structures, and Hashing. Copyright 2004 Pearson Education, Inc. Chapter Outline Disk Storage Devices Files of Records Operations on Files Unordered Files Ordered Files

More information

ISTANBUL AYDIN UNIVERSITY

ISTANBUL AYDIN UNIVERSITY ISTANBUL AYDIN UNIVERSITY 2013-2014 Academic Year Fall Semester Department of Software Engineering SEN361 COMPUTER ORGANIZATION HOMEWORK REPORT STUDENT S NAME : GÖKHAN TAYMAZ STUDENT S NUMBER : B1105.090068

More information