The IntelliMagic White Paper: Green Storage: Reduce Power not Performance. December 2010
Summary: This white paper provides techniques to configure the disk drives in your storage system such that they use the least amount of power while still providing good performance. Minimizing power usage is not so much about finding disk drives with a lower power usage, but rather about selecting a disk drive configuration that closely matches your workload needs.
This white paper was prepared by:

IntelliMagic BV
Lokhorststraat, Leiden, The Netherlands

IntelliMagic Inc
558 Silicon Dr. Ste 101, Southlake TX, USA
E-mail: [email protected]

Disclaimer

This white paper discusses disk subsystem performance and capabilities in general terms, to the best of our knowledge. Any decisions based on this paper and its recommendations remain the responsibility of the reader. IntelliMagic products analyze measurement data and provide estimates for workload parameters based on this information. However, IntelliMagic does not guarantee the correctness of these numbers, and therefore any sizing based on the results remains the responsibility of the user.

Support

Please direct requests for information to [email protected]

Trademarks

All trademarks and registered trademarks are the property of their respective owners.
Table of Contents

- Preface
- Drive Type Power Requirements
- Capacity or Throughput?
  - Cost of Capacity Requirements
  - Cost of Throughput Requirements
- Watts per GB or Watts per IO/sec
- Flash Drives
- Access Density
  - Front-end Access Density
  - Back-end Access Density
  - Relation between Front-end and Back-end Access Density
- Disk Response Time by Access Density
- Picking a RAID Scheme
  - Read Miss Throughputs
  - Random and Sequential Writes
  - Mixed Workloads
  - Over-configuring
  - Example
- Migration Options
- Conclusion
Preface

This white paper provides techniques to configure the disk drives in your storage system such that they use the least amount of power while still providing good performance. Minimizing power usage is not so much about finding disk drives with a lower power usage, but rather about selecting a disk drive configuration that closely matches your workload needs.

The key objective in selecting disk drives is to find a configuration that meets your performance needs at the lowest drive count, i.e. with the highest drive capacity possible without the drives getting too busy. It is important to realize that this is not only a matter of selecting a drive type: the selection of the most suitable RAID type is equally important, as we will discuss in detail.

Leiden, February 2009
Dr. Gilbert Houtekamer
Els Das

Second Edition, July 2010

In July of 2010 IntelliMagic renamed the end-user version of Disk Magic to IntelliMagic Direction to distinguish it from the IBM version. Additionally, RMF Magic was renamed to IntelliMagic Vision to better reflect its expanded scope, which now includes support for open systems storage environments. These were the main reasons for this second edition.

Southlake, TX USA, July 2010
Brett Allison
Drive Type Power Requirements

The topic of this white paper is how to look at power usage and cost of operation for disk drives within high-end disk subsystems or disk arrays. Why would we only look at the drives themselves, and not at the other disk subsystem components that use power? Within a disk subsystem, the drives take up a large portion of the total power consumption: for a full box, typically two-thirds of the power is used by the drives alone. Moreover, the drives are a place where you can make choices that influence the power consumption a great deal; for many of the other components you have no configuration options.

We will start by showing the base power usage numbers for the common drive types that large companies use in their disk subsystems. Figure 1 shows the approximate power requirements per drive for the three product families most relevant to high-end disk subsystems: SATA 7.2K RPM, Fibre 10K RPM and Fibre 15K RPM. Note that the power usage per drive for the least energy-efficient drive is only 1.7 times the power consumption of the most energy-efficient drive. Thus the number of drives is the first rough approximation if you want to assess the total power consumption of a configuration.

Figure 1: Power Usage per Drive Type (Watts per drive as a function of drive capacity in GB, for SATA 7.2K RPM, Fibre 10K RPM and Fibre 15K RPM drives)

As you can imagine, a higher RPM means higher power requirements, because the disk spins faster. Larger drives also use more power than smaller drives of the same technology, because the platters and the read/write head assembly constitute more mass that must be moved.
Note that the power usage of a drive also depends on its activity: more I/Os per second mean more head movements and therefore more power usage. This relationship is almost linear and constant across all disk types, at about 0.03 Watt per I/O per second. Therefore the I/O-related power consumption has little influence on the choices that you can make in the disk configuration, and we will not discuss it further in this white paper.

If you would look solely at the energy consumption per disk drive ("arm"), the most economical drive to run would be the 73 GB 10,000 RPM Fibre drive in the chart above. However, to assess which drive is the most economical to run in your storage systems, the power usage per disk arm is not the number to use. In the next sections we will show different ways to look at it.

Capacity or Throughput?

When reviewing the energy usage for disk drives, it is important to realize that there are two very different but equally important requirements that determine the number of disk drives. One requirement is the number of net GBs used by the applications; the other is the throughput in I/Os per second required by the applications. If you configure your drives based solely on the net capacity needed, you can express the power consumption of the different options in Watts per GB. However, many configurations with a higher activity level need to take minimum throughput requirements into account when selecting a disk configuration. If you need to configure the drives based on the required throughput rather than on GBs alone, you can express the energy cost of your options in Watts per I/O per second.

Cost of Capacity Requirements

As we stated in the first section, the drive count (number of arms) is a good first-order approximation when you compare the power usage of different potential configurations. The energy consumption to provide the application with a certain number of net GBs obviously becomes lower if you use higher-capacity drives, because you need fewer drives. Additionally, the energy cost of a slower-RPM drive is lower than for an equal-capacity higher-RPM drive. So if you look at the energy cost to configure a certain net capacity, you can be sure that the larger the disks and the slower the RPM, the better. In fact, the lowest Watts per GB number is achieved with non-moving magnetic storage: disks that are switched off and tape cartridges. For enterprise disk subsystem usage, SATA drives provide the highest capacity and lowest RPM and therefore the lowest energy cost for the configured capacity. However, sizing your disk configuration on capacity requirements alone may not give you the performance and throughput that you need.

Cost of Throughput Requirements

Because of the slower mechanical components, the slower rotation speed, and the simpler ATA controller, the peak I/O capability of one SATA disk is much lower than that of one Fibre disk. This means that you may need to buy more SATA drives than the net capacity requires if your applications have high throughput requirements. If you need to configure many extra arms to satisfy these throughput requirements, SATA drives do not provide a very low-cost alternative. Actually, for a given throughput you need fewer Fibre drives than SATA drives, so for most database or otherwise active workloads, smaller and faster Fibre drives will provide a more energy-efficient solution than SATA drives.
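To see how throughput requirements can flip the comparison, the sketch below sizes a SATA option and a Fibre option for the same workload: the drive count is the larger of the capacity-driven and throughput-driven counts, and the power is simply count times per-drive wattage. All drive specs here are illustrative assumptions, not the paper's measured figures.

```python
import math

# Illustrative drive specs: (capacity GB, Watts per drive, max IO/s per drive).
DRIVES = {
    "SATA 1000GB 7.2K": (1000, 13.0, 40.0),   # low duty-cycle ceiling
    "Fibre 146GB 15K": (146, 14.0, 90.0),
}

def size_and_power(drive, net_gb, backend_iops):
    cap_gb, watts, max_iops = DRIVES[drive]
    count = max(math.ceil(net_gb / cap_gb),          # capacity-driven count
                math.ceil(backend_iops / max_iops))  # throughput-driven count
    return count, count * watts

# 20 TB net capacity, 9,000 back-end I/Os per second.
for d in DRIVES:
    n, w = size_and_power(d, 20_000, 9000)
    print(f"{d}: {n} drives, {w:.0f} W")
```

With these assumed specs the Fibre option needs 137 drives (1918 W) against 225 SATA drives (2925 W); with a low throughput requirement the same sizing favors SATA, since capacity then dominates the drive count.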
So for high-throughput workloads, the GBs required are not the limiting factor in choosing the disk configuration, and the energy cost of the different configurations should be expressed in Watts per I/O per second instead.

Watts per GB or Watts per IO/sec

Table 1 below shows the metrics discussed, for both SATA and Fibre Channel drives. In this table we set the maximum I/O rate per arm to the I/O rate corresponding to 50% HDD busy. As you can see, you can achieve the lowest energy consumption to satisfy capacity requirements with SATA (the green area in the top lines of the table), and the lowest energy consumption to satisfy throughput requirements with Fibre (the remaining green lines). A 73 GB 15K RPM drive is the fastest drive available, and therefore provides the highest number of I/Os per second per disk arm, which translates to the lowest energy consumption for configurations with very high throughput requirements.

Table 1: Energy per GB compared to Energy per IO. For each drive (SATA Ultrastar A7K1000, Fibre 15K300 4 Gb/s, Fibre 10K300 4 Gb/s) the table lists Watt per arm, the capacity-limited Watt/GB with a ratio compared to the best option, the I/O rate per arm (*), and the throughput-limited Watt per IO/s with a ratio compared to the best option. (The numeric cell values did not survive this rendering.)

(*) Random disk I/O rate corresponding to 50% busy
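The two metrics in Table 1 are straightforward to compute for any candidate drive. The sketch below derives Watt/GB and Watt per I/O per second from per-drive wattage, capacity, and the random I/O rate at 50% busy; all numeric drive specs are illustrative assumptions, since the table's actual values are not preserved.

```python
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    capacity_gb: float
    watts: float            # power per drive (arm)
    iops_at_50_busy: float  # random I/O rate corresponding to 50% HDD busy

    @property
    def watt_per_gb(self) -> float:    # capacity-limited energy cost
        return self.watts / self.capacity_gb

    @property
    def watt_per_iops(self) -> float:  # throughput-limited energy cost
        return self.watts / self.iops_at_50_busy

# Illustrative specs only; substitute the measured values for your drives.
drives = [
    Drive("SATA 1000GB 7.2K", 1000, 13.0, 40),
    Drive("Fibre 300GB 15K", 300, 18.0, 90),
    Drive("Fibre 73GB 15K", 73, 15.0, 90),
]

for d in drives:
    print(f"{d.name}: {d.watt_per_gb:.3f} W/GB, {d.watt_per_iops:.3f} W per IO/s")
```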
An additional type of over-provisioning may be needed for SATA and FATA drives. Because of their lower-cost components, they are not recommended for the high sustained utilizations at which you can run Fibre drives. The vendors use terms like "lower duty cycle" or "reduced duty cycle" to describe the storage environments in which SATA and FATA drives should be used. This translates to 20% - 30% HDD busy, instead of the 50% - 80% HDD busy that you could use for Fibre drives. This means that you would need to configure even more SATA/FATA drives to get the necessary throughput without loading the HDDs beyond the vendor's recommendation.

Flash Drives

It should be noted that when even higher throughputs are required, flash drives become a viable alternative. Not many workloads, however, need that level of performance and throughput. Still, by moving some of your most active workload to flash drives you may be able to put the remaining workload on high-capacity Fibre or FATA drives. With the rapidly dropping cost of flash drives, it is only a matter of time before they become commonly used in large storage environments as well as in your laptop.

Access Density

Because it would be a complex, labor-intensive effort to review the workload for each individual logical or physical drive, a common concept used to describe the intensity of the I/O workload is access density, defined as the average number of I/O operations per second per GB of data stored. When configuring disk subsystems, it is very important to distinguish between front-end and back-end access density.

Front-end Access Density

The front-end access density is determined by the number of host I/O operations to a disk subsystem or logical volume. Example access density at the front-end: 20 x 54 GB logical volumes (1080 GB) handling 500 I/Os per second yield 0.46 I/Os per second per net GB. This is the access density that we normally talk about, and it is relevant to the performance of the disk subsystem as a whole. However, when you want to look at the performance and throughput of the disk drives specifically, you need to use the back-end access density as defined below.

Back-end Access Density

Back-end access density is the number of operations to the hard disk drives. This is the number that is crucial to HDD utilization, throughput and performance. Example access density at the back-end: an array group of 8 x 146 GB drives (1168 GB) handling 250 HDD accesses per second yields 0.21 accesses per second per physical GB of hard disk.
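Both example calculations above follow the same pattern: divide the I/O (or HDD access) rate by the number of GBs it is spread over. A minimal sketch:

```python
def access_density(io_per_sec: float, gigabytes: float) -> float:
    """Average I/Os per second per GB of stored data."""
    return io_per_sec / gigabytes

# Front-end example from the text: 20 x 54 GB logical volumes, 500 I/Os per second.
print(round(access_density(500, 20 * 54), 2))   # 0.46 I/Os per second per net GB

# Back-end example: 8 x 146 GB array group, 250 HDD accesses per second.
print(round(access_density(250, 8 * 146), 2))   # 0.21 accesses per second per GB
```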
What we need to know to compute the back-end access density for a particular workload is how many of the front-end I/Os result in disk accesses. Read-hit I/Os do not result in a back-end access at all. Random read misses result in synchronous disk accesses, and sequential reads result in asynchronous disk accesses because of the prestaging. Destaged writes result in asynchronous disk accesses, typically multiple per write operation, depending on the RAID scheme used. The back-end I/O rate is not available from measurement tools directly; it must be computed or estimated from the front-end I/O rates and cache statistics, as well as from the RAID scheme used. Applications like IntelliMagic Direction and IntelliMagic Vision do this for you.
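As a rough illustration of that estimate (not the actual IntelliMagic model), the sketch below converts front-end rates and cache statistics into a back-end HDD access rate. The RAID write multiplier is a parameter, and the hit and destage fractions are assumptions you would take from your own cache statistics.

```python
def backend_io_rate(read_rate, write_rate, read_hit_pct,
                    destage_pct, raid_write_ops):
    """Rough back-end HDD access rate from front-end rates and cache stats.

    read_hit_pct:   fraction of reads satisfied from cache (no disk access)
    destage_pct:    fraction of writes that actually reach the disks
    raid_write_ops: disk operations per destaged write (RAID dependent)
    """
    read_misses = read_rate * (1.0 - read_hit_pct)   # synchronous disk reads
    destages = write_rate * destage_pct              # asynchronous destages
    return read_misses + destages * raid_write_ops

# Illustrative numbers: 3000 reads/s at 80% hit, 1000 writes/s, 60% destaged,
# on RAID-5 where a random destage costs 4 disk operations.
print(backend_io_rate(3000, 1000, 0.80, 0.60, 4))   # 600 + 2400 = 3000 HDD ops/s
```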
Relation between Front-end and Back-end Access Density

To illustrate the potential differences between front-end and back-end I/O rates, let us look at the charts in Figure 2. These charts show the access densities for five disk subsystems over the course of a day. In the left-hand chart you can clearly see a break-up of the five disk subsystems into two categories: two are relatively inactive on the front-end; the other three are much more active. The right-hand chart shows the back-end HDD reads and writes. Note that this second chart shows a very different picture: the subsystems are more or less equally loaded in terms of back-end activity.

Figure 2: Front-end and Back-end Access Density for the same workload and hardware (left: front-end access densities for five disk subsystems, DMX #1 through DMX #5, over 24 hours; right: back-end access densities for the same subsystems; data and charts from IntelliMagic Vision)

It is this back-end access density where the selection of the drive type really matters, both in terms of speed and in terms of size.

Disk Response Time by Access Density

When you combine the access densities with a response time curve, the drive capacity makes a large difference in terms of performance. For example, when the back-end I/O density is 1 I/O per second per GB, a 73 GB drive has to handle 73 I/Os per second, where a 144 GB drive for the same workload must handle 144 I/Os per second. Since the HDD response time increases when there are more I/Os per disk arm, the response time for such a workload on a 144 GB drive will be much higher.

Figure 3: Physical Disk Response Time as a function of Back-end Access Density (HDD response time in ms versus back-end density in HDD I/Os per GB, for 73 GB, 146 GB and 300 GB Fibre drives at 10K and 15K RPM)

Looking at the chart in Figure 3, you can see that the 15K RPM drives all start at a low response time for low access densities, only around 6 ms. However, the response time for the 300 GB 15K RPM drive quickly exceeds the response time for the 73 GB 10K RPM drive when the access density increases.

You can use this chart to create recommendations for the maximum I/O density supported by each drive, given a certain minimum performance requirement of, say, 15 ms per back-end access. As Table 2 shows, the maximum recommended access densities are vastly different for the different types of Fibre drives.

Table 2: Access Density Recommendations (maximum recommended back-end access density per drive type, for 73, 146 and 300 GB drives at 15K and 10K RPM; most values were lost in this rendering, with 0.2 remaining for the 300 GB, 10K RPM drive)

Note that using a universal 15 ms cut-off point may not be fair to the 10K drives because of their higher base service times; on the other hand, application users do not care what the drives are capable of in terms of base service time, they just care about the response times that they are experiencing. The back-end access densities for the various disk types, and the resulting HDD busy and front-end response times, can easily be modeled using a software tool like IntelliMagic Direction.
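The shape of curves like those in Figure 3 can be approximated with a simple open queueing model. The sketch below uses an M/M/1-style response time formula to find the highest access density a drive can sustain under a 15 ms ceiling; this is an illustrative approximation with assumed base service times, not the curve that IntelliMagic Direction computes.

```python
def response_time_ms(service_ms: float, iops: float) -> float:
    """M/M/1-style approximation: R = S / (1 - utilization)."""
    util = iops * service_ms / 1000.0
    if util >= 1.0:
        return float("inf")
    return service_ms / (1.0 - util)

def max_density(service_ms: float, capacity_gb: float,
                ceiling_ms: float = 15.0) -> float:
    """Largest back-end access density (I/O per sec per GB) under the ceiling."""
    # Solve S / (1 - iops*S/1000) <= ceiling for iops, then divide by capacity.
    max_iops = (1.0 - service_ms / ceiling_ms) * 1000.0 / service_ms
    return max_iops / capacity_gb

# Assumed base service times: ~6 ms for 15K RPM, ~8 ms for 10K RPM drives.
for name, svc, gb in [("73 GB 15K", 6.0, 73), ("300 GB 15K", 6.0, 300),
                      ("73 GB 10K", 8.0, 73), ("300 GB 10K", 8.0, 300)]:
    print(f"{name}: max density ~{max_density(svc, gb):.2f} IO/s per GB")

print(response_time_ms(6.0, 100.0))   # 15.0 ms at the computed max rate
```

Under these assumptions the 300 GB 10K drive comes out near 0.2 I/O per second per GB, consistent with the surviving value in Table 2, while the 73 GB 15K drive supports roughly seven times that density.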
Picking a RAID Scheme

When comparing RAID-10 to RAID-5, it is immediately clear that a RAID-10 scheme for the same net capacity uses more power, as there are as many mirror disks as data disks, while a RAID-5 implementation typically uses one parity disk for every 7 data disks. So a RAID-5 scheme uses just over half the power of the RAID-10 scheme when computed on a per-GB basis.

Read Miss Throughputs

As we have seen, however, power per GB is not always the most important metric. Performance and throughput limitations are factors that need to be evaluated too. For a lightly loaded system, RAID-5 and RAID-10 will give the same response time, as the only host operation that requires a synchronous disk access is a read miss, and both RAID-5 and RAID-10 systems need to read the record. In a RAID-10 system you need to configure more drives to get the same net capacity, which automatically gives you more arms to spread the work over. For that reason, a RAID-10 scheme will be able to process more read-miss I/Os per second per net GB than a RAID-5 scheme of the same net capacity. For high access densities, these extra arms may be just what you need to get the required throughput. However, this higher read-miss throughput could also be obtained by simply over-configuring the number of RAID-5 groups such that you have as many drives as a RAID-10 setup would have. So how then do you decide whether to use RAID-10 or RAID-5? For that we need to look at the writes, not just at the reads. RAID-10 and RAID-5 behave very differently for write operations.

Random and Sequential Writes

Random write operations, as in a database workload, result in two disk writes for RAID-10 (primary and mirror copy), but in four disk operations for RAID-5. This higher number of RAID-5 operations is needed because, to compute the new parity, the old data and old parity need to be read first. After computing the parity, the data and parity are written, hence the four operations. Therefore, RAID-10 supports a higher throughput per RAID rank for random write operations: double the RAID-5 random write throughput.

For sequential workloads, the RAID-5 scheme is more efficient, as all data and parity information in a RAID group is written in a single logical operation, without having to read the existing data and parity first. So the overhead of the RAID-5 scheme for sequential writes is one parity block written for every 7 blocks of data, an overhead of 1/7th. RAID-10 still requires two writes for every application write, regardless of whether it is random or sequential. Therefore, RAID-5 supports a higher throughput for sequential write operations per RAID group: almost double the RAID-10 sequential write throughput.
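These write penalties are easy to capture in a small helper that the mixed-workload reasoning below builds on. A minimal sketch, under the paper's 7 data + 1 parity RAID-5 assumption:

```python
# Back-end disk operations per destaged application write.
# RAID-5 figures assume the paper's 7 data + 1 parity layout.
def write_ops(raid: str, sequential: bool) -> float:
    if raid == "RAID-10":
        return 2.0                  # primary + mirror, random or sequential
    if raid == "RAID-5":
        if sequential:
            return 1.0 + 1.0 / 7.0  # full-stripe write: 1/7th parity overhead
        return 4.0                  # read old data + old parity, write both
    raise ValueError(raid)

print(write_ops("RAID-10", sequential=False))   # 2.0
print(write_ops("RAID-5", sequential=False))    # 4.0
print(write_ops("RAID-5", sequential=True))     # ~1.14
```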
Mixed Workloads

Real workloads are a varying mix of reads and writes, cache hits and misses, and random and sequential I/O, so the number of back-end I/O operations to the disks can go either way: more for a RAID-5 implementation, more for a RAID-10 implementation, or almost the same for both. The figures below show the number of disk operations for a RAID-5 versus a RAID-10 implementation for a measured workload on two existing disk subsystems over the course of a day.

For the disk subsystem in Figure 4, the red line for RAID-10 has higher peaks than the blue line that represents a RAID-5 implementation. This shows that for this disk subsystem, a RAID-10 implementation results in a higher back-end rate. Of course, a RAID-10 implementation also has a higher minimum number of disks to get the required net capacity. So for this workload, a RAID-5 implementation will be more economical both from the point of view of the capacity requirements and from the point of view of the throughput requirements.

Figure 4: HDD accesses for disk subsystem 1 on RAID-10 vs. RAID-5 (back-end HDD rate over 24 hours; the RAID-10 line peaks higher than the RAID-5 line)

Figure 5 shows much higher peaks for the RAID-5 HDD rate. The random write content for this workload is so high that, at the highest peak, the HDD rate on RAID-5 is almost twice as high as for a RAID-10 scheme. This means that RAID-10 may be the more economical choice if the access density is high enough that more disks are needed than for the capacity alone.

Figure 5: HDD accesses for disk subsystem 2 on RAID-10 vs. RAID-5 (back-end HDD rate over 24 hours; here the RAID-5 line peaks almost twice as high as the RAID-10 line)
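A quick way to reproduce this kind of comparison from interval measurements is to evaluate the back-end rate under both schemes for each interval and compare the peaks. A sketch with made-up interval data (real inputs would come from your measurement tool):

```python
# Hypothetical hourly measurements: (read_misses/s, random_writes/s, seq_writes/s)
intervals = [(800, 400, 100), (1200, 900, 150), (600, 200, 700)]

def hdd_rate(read_misses, rnd_writes, seq_writes, rnd_ops, seq_ops):
    return read_misses + rnd_writes * rnd_ops + seq_writes * seq_ops

raid10 = [hdd_rate(r, w, s, 2.0, 2.0) for r, w, s in intervals]
raid5 = [hdd_rate(r, w, s, 4.0, 8.0 / 7.0) for r, w, s in intervals]

print("RAID-10 peak:", max(raid10))   # 3300 HDD ops/s
print("RAID-5 peak:", max(raid5))     # ~4971 HDD ops/s: random writes dominate
```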
Over-configuring

If we want to know which RAID type is more cost-efficient for our workload, we need to take into account how many disks are minimally needed to satisfy the capacity requirements, as well as how many drives are needed to support the peak back-end throughputs for RAID-5 and RAID-10. For some workloads with a large random write fraction and a large throughput, you would need to over-configure the number of RAID-5 groups so much that you actually need more drives than for a RAID-10 implementation. It is a very big effort to compute these types of scenarios by hand; a tool like IntelliMagic Direction will help you with this.

Example

Let us define a workload profile as all the ratios and percentages that are relevant to computing the back-end disk accesses: the read/write ratio, the read hit percentage, the destage percentage (or write efficiency), and the random/sequential write ratio. For each workload profile and for each RAID and disk type, you can then draw a picture that shows the number of required RAID groups as a function of the number of front-end I/Os or, equivalently, of the front-end access density.

For very low access densities, the required number of RAID groups is simply the number of RAID groups needed to configure the net capacity of the application data. In the picture that shows the needed number of RAID groups as a function of access density, this number shows up as a horizontal line, one that is higher for RAID-10 than for RAID-5. However, when the access density gets higher, more RAID groups need to be configured to make sure that the HDD busy does not go beyond what the drives support. At this point, where the throughput requirements take over from the capacity requirements, the horizontal line changes into a growing step function. Depending on the exact workload profile, the two curves for RAID-5 and RAID-10 cross at a different point or do not cross at all, showing that for some workload profiles RAID-5 uses fewer drives than RAID-10 regardless of the access density, whereas for other workloads high access densities make RAID-10 more economical.
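A minimal sketch of that step function: the required number of groups is the larger of the capacity-driven count and the throughput-driven count. The drive specs, group geometry, per-drive I/O ceiling, and back-end multipliers below are all illustrative assumptions.

```python
import math

def required_groups(net_gb, frontend_iops, raid, *,
                    drive_gb=300, group_drives=8, max_iops_per_drive=90,
                    backend_ops_per_frontend_io=1.0):
    """Required RAID groups: max of capacity-driven and throughput-driven counts."""
    data_drives = group_drives / 2 if raid == "RAID-10" else group_drives - 1  # 7+1
    cap_groups = math.ceil(net_gb / (data_drives * drive_gb))
    backend_iops = frontend_iops * backend_ops_per_frontend_io
    thr_groups = math.ceil(backend_iops / (group_drives * max_iops_per_drive))
    return max(cap_groups, thr_groups)

# Sweep front-end access density for 50 TB net capacity. Assumed back-end
# factors: 1.5 back-end ops per front-end I/O on RAID-5, 1.0 on RAID-10.
for density in (0.1, 0.5, 1.0, 2.0):
    iops = 50_000 * density
    r5 = required_groups(50_000, iops, "RAID-5", backend_ops_per_frontend_io=1.5)
    r10 = required_groups(50_000, iops, "RAID-10", backend_ops_per_frontend_io=1.0)
    print(f"density {density}: RAID-5 {r5} groups, RAID-10 {r10} groups")
```

With these assumptions RAID-5 needs far fewer groups at low density (capacity dominates), while at high density the curves cross and RAID-10 becomes the cheaper option, which is exactly the crossover behavior described above.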
Figure 6 shows such a picture for RAID-5 and RAID-10, for a synthetic 50% read / 50% write workload of which 50% of the writes are sequential. The drives used for this picture are 300 GB 15K RPM drives. You can see that for this workload profile, the higher the access density, the more economical a RAID-10 configuration becomes.

Figure 6: Required number of RAID groups (of 8 disks) for growing access density (required RAID-5 and RAID-10 RAID groups as a function of access density)

Migration Options

In most cases you are not designing a disk configuration from scratch: there is an existing configuration that will be replaced. A very common case is a migration from 10K RPM drives to twice-as-big 15K RPM drives with the same RAID scheme. This will result in better response times at a low level of activity, but also in a somewhat lower maximum throughput when the drives are the limiting factor. This potential throughput reduction may be mitigated to some extent by a faster, next-generation disk subsystem. The power savings for a migration like this are less than 50% (which is the drive count reduction), because 15K RPM drives use more power than 10K RPM drives. Migrating from 73 GB 10K RPM drives to 146 GB 15K RPM drives provides a power saving of around 30%. When migrating from 146 GB 10K RPM drives to 300 GB 15K RPM drives, the savings are about 25%.
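The savings percentages follow directly from the drive-count reduction and the per-drive wattages. A worked sketch, with wattages that are illustrative assumptions rather than the paper's exact figures:

```python
def migration_saving(old_watts, new_watts, capacity_ratio=2.0):
    """Power saving when each new drive replaces `capacity_ratio` old drives."""
    old_power = old_watts * capacity_ratio   # e.g. two 73 GB 10K drives...
    new_power = new_watts                    # ...replaced by one 146 GB 15K drive
    return 1.0 - new_power / old_power

# Assumed wattages: 10 W per 73 GB 10K drive, 14 W per 146 GB 15K drive.
print(f"{migration_saving(10.0, 14.0):.0%}")   # 30% saving: 14 W replaces 20 W
```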
If you convert from RAID-10 to RAID-5, there will also be power savings because of the reduced drive count, but the downside is that this configuration can result in a lower maximum throughput. As in the previous example, this effect can be mitigated with faster drives. Migrating from RAID-10 73 GB 10K RPM drives to RAID-5 73 GB 15K RPM drives will save about 30% in HDD power requirements. Migrating to 146 GB 15K RPM drives will result in approximately 60% power savings, but the maximum throughput will also be lower, potentially making this migration not viable for disk-intensive workloads. Again, a next-generation disk subsystem can offset this lower throughput to some extent.

Conclusion

When you look at reducing the energy consumption of the disk drive configuration in your disk subsystem, it may seem at first that the largest reduction potential comes from decreasing the number of drives, by increasing the drive capacity or by choosing RAID-5 over RAID-10. However, for high I/O rates, you may need extra drives to get the throughput needed, and depending on the read-to-write ratio and the random-to-sequential ratio, RAID-10 may become more effective. So for some workload profiles smaller drives are the more economical choice, and for some workloads RAID-10 is more economical than RAID-5, whereas for other workloads simply over-configuring the number of RAID-5 groups may be sufficient. IntelliMagic Direction can help you decide which scheme is the best choice for your workloads.