White Paper

XtremIO DATA PROTECTION (XDP)
Flash-Specific Data Protection, Provided by XtremIO

Abstract
This white paper introduces the XtremIO Data Protection scheme (XDP) and discusses its benefits and advantages over RAID, with special consideration given to the unique requirements of enterprise flash storage arrays.
Copyright EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is". EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC, the EMC logo, and the RSA logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. VMware is a registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

Published in the USA. Part Number H06-0 (Rev. 0)
Table of Contents

Abstract
Executive Summary
A Brief History of RAID
RAID 1 (Mirroring)
RAID 0 (Striped)
RAID 10 (Mirroring + Striped)
RAID 5 (Disk Striping with Parity)
RAID 6 (Disk Striping with Double Parity)
Comparing RAID 1, 10, 5 and 6
An Introduction to XtremIO's Advanced Data Protection Scheme
Efficient Stripe Updates with XDP
Efficient Rebuilds
XDP Administrative Benefits
Conclusion
How to Learn More
Executive Summary

Every enterprise storage system utilizes multiple forms of redundancy in order to protect data against loss when an inevitable array component failure occurs. RAID (Redundant Array of Independent Disks) was invented to allow data to be spread across multiple drives for performance, but also to protect the data should a disk drive (and in later iterations, multiple disk drives) fail. RAID implementations have always been designed with the assumption of spinning disk as the media of choice, imposing undesirable trade-offs between the desired levels of performance, capacity overhead, and data protection.

Rather than simply adopting pre-existing RAID algorithms, XtremIO has reinvented data protection from the ground up to leverage the specific properties of flash media. The resulting data protection algorithm (XtremIO Data Protection, or XDP) simultaneously combines the most desirable traits of traditional RAID algorithms, avoids their pitfalls, and brings entirely new and previously impossible capabilities to the XtremIO Storage Array. XtremIO Data Protection also significantly enhances the endurance of the underlying flash media compared to any previous RAID algorithm, an important consideration for an enterprise flash array.
A Brief History of RAID

Disk failures remain one of the primary failure mechanisms and a potential source of data loss in modern storage systems. In the world of spinning hard disks, failures typically mean the simultaneous loss of the ability to read and write. RAID was invented specifically to recover from spinning disk failures by segmenting data across multiple physical drives and, in most RAID algorithms, calculating and storing redundant data that can be used to reassemble the information lost on one or more failed drives.

However, with solid-state drives (SSDs) the failure mechanisms often differ. SSD failures may be complete, partial, or transient, and an SSD data protection scheme must address all of these situations. The topic of why SSDs fail and how exactly they behave upon failure is beyond the scope of this document, but XtremIO has considered all SSD failure mechanisms in the design of XDP.

This document begins with a review of the most common existing RAID schemes (RAID 1, 0, 10, 5, and 6). By analyzing their space efficiency and performance overhead, one can discover why they are not best-suited for protecting against SSD failures. Next, the document discusses how XDP compares against these algorithms and innovates in dimensions not possible with spinning disks.
RAID 1 (Mirroring)

This scheme mirrors all writes to two different disks in parallel. Every user write is translated into two simultaneous writes by the storage array, and reads can be served from either copy. When one of the mirrored disks fails, no information is lost, since the remaining healthy disk holds a full copy of the data, and reads can be served from it. When the failed disk is replaced, data is rebuilt by simply copying it from the healthy disk to the new disk.

Figure 1: RAID 1. All incoming writes are mirrored to two separate disks. This scheme has good rebuild performance, yet requires 50% capacity overhead. In the diagram, two disks are mirroring three blocks of data.

Benefit: Data protection with efficient rebuild.
Drawback: High cost, due to capacity overhead.
Capacity overhead: Very high. Usable capacity is only 50% of raw capacity.
User writes per update: Two writes, one for each side of the mirror.
Block update I/O overhead: One write for each user write.
RAID 0 (Striped)

This scheme splits data evenly across two or more disks (striping) without parity information. RAID 0 provides no data redundancy and is normally used to increase performance, although it can also be used as a way to create a large logical disk out of two or more physical ones (a minimum of two disks). Data can be served in parallel from all the disks in the stripe, resulting in high performance for both read and write operations. RAID 0 performs no parity calculation, and its write performance is the fastest of the RAID levels.

Figure 2: RAID 0. All incoming writes are striped across two or more disks. In the diagram, nine blocks of data are striped across three disks.

Benefit: Excellent performance (as blocks are striped).
Drawback: No redundancy (no mirror and no parity). Should not be used for critical systems.
Capacity overhead: None. Usable capacity is 100% of raw capacity.
User writes per update: One write.
Block update I/O overhead: None.
RAID 10 (Mirroring + Striped)

This scheme combines RAID 0 and RAID 1 and is implemented as a striped array whose segments are RAID 1 arrays. RAID 10 has the same fault tolerance and capacity overhead as RAID 1. Striping the RAID 1 segments achieves high I/O rates. Under certain circumstances, a RAID 10 array can even sustain multiple simultaneous drive failures.

Figure 3: RAID 10. All incoming writes are mirrored to two separate disks and then striped across two or more disks. In the diagram, six blocks of data are mirrored and striped across four disks.

Benefit: Best performance and best rebuild time. Supports very high I/O rates.
Drawback: High cost, high capacity overhead, and limited scalability.
Capacity overhead: Very high. Usable capacity is only 50% of raw capacity.
User writes per update: Two writes, one for each side of the mirror.
Block update I/O overhead: One write for each user write.
RAID 5 (Disk Striping with Parity)

A RAID 5 "stripe" is made up of blocks of data spread across a number of disks and a single parity block (calculated by performing logical XOR operations against the data blocks) stored on an additional disk. The disk containing the parity block is rotated with each new stripe written. RAID 5 stripe sizes are often described as N+1 (where N is the number of data disks in the stripe), with 3+1 and 5+1 being the most common. With RAID 5, the simultaneous loss of two or more drives results in data loss, and the probability of a second disk failing during a rebuild limits the practical stripe width.

Figure 4: A 3+1 RAID 5 arrangement (N = 3). Each stripe consists of three data blocks and a parity block, with the parity position rotating between stripes.

Benefit: Good overall performance with reasonable capacity overhead.
Drawback: Not the best at anything. RAID 10 has better performance. RAID 6 has better data protection.
Capacity overhead: Calculated as 1/(N+1). Thus a 3+1 RAID 5 has 25% capacity overhead and a 5+1 RAID 5 has 16.7% capacity overhead.
User writes per update: N+1 writes (N new data blocks and a parity block update), assuming a full stripe write.
Block update I/O overhead: 1/N writes (where N is the number of data disks in a stripe), assuming a full stripe write.
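The XOR parity described above can be sketched in a few lines of Python. This is an illustrative toy (block names and sizes are invented for the example, not array code): it computes a parity block for a 3+1 stripe and then recovers a lost data block from the survivors.

```python
# Toy sketch of RAID 5 style XOR parity: the parity block is the XOR of the
# data blocks, so any single missing block is the XOR of the survivors.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks of a 3+1 stripe
parity = xor_blocks(stripe)            # the P block

# Simulate losing the second data block and rebuilding it:
survivors = [stripe[0], stripe[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == b"BBBB"              # the lost block is recovered
```

The same identity is what makes a single-drive rebuild possible: each missing block is reconstructed from the surviving blocks of its stripe plus P.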
RAID 6 (Disk Striping with Double Parity)

RAID 6 is similar to RAID 5, with the exception that each stripe contains not one, but two parity blocks. In RAID 6, one parity block is calculated by performing logical XOR operations across the data columns, and the other parity block is calculated by encoding diagonals in a matrix of stripes. This gives RAID 6 the additional benefit of being able to survive two simultaneous drive failures. RAID 6 was originally conceived for large capacity (1TB or more) SATA drives, where long rebuild times left RAID 5 storage arrays in an extended degraded state, resulting in an unacceptably high probability of data loss. RAID 6 stripe sizes are often described as N+2 (where N is the number of data disks in the stripe). With RAID 6, the simultaneous loss of three or more drives results in data loss.

Figure 5: A 5+2 RAID 6 arrangement (N = 5). Each stripe consists of five data blocks and two parity blocks (P and Q).

Benefit: Strong data protection.
Drawback: High computational overhead can lead to lower performance.
Capacity overhead: Calculated as 2/(N+2). Thus a 5+2 RAID 6 has 28.6% capacity overhead and an 8+2 RAID 6 has 20% capacity overhead.
User writes per update: N+2 writes (N new data blocks and two parity block updates), assuming a full stripe write.
Block update I/O overhead: 2/N writes (where N is the number of data disks in a stripe), assuming a full stripe write.
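The capacity-overhead percentages quoted in the RAID 5 and RAID 6 sections, and the 8% figure claimed for XDP later in this paper, all follow from the same formula: parity blocks divided by total blocks in the stripe. A quick Python check:

```python
# Capacity overhead of an n+p stripe: the fraction of raw capacity
# consumed by parity rather than user data.
def overhead(n, p):
    """Parity overhead for a stripe with n data columns and p parity columns."""
    return p / (n + p)

print(f"3+1 RAID 5: {overhead(3, 1):.1%}")   # 25.0%
print(f"5+1 RAID 5: {overhead(5, 1):.1%}")   # 16.7%
print(f"5+2 RAID 6: {overhead(5, 2):.1%}")   # 28.6%
print(f"8+2 RAID 6: {overhead(8, 2):.1%}")   # 20.0%
print(f"23+2 XDP:   {overhead(23, 2):.1%}")  # 8.0%
```

The trend is the reason wide stripes are attractive: the wider the stripe, the smaller the parity tax, which is exactly what XDP's 23+2 layout exploits.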
Comparing RAID 1, 10, 5 and 6

Storage administrators are often required to make a hard choice between capacity, data protection level, and performance. Performance-oriented workloads are typically provisioned on RAID 10, but at the high cost of 50% capacity overhead. Less sensitive workloads can use RAID 5, and large data sets with lower performance needs can be highly protected with RAID 6. The challenge is to dynamically adapt to the changing types of stored data. A choice made today may leave data on a less-than-optimal RAID level in the future. Although some storage systems allow live migrations between RAID levels, this requires proactive administration and may need to be repeated as data continues to evolve. In light of these challenges, storage administration is often regarded as an "art" rather than an exact science.

Instead of simply adopting one (or more) of the existing RAID algorithms and implementing them on SSDs, XtremIO chose to develop a new type of data protection scheme that combines the best attributes of pre-existing RAID levels while avoiding their drawbacks. Furthermore, since flash endurance is a special consideration in an all-flash array, XtremIO deemed it critical to develop a data protection algorithm that requires fewer write cycles. This maximizes the service life of the array's SSDs while also delivering higher performance, since more I/O cycles are available for host writes (front-end I/Os), as compared to internal array operations (back-end I/Os).

Figure 6: Existing RAID schemes make many tradeoffs between good capacity utilization, high performance, and superior protection. XDP combines the best attributes of various RAID levels and extends them, providing unique new capabilities previously not possible.
An Introduction to XtremIO's Advanced Data Protection Scheme

XtremIO's Data Protection scheme is very different from RAID in several ways. Since XDP always operates within an all-flash storage array, several design criteria are important:

Ultra-low capacity overhead: Flash capacity is more expensive than disk capacity. Thus it is desirable to use very wide striping to lower the capacity overhead. XDP uses a 23+2 stripe width*, equating to a capacity overhead of just 8%.

High levels of data protection: XDP is an N+2 scheme, making it tolerant of two simultaneous SSD failures in each X-Brick.

Rapid rebuild times: XDP allows for very fast rebuilds, not just because flash is a fast underlying media technology, but also because XtremIO's content-aware architecture only needs to rebuild the written space on a drive. Empty spaces are detected and skipped. Furthermore, XDP has flash-specific parity encoding algorithms (described later in this document) that enable rebuilds to occur with fewer I/O cycles to the drives. In addition, rebuild is performed simultaneously on all remaining drives, which further accelerates the rebuild process.

Flash endurance: XDP requires fewer writes per stripe update than any RAID algorithm. This extends flash endurance up to 2.5x longer than with standard RAID implementations.

Performance: By requiring fewer I/O operations per stripe update, XDP leaves more drive I/O cycles available for host (front-end) I/O, resulting in superior array performance.

How does XDP achieve these seemingly contradictory goals simultaneously? The answer lies in the algorithm's ability to place and access data in any location on any SSD. Past RAID algorithms had to consider how to keep data contiguous so as to avoid disk drive head seeks.
XDP presumes that random access media such as flash is present in the array, and thus it can lay out data and read it back in highly efficient ways that would heavily penalize a disk-based RAID algorithm, but that have no ill effects in XtremIO's all-flash architecture. XDP uses a variation of N+2 row and diagonal parity. Refer to Figure 7 below for a sample of XDP's data layout.

* For a 10TB Starter X-Brick (5TB), XDP uses an 11+2 stripe width.
Figure 7: Sample XDP data layout, shown for a small stripe with data columns D0 through D4, a row parity column (P), a diagonal parity column (Q), and k - 1 rows, where k is prime. XDP makes use of both row (P) and diagonal (Q) parity calculations. Here a 5+2 stripe is shown, but in actuality XDP uses a 23+2 stripe. Each XDP-protected stripe uses 25 columns and 28 rows.

In Figure 7, XDP's row-based parity is shown by the red rectangle and the parity block is stored in the P column. XDP's diagonal parity is shown by the blue rectangle and is stored in the Q column. The location of the Q parity block corresponds to the numbering scheme (number four in the diagram) of the diagonal. In order to effectively calculate the diagonal parity (which spans multiple row-based stripes), XDP writes data in a stripe size of 23 * 28 = 644 data blocks, allowing the diagonal parity to be calculated while all data is still in memory.
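The prime stripe geometry matters because it makes the diagonals interact cleanly with the columns. The toy Python sketch below illustrates the general row-and-diagonal idea (in the spirit of published row-diagonal parity schemes; the diagonal assignment rule is an assumption for illustration, not XtremIO's exact proprietary encoding): with k prime, each diagonal touches each column at most once, so a single failed drive costs any diagonal at most one block.

```python
# Toy layout: assign block (row, col) to diagonal (row + col) % k, with k prime.
# Each diagonal then contains at most one block per column, which is what lets
# a rebuild combine row-parity and diagonal-parity recoveries.
k = 5                     # prime, matching the 5-wide sample of Figure 7
rows, cols = k - 1, k     # k - 1 rows per stripe

diagonals = {}
for r in range(rows):
    for c in range(cols):
        diagonals.setdefault((r + c) % k, []).append((r, c))

for cells in diagonals.values():
    columns = {c for _, c in cells}
    assert len(columns) == len(cells)   # no column repeats within a diagonal
```

Because no diagonal holds two blocks from the same drive, a lost drive leaves every diagonal with at most one missing block, each recoverable by a single XOR.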
Efficient Stripe Updates with XDP

But how does XDP's data layout benefit the XtremIO array as compared to traditional RAID? The answer lies in the I/O overhead required to perform a stripe update. With all parity-based RAID schemes, it is much faster (due to lower I/O overhead) to do full stripe writes (writing data to an empty location on disk) than to update a single block within a stripe (updating data in an existing location). A full stripe write simply requires that parity be calculated and the data be written. However, an update to an existing stripe requires the data and parity blocks in the stripe to first be read; then new parity is calculated based on the updated data, and the resulting data and parity blocks are written. For example, RAID 6 incurs three reads (the existing data block and both parity blocks) and three writes (the new data block and the two recalculated parity blocks) for each single block update.

The way to lower the average I/O overhead for writing to a RAID 6 volume is to do as many full stripe writes, and as few single block updates, as possible. However, this becomes increasingly difficult as the array becomes full. This is one of the reasons why array performance tends to degrade substantially as capacity runs low. The array can no longer find empty full stripes and must resort to performing multiple partial-stripe updates. This incurs much higher levels of back-end I/O, consuming disk IOPS that otherwise would have been available to service front-end host requests.

Full Stripes vs. Stripe Updates
In any parity-based RAID algorithm (including XDP), there is less I/O overhead in writing a full stripe than in updating an existing stripe with new data. A full stripe write only incurs the computational overhead of calculating parity, and the drives themselves need only write the new data and associated parity blocks.
An update requires reading existing data blocks and parity blocks, computing new parity, and writing the new data and updated parity. Thus, full stripe writes are desirable and, when an array is empty, easy to perform. However, as an array fills, most writes will be overwrites of existing data blocks, since empty space in which to perform a full stripe write becomes harder to find, forcing more stripe updates to occur and lowering performance. The common strategy for handling this situation is to implement a garbage collection function that looks for stale data in existing stripes and re-packs it into full stripes, allowing space to be reclaimed so that there are (hopefully) always full stripes available for new writes. In a flash array this is an undesirable approach, because the act of garbage collection leads to unpredictable performance (the garbage collection function itself consumes I/O cycles and array computational overhead) and wears the flash sooner (picking up data from one location and rewriting it to a new location amplifies the number of writes the flash media endures).

XtremIO engineered a completely unique solution to this problem. Rather than garbage collection, the XtremIO array works with the assumption that stripe updates will be predominant as the array is in service over time. XDP's algorithms efficiently place new writes into the emptiest stripes in the array while providing consistent and predictable performance as the array is filled. XDP avoids the write amplification problem that garbage collection creates. The result is that the XtremIO array delivers consistent performance at all times while extending flash endurance up to 2.5x longer than with traditional RAID implementations.
The XtremIO array and XDP work in a fundamentally different way. The XtremIO array stores blocks based not on their logical address, but on their individual content fingerprint. At the physical layer, the XtremIO array has total freedom on where to write new unique data blocks (a capability only feasible in an all-flash design). Traditional arrays update logical block addresses in the same physical location on disk (causing the high I/O overhead of a stripe update). All-flash arrays typically have a garbage collection function that attempts to consolidate stale data in order to free up full, empty stripes. However, an update to a logical block address on XtremIO is written to a new location on disk, chosen based on the content's fingerprint (if the content happens to already exist in the array, the block is simply deduplicated).

As with traditional RAID, XDP will try to do as many full stripe writes as possible by bundling new and changed blocks and writing them to empty stripes available in the array. However, with XDP the unavailability of a full stripe causes neither the high levels of partial stripe update overhead found in traditional RAID nor the garbage collection present in other flash arrays, because XtremIO does not update data in place. Rather, the array always places data in the emptiest stripe available. And, as explained below, XDP's I/O overhead for a partial stripe update is typically far lower than with traditional RAID. The net result is that XtremIO almost never incurs the full RAID 6 I/O overhead of a stripe update. XtremIO's average I/O overhead for such an operation is much closer to that of a full stripe write. In fact, XtremIO's average update performance is nearly 40% better than that of RAID 10, the RAID level with the highest performance.

To demonstrate how this works, consider the situation where the XtremIO array is 80% full, which means there is 20% free space.
This example is used because it illustrates how XDP maintains performance even when the array is nearly filled. This is a condition that causes problems for other flash array designs, because they rely on garbage collection mechanisms to move data around in order to create a completely empty RAID stripe for new incoming data. This becomes increasingly hard to do as the array fills, especially in the face of heavy write workloads. In contrast, XDP does not require a full stripe. XDP was conversely designed to excel when dealing with partial stripe writes, which become the normal, steady-state operating condition once an array has been in service for some time.

When new data enters the XtremIO array, XDP chooses the emptiest stripe, because it incurs the lowest overhead of parity updates and disk accesses. After the update this stripe will be full, while, on average, the rest of the stripes will be slightly more than 20% free. Repeating this write/delete scenario on an array that is 80% full creates an even distribution of free space across the stripes that ranges from zero (full stripe) to 2x the array free space, which in this case is 40% (2 x 20% free).
Figure 8: XDP ranks all stripes based on how full or empty they are and always writes to the emptiest stripe. In this simplified view there are ten data drives (columns) and ten stripes (rows). The emptiest stripes are ranked at the bottom. The example shows an array that is 80% full.

Since XDP has 644 data blocks per stripe, and on average a stripe update is performed with 40% free space in the stripe (on an 80% full array)*, XDP has 644 x 40% ≈ 257 data blocks in the stripe available per user write. When the stripe is updated, there is an overhead of parity re-writes: XDP has 28 blocks for row parity and 29 blocks for diagonal parity, for a total of 28 + 29 = 57 write operations. Spending 57 parity writes to store 257 blocks of user data is a write overhead of 1.22 I/Os per user write. XDP's read overhead per stripe update is identical at 1.22 I/Os, reflecting the need to read existing data and parity blocks in order to perform the updated parity calculations.

* Because of the way XDP works, on average the emptiest stripe in XDP has twice as many free blocks as the system-level free space.
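The overhead arithmetic above can be checked directly. A short Python sketch, using the stripe geometry and the 80%-full assumption from the text:

```python
# Re-deriving the ~1.22 I/O overhead per user write quoted above.
data_blocks = 23 * 28            # 644 data blocks per XDP stripe
free_fraction = 0.40             # emptiest stripe on an 80% full array
user_writes = int(data_blocks * free_fraction)   # ~257 user data blocks
parity_writes = 28 + 29          # row parity blocks + diagonal parity blocks

overhead = (user_writes + parity_writes) / user_writes
print(round(overhead, 2))        # 1.22 writes per user write
```

The read side comes out the same way, since the data and parity blocks involved must first be read to recompute parity.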
Figure 9: XDP always writes new data into the stripe with the most empty space (the one at the bottom of the stack). In this example, four blocks were free (two were empty and two were available to be overwritten). Through this method, XDP amortizes the stripe update I/O overhead as efficiently as possible.

Figure 10: Every time a stripe is updated, XDP re-ranks all stripes so that the next update will occur in the emptiest stripe. As the hosts overwrite logical addresses, the old locations on SSD are marked as free to be overwritten.
Table 1: Comparison of the I/O overhead of different RAID schemes against XDP.

Algorithm   Reads per Stripe Update   Traditional Algorithm Read Disadvantage   Writes per Stripe Update   Traditional Algorithm Write Disadvantage
XDP         1.22                      N/A                                       1.22                       N/A
RAID 1      0                         None                                      2                          1.6x
RAID 5      2                         1.6x                                      2                          1.6x
RAID 6      3                         2.5x                                      3                          2.5x

In a flash-based system, the disadvantage of traditional RAID manifests itself not only in lower performance, but also in reduced endurance of the flash media. Since flash drives can endure only a finite number of write cycles, XDP allows XtremIO arrays not only to perform better, but also to last longer. This is a significant advantage of high value in an enterprise storage array. Consider XtremIO against another product that implements RAID 6. Both provide N+2 data protection, but XtremIO allows the flash to last 2.5 times longer while delivering 2.5 times better I/O performance. XDP's advantage holds true not only for updates, but also for other important storage operations, such as rebuilds upon SSD failures.
Efficient Rebuilds

XDP's innovative diagonal-based parity scheme plays an important role during drive rebuilds. While rebuilding a failed drive after a single drive failure can be performed naively, using row-based parity information only (which requires reading K* blocks per failed block), this is not the method XDP uses.

Figure 11: With RAID, a single drive failure (D0) can be rebuilt using row-based parity (P). This requires reading K blocks (the surviving data blocks plus P) to rebuild each missing data block on the failed drive.

XDP is far more efficient than this method. XDP accelerates rebuilds by making use of both row and diagonal parities. Assuming that D0 fails, XDP recovers the first two blocks (D0-1 and D0-2) using their row parity. With traditional RAID, the row-based parity rebuild process would simply continue with the remaining rows.

* K: Number of data columns in a stripe, i.e. the overall number of disks in the stripe, excluding parity disks. In the XtremIO Storage Array, K = 23.
Figure 12: XDP's hyper-efficient rebuild process. In this example, the first two rows are rebuilt using the conventional row-based parity.

However, with XDP, as the system reads these two rows, it keeps some of their data blocks in memory, because they can be leveraged to perform diagonal-based parity rebuilds. In Figure 12, four of the surviving data blocks are retained in this fashion. With this information already available, XDP does not need to read all of the remaining rows in order to complete the rebuild, because many of the diagonal parity stripes can be completed with just a few additional blocks. As a result, the number of read I/Os to the SSDs is minimized, and the overall rebuild time and efficiency improve. For example, the diagonal stripe numbered four, which contains block D0-3 that must be rebuilt, already has two of its data blocks in memory from the prior row-based rebuild operations.

Figure 13: XDP has used row-based parity to rebuild D0-1 and D0-2. It has also retained the blocks marked in green in memory; therefore, they do not need to be reread to complete the rebuild. This data is used to complete the rebuild using the diagonal parity (Q).
Figure 14: XDP's diagonal-based parity enables rebuilds to occur with fewer I/O operations. Here, D0-3 is rebuilt using a combination of information retained during the row-based rebuild of D0-1 and D0-2, along with diagonal information from one additional data block and the corresponding Q parity block. Thus, rebuilding D0-3 only requires two reads, whereas in a traditional RAID it would require five reads (four data blocks and P). Similarly, rebuilding D0-4 only requires reading one more data block and its diagonal parity.

The rebuild process is carried out simultaneously on all remaining drives. The system rebuilds a large number of stripes at the same time, using all of the remaining SSDs in XDP. This accelerates the process, resulting in reduced rebuild time. This method does not provide any advantage when rebuilding lost parity columns, and thus XDP requires a little more than K/2 reads on average (recall that each disk contains both data and parity columns in a rotating parity scheme). It is easy to see that such a scheme also balances the rebuild reads evenly across the surviving disks.
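The rebuild comparison in Table 2 below follows from these counts. A quick sketch (the numbers are taken from the text's K/2 figure, as rough bookkeeping rather than a trace of the actual algorithm):

```python
# Approximate reads needed to rebuild one failed data column of a stripe.
K = 23          # data columns per stripe in an XtremIO X-Brick
rows = 28       # rows per XDP stripe

raid_reads = K * rows          # row-parity-only rebuild: K reads per lost block
xdp_reads = (K / 2) * rows     # diagonal reuse roughly halves the reads

print(raid_reads / xdp_reads)  # 2.0, i.e. the 100% disadvantage in Table 2
```

Halving the back-end reads both shortens the rebuild window and leaves more drive I/O cycles available for host traffic while the rebuild runs.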
Table 2: Comparison of XDP and RAID rebuild overhead.

Algorithm   Reads to Rebuild a Failed Disk Stripe of Width K*   Traditional Algorithm Disadvantage
XDP         K/2                                                 N/A
RAID 10     1                                                   None
RAID 5      K                                                   100%
RAID 6      K                                                   100%

XDP's efficiency advantage leads both to faster rebuild times and to better array performance during rebuilds, since fewer back-end I/Os are required to complete the rebuild, leaving more front-end I/Os available for user data.

* K: Number of data columns in a stripe, i.e. the overall number of disks in the stripe, excluding parity disks. In the XtremIO Storage Array, K = 23.
XDP Administrative Benefits

XDP's benefits extend beyond superior data protection, performance, capacity utilization, and flash endurance. The algorithm is completely implemented in software (rather than in a hardware controller, such as an ASIC or FPGA), allowing tremendous flexibility as future X-Brick designs are rolled out. This flexibility is demonstrated in how XDP responds dynamically to common fault conditions.

A standard XtremIO X-Brick contains 25 SSDs, with 23 for data and 2 for parity. When one of the SSDs in an X-Brick fails, XDP quickly rebuilds the failed drive, while dynamically reconfiguring incoming new writes into a 22+2 stripe size to maintain N+2 double failure protection for all new data written to the array. Once the rebuild completes and the failed drive is replaced, incoming writes are once again written with the standard 23+2 stripe.

This adaptability allows XDP to tolerate a sequence of SSD failures without having to rush to the data center to replace the failed SSDs. XDP maintains a small reserve capacity to perform a drive rebuild (this reserve is factored out of XtremIO's usable capacity) and can continue to rebuild failed drives again and again while serving user I/Os, as long as there is still available capacity in the array: up to 5 failures for a full X-Brick and 3 failures for a 10TB Starter X-Brick (5TB). The array will have less available capacity (until the faulty drives are replaced) and may be a little slower because of the fewer working drives, but it will remain healthy and protected. This unique capability allows administrators to defer drive replacements until a convenient time, which is especially important in remote, secure, and "lights out" data centers. A side benefit is that the array does not require hot spares, so every slot in the system contains an SSD that actively stores data and adds to the array's performance.
Conclusion

XtremIO's Data Protection scheme represents a leap forward in storage array technology. By designing for and taking advantage of the unique properties of flash storage, XDP enables XtremIO arrays to perform better and last longer while being easier to use and costing less. XDP's benefits include:

- N+2 data protection
- Incredibly low capacity overhead of 8%
- Performance superior to any RAID algorithm
- Flash endurance superior to any RAID algorithm
- Faster rebuild times than traditional parity-based RAID algorithms
- Superior robustness with adaptive algorithms that fully protect incoming data, even when failed drives exist in the system
- Administrative ease through fail-in-place support
- Utilizing free space, as opposed to a hot spare, for SSD rebuilds, increasing endurance and ensuring the rebuild of at least a single SSD failure even if the array is full

With these advantages, the XtremIO all-flash array gives storage administrators the best of all worlds. There is no longer any need to guess which RAID scheme to use for a particular data set, and no trade-offs to make. With XtremIO, data gets the best protection and the best performance at the same time.
How to Learn More

For a detailed presentation explaining the XtremIO Storage Array capabilities and how it substantially improves performance, operational efficiency, ease-of-use, and total cost of ownership, please contact XtremIO. We will schedule a private briefing in person or via web meeting. XtremIO provides benefits in many environments, but is particularly effective for virtual servers, virtual desktops, and database applications.

Contact Us
To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local representative or authorized reseller.

EMC, the EMC logo, XtremIO and the XtremIO logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. VMware is a registered trademark of VMware, Inc., in the United States and other jurisdictions. All rights reserved. Published in the USA.

EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.