A Performance Comparison of Open-Source Erasure Coding Libraries for Storage Applications
Catherine D. Schuman and James S. Plank
Technical Report UT-CS
Department of Electrical Engineering and Computer Science
University of Tennessee
August 8, 2008

The current version of this paper is not intended for publication, but to provide the seeds for collaboration with other erasure code developers and users.

Abstract

Erasure coding is a fundamental technique to prevent data loss in storage systems composed of multiple disks. Recently, there have been multiple open-source implementations of a variety of erasure codes. In this work, we present a comparison of the performance of various codes and implementations, concentrating on encoding and decoding. It is hard to draw overarching conclusions from a single performance study. However, performance data is important to gain an understanding of the real-life performance ramifications of code properties and implementation decisions. The significance of this paper is to guide those who use and design codes, so that they may be able to predict what performance to expect when using an erasure code. One important, although obvious, conclusion is that reducing cache misses is more important than reducing XOR operations.

1 Introduction

In recent years, erasure codes have moved to the fore to prevent data loss in storage systems composed of multiple disks. Storage companies such as Cleversafe [6], Data Domain [32], Network Appliance [15] and Panasas [27] are delivering products that use erasure codes for data availability. Academic projects such as Oceanstore [24], LoCI [2] and Pergamum [26] are doing the same. And equally important, major technology corporations such as HP [29], IBM [8, 9] and Microsoft [11, 12] are performing active research on erasure codes for storage systems.

Along with proprietary implementations of erasure codes, there have been numerous open source implementations of a variety of erasure codes that are available for download [6, 13, 16, 18, 28]. While one cannot expect the same level of performance and functionality from an open source implementation that one gets from a proprietary implementation, it is the intent of some of these projects to provide storage system developers with high quality tools. As such, there is a need to understand how these codes and implementations perform.

In this paper, we seek to gain such an understanding. We have installed all of the above erasure coding implementations on machines at the University of Tennessee, and have performed two benchmarking tests on them.
The first performs a fundamental operation of taking a large file (in this case, a video file), breaking it up into slices for multiple storage nodes and encoding it into other slices for other storage nodes. The second performs another fundamental operation, which is to recompute the contents of the file from the remaining slices when some of the slices have failed.

The conclusions that we draw are somewhat obvious. The coding technique, implementation and word size all affect the performance of coding. There is a great deal of variation in encoding and decoding performance of multiple implementations of the same coding technique. For standard Reed-Solomon coding, the Zfec implementation [28] performs the fastest. The Cauchy Reed-Solomon implementation of the Jerasure library [18], which optimizes the encoding matrices, vastly outperforms the other implementations. In any performance study, effects due to the memory hierarchy must be observed, and a final experiment demonstrates clearly that the encoding parameters should take account of the machine's cache size to achieve best performance.

2 Nomenclature and Basic Erasure Coding

It is an unfortunate consequence of the history of erasure coding research that there is no unified nomenclature for erasure coding. We borrow terminology mostly from Hafner et al [10], but try to conform to more classic coding terminology (e.g. [4, 14]) when appropriate.

Our storage system is composed of an array of n disks, each of which is the same size. Of these n disks, k of them hold data and the remaining m hold coding information, often termed parity. We label the data disks D0, ..., Dk-1 and the parity disks C0, ..., Cm-1. An example system is pictured in Figure 1.

Figure 1: An example storage system with erasure coding. There are k data disks and m coding disks.

Disks are composed of sectors, which are the smallest units of I/O to and from a disk. Sector sizes typically range from 512 to 4096 bytes and are always powers of two. Most erasure codes are defined such that disks are logically organized to hold bits; however, when they are implemented, all the bits in one sector are considered as one single entity called an element. When a code specifies that a bit on drive C0 equals the exclusive-or of the corresponding bits on drives D0 and D1, that really means that a sector on drive C0 will be calculated to be the parity (bitwise exclusive-or) of the corresponding sectors on drives D0 and D1. We will thus use the terms bit, sector and element interchangeably in this paper. We also use the term packet to denote a unit logically equivalent to a sector, but that can be much bigger.

A code typically has a word size w, which means that for the purposes of coding, each disk is partitioned into strips of w sectors each. Each of these strips is encoded and decoded independently from the other strips on the same disk. The collection of strips from all disks on the array that encode and decode together is called a stripe. Thus, to define an erasure code, one must specify the word size w, and then one must describe the way in which sectors on the parity strips are calculated from the sectors on the data strips in the same stripe.
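To make the layout terminology concrete, the short sketch below works through strip and stripe sizes for one hypothetical choice of parameters (k = 6, m = 2, w = 8, and 1 KB packets standing in for sectors); these values are illustrative only and are not the parameters used in the experiments that follow.

```c
/* Hypothetical illustration of the layout terms; not the experimental setup. */
#include <stdio.h>

int main(void) {
    int k = 6, m = 2;      /* data and coding disks                        */
    int w = 8;             /* word size: sectors (or packets) per strip    */
    int packet = 1024;     /* bytes per packet, standing in for a sector   */

    int strip  = w * packet;         /* contiguous coding unit on one disk   */
    int stripe = (k + m) * strip;    /* the strips that encode/decode together */
    printf("strip = %d bytes, stripe = %d bytes\n", strip, stripe);
    return 0;
}
```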
For small values of m, Maximum Distance Separable (MDS) codes are highly desirable. These are codes having the property that when the number of failed disks in the array is less than or equal to m, the original data can always be reconstructed. In this work, we will only consider MDS codes. We also only consider horizontal codes, where disks hold exclusively data or parity, rather than vertical codes (e.g. [9, 30, 31]) that require disks to hold both.

2.1 Standard MDS Codes

The grandfather of erasure codes is the set of Reed-Solomon codes [23]. These are general purpose MDS codes that may be constructed for any values of k and m. Their classic presentation is for noisy communication channels, where serial data must be transmitted in a way that tolerates arbitrary (and often bursty) bit flips. These are denoted errors. Storage systems do not fail with errors, but with erasures, where entire disks or sectors become unusable. Erasures differ from errors in that their failures are identifiable. Thus the treatment of Reed-Solomon coding for storage differs somewhat from its classical treatment. Tutorial instructions for employing Reed-Solomon codes in storage systems are available [17, 21], and there are several open-source implementations [16, 18, 28].

The crux of Reed-Solomon coding is linear algebra. In particular, encoding is defined by a matrix-vector product where the vector is composed of k words of data, and the product is composed of m words of encoded data. The matrix is called a generator matrix, although some sources call it a distribution matrix. The words are w bits long, where k + m ≤ 2^w. A special kind of arithmetic, termed Galois Field arithmetic, GF(2^w), is used to perform the matrix-vector product. In this arithmetic, addition is equal to bitwise exclusive-or (XOR), and multiplication/division may be implemented in a variety of ways, all of which are much more expensive than XOR. Classic Reed-Solomon coding systems do not follow Figure 1, but instead treat each disk as a collection of words, and encode each set of words independently from each other set. As such, it is extremely convenient to have words fall on machine word boundaries, meaning that for classic Reed-Solomon coding, w is restricted to be 8, 16, 32 or 64.
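To make the matrix-vector formulation above concrete, the sketch below encodes one word per device over GF(2^8) using the primitive polynomial 0x11d (a common choice for w = 8). The generator rows and data words are hypothetical examples rather than values taken from any of the libraries discussed, and a production coder would derive the matrix from a Vandermonde or Cauchy construction and use table-driven multiplication instead of this bit-by-bit loop.

```c
/* Sketch of Reed-Solomon encoding as a matrix-vector product over GF(2^8).
 * The generator rows and data words below are hypothetical examples. */
#include <stdint.h>
#include <stdio.h>

/* Multiply in GF(2^8) with primitive polynomial 0x11d, bit by bit. */
static uint8_t gf8_mult(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}

int main(void) {
    enum { K = 4, M = 2 };
    uint8_t gen[M][K] = { {1, 1, 1, 1},        /* hypothetical generator rows, */
                          {1, 2, 4, 8} };      /* one per coding device        */
    uint8_t data[K]   = { 0x0a, 0x0b, 0x0c, 0x0d };  /* one word per data device */
    uint8_t code[M];

    for (int i = 0; i < M; i++) {              /* addition in GF(2^w) is XOR   */
        code[i] = 0;
        for (int j = 0; j < K; j++)
            code[i] ^= gf8_mult(gen[i][j], data[j]);
    }
    printf("C0 = 0x%02x, C1 = 0x%02x\n", code[0], code[1]);
    return 0;
}
```

Because addition in GF(2^w) is XOR, a row of all ones reduces to plain parity; the remaining rows need the far more expensive multiplications, which is precisely the cost that Cauchy Reed-Solomon coding, described next, converts into XORs.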
In 1995, Blomer et al presented Cauchy Reed-Solomon (CRS) codes [5]. The major difference with CRS codes is that they expand the generator matrix by a factor of w in each dimension, converting each matrix element into a w × w bit matrix. Additionally, they split the multiplicand and product vectors into k and m sets of w packets. This allows the product to be computed using only XOR operations, and it also frees w from the restriction of having to align on word boundaries. With CRS coding, the system looks exactly as in Figure 1, with each strip consisting of w packets. Stripes are each encoded and decoded independently.

The performance of CRS coding depends directly on the number of ones in the expanded generator matrix. In 2006, Plank and Xu made the observation that there are multiple ways of constructing the generator matrix for CRS, and that significant savings may be achieved by choosing Good matrices [22]. The Jerasure library extends this by massaging the matrix further [18]. In this work, we will refer to Original matrices, which are those defined by Blomer et al, and Good matrices, which are the improved matrices derived in the Jerasure library. Besides the Jerasure library, CRS coding has been implemented in open-source form by Luby [13] and by Cleversafe [6]. Both use Original matrices for coding.
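The sketch below shows, for a tiny made-up bit matrix (w = 2, k = 2, m = 1), how this expansion turns encoding into nothing but XORs of packets: a 1 in row r, column c of the expanded matrix means that data packet c is XORed into coding packet r. The matrix and sizes here are for illustration only and are not Cauchy-derived matrices from any of the libraries.

```c
/* Sketch: bit-matrix encoding (as in CRS or the minimal density codes) done
 * entirely with XORs of packets.  Matrix and sizes are made-up examples. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define W      2          /* packets per strip                              */
#define K      2          /* data devices                                   */
#define M      1          /* coding devices                                 */
#define PACKET 8          /* bytes per packet (1 KB or more in real coders) */

/* XOR one packet into another: the inner loop whose length the packet size
 * controls. */
static void xor_packet(uint8_t *dst, const uint8_t *src) {
    for (int i = 0; i < PACKET; i++) dst[i] ^= src[i];
}

int main(void) {
    /* Expanded generator bit matrix: M*W rows by K*W columns. */
    uint8_t bitmatrix[M * W][K * W] = {
        {1, 0, 1, 1},
        {0, 1, 1, 0},
    };
    uint8_t data[K * W][PACKET];      /* k strips of w packets each */
    uint8_t code[M * W][PACKET];      /* m strips of w packets each */

    memset(data, 0xab, sizeof(data)); /* arbitrary payload          */
    memset(code, 0, sizeof(code));
    for (int r = 0; r < M * W; r++)
        for (int c = 0; c < K * W; c++)
            if (bitmatrix[r][c])
                xor_packet(code[r], data[c]);

    printf("first byte of coding packet 0: 0x%02x\n", code[0][0]);
    return 0;
}
```

The number of ones in the bit matrix is exactly the number of packet XORs performed per stripe, which is why the distinction between Original and Good matrices translates directly into encoding speed.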
2.2 RAID-6 Codes

RAID-6 systems are restricted storage systems for which m = 2. In a RAID-6 system, the two parity drives are called P and Q. Currently, most commercial storage systems that implement high availability use RAID-6 [15, 27, 32].

There are several specialized coding algorithms for RAID-6. First, there is a clever tweak to Reed-Solomon coding that improves encoding performance for RAID-6 [1]. Next, there are two algorithms for RAID-6 that make use of diagonal parity constructions. These are the EVENODD [3] and RDP [7] codes. RDP in particular features optimal encoding and decoding performance for certain values of k. Both codes are laid out as in Figure 1, where w must be a number such that w + 1 is prime and either k ≤ w + 1 (EVENODD) or k ≤ w (RDP).

A final class of RAID-6 codes are the Minimal Density codes, which work using a generator bit matrix as CRS coding does. However, unlike CRS coding, these matrices have a provably minimal number of ones. Three different constructions of Minimal Density codes exist: Liberation codes [20] for when w is prime, Blaum-Roth codes [4] for when w + 1 is prime, and the Liber8tion code [19] for w = 8. In all cases, k ≤ w. The theoretical performance of the minimal density codes lies between RDP and EVENODD when k is near w, and outperforms both when k is much smaller than w.

Finally, techniques to optimize the calculation of the bit matrix-vector product have been explored by several researchers [12, 10, 20]. The Jerasure library implements the Code-specific Hybrid Reconstruction algorithm of Hafner et al [10], which can drastically reduce the number of XORs required for CRS encoding/decoding, and for decoding when using the minimal density techniques [19, 20].

3 Description of Open Source Libraries Tested

We used the following open source erasure coding libraries for our tests.

LUBY: CRS coding was developed by Luby et al for use in the PET project at ICSI. It was developed to be fast enough to run in real time on small blocks. The library is written in C and is available for free download [13].

SCHIFRA: In July, 2000, the first version of the Schifra library was made available for free download. The library is written in C++, with a robust API and support [16]. Documentation of the library and a high-performance version are available under a separate license. Thus, the version we have tested does not represent the best performance of Schifra. However, it does represent the performance a developer can expect from the freely available download.

ZOOKO: The Zfec library for erasure coding has been in development since 2004. The latest version (1.4.0) was posted in January, 2008. The library is programmable and portable. It includes command-line tools and APIs in C, Python and Haskell [28]. After evaluating several coding methodologies, the authors based their implementation on a well-known implementation of classic Reed-Solomon coding by Rizzo [25].

JERASURE: In September, 2007, the first version of the Jerasure library was made available [18]. The library is written in C, and its intent is to be very flexible, implementing a wide variety of coding techniques. Standard Reed-Solomon coding is implemented, along with CRS coding, including the improved generator matrices, and the minimal density RAID-6 codes. It is released under the GNU LGPL.

CLEVERSAFE: In May, 2008, Cleversafe released the first open source version of its dispersed storage system [6]. Written entirely in Java, it supports the same API as Cleversafe's proprietary system, which is notable as it is the first commercial distributed storage system to implement availability beyond RAID-6. For this paper, J. Resch of Cleversafe stripped out the erasure coding part of the open source distribution. It is based on Luby's original CRS implementation.
EVENODD/RDP: The implementation of EVENODD and RDP coding is a private version developed by Plank for comparison with Liberation codes [20]. Since EVENODD and RDP are patented, this implementation is not available to the public; its sole intent is performance comparison.
4 Main Experimental Setup

The scenario that we wish to explore is this: Suppose one wants to take a large file and distribute it to n = k + m storage servers so that it may be reconstructed when up to m of the servers are down or unavailable. We explore this scenario with two experimental tests.

The first test is an encoding test. With this test, we take a large video file (736 MB), divide it into k data files and encode it into m coding files using the various methods, libraries, and word sizes. Each of the data and coding files is of size 736/k MB. We do not perform the act of spreading the various files among storage servers, since that activity has the same performance regardless of the coding method or implementation. To test the performance, we implemented a dummy program that performs all of the file activities (reading the video file and creating the output files) but performs no coding; the m coding files simply contain zeroes. We subtract the running time of this program from the running time of the other encoders to remove the time for file activities and localize the performance of encoding. We then present the performance of encoding as the rate at which all encoding was completed.

For example, suppose k = 6 and m = 2. Our dummy implementation breaks the video file into six data files of 736/6 ≈ 123 MB each, and creates two coding files of the same size, which contain all zeros. On average, this takes 53.8 seconds. Now, suppose we wish to test the performance of one library's implementation of CRS coding with w = 8. We run our encoding program, which creates the same six data files and two coding files (which actually encode this time). This takes 93.3 seconds on average. Thus the encoding portion of the test takes 93.3 - 53.8 = 39.5 seconds, and we report the performance to be 736/39.5 ≈ 18.7 MB/sec.

The second test is a decoding test, where we randomly delete m of the n = k + m files that we created in encoding and reconstruct them from the remaining k pieces. As with encoding, we use a dummy program to localize the performance of decoding, and record the performance in MB/sec.

The parameters of the main experiment include the erasure coding technique, the library and the word size (w). All of the libraries except for Jerasure implement only one coding technique, and all except Jerasure and Luby have one preset word size. Jerasure and Luby also allow the programmer to set the packet size. After some initial performance tests, we selected a value of 1 KB for the packet size in these tests. Cleversafe selects an optimum slice size based on the other parameters.

Figure 2: Encoding performance of a RAID-6 system with k = 6 (m = 2). (The figure plots encoding speed in MB/sec for the Standard Reed-Solomon, Cauchy Reed-Solomon, Parity Array (RDP, EVENODD) and Minimal Density codes.)

Finally, the encoder and decoder allow for the reading and writing to be performed in stages, with a fixed size buffer in memory. This is to avoid having to store the entire video file and coding information in memory.
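The decoding test relies on the MDS property from Section 2: any k surviving pieces suffice. For the matrix-based codes, reconstruction amounts to collecting the rows of the generator matrix that correspond to the survivors, inverting that k × k matrix over GF(2^w), and multiplying it by the surviving words. The sketch below illustrates the idea for a hypothetical k = 3, m = 2 code over GF(2^8); it is our own illustration rather than code from any of the libraries tested, and it recovers single words rather than whole files.

```c
/* Sketch: MDS decoding by inverting the generator-matrix rows of the k
 * surviving devices.  The k = 3, m = 2 code below is a made-up example. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define K 3

static uint8_t gf8_mult(uint8_t a, uint8_t b) {   /* GF(2^8), polynomial 0x11d */
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        b >>= 1;
        a = (a & 0x80) ? (uint8_t)((a << 1) ^ 0x1d) : (uint8_t)(a << 1);
    }
    return p;
}

static uint8_t gf8_inv(uint8_t a) {               /* brute force: fine for a sketch */
    for (int b = 1; b < 256; b++)
        if (gf8_mult(a, (uint8_t)b) == 1) return (uint8_t)b;
    return 0;
}

/* Solve B * x = y over GF(2^8) by Gauss-Jordan elimination; B is invertible
 * for any k survivors because the code is MDS. */
static void gf8_solve(uint8_t B[K][K], uint8_t y[K], uint8_t x[K]) {
    for (int col = 0; col < K; col++) {
        int piv = col;
        while (B[piv][col] == 0) piv++;
        for (int j = 0; j < K; j++) { uint8_t t = B[col][j]; B[col][j] = B[piv][j]; B[piv][j] = t; }
        { uint8_t t = y[col]; y[col] = y[piv]; y[piv] = t; }
        uint8_t inv = gf8_inv(B[col][col]);
        for (int j = 0; j < K; j++) B[col][j] = gf8_mult(B[col][j], inv);
        y[col] = gf8_mult(y[col], inv);
        for (int r = 0; r < K; r++) {
            if (r == col || B[r][col] == 0) continue;
            uint8_t f = B[r][col];
            for (int j = 0; j < K; j++) B[r][j] ^= gf8_mult(f, B[col][j]);
            y[r] ^= gf8_mult(f, y[col]);
        }
    }
    memcpy(x, y, K);
}

int main(void) {
    /* Device i stores row i of A times the data vector (one word per device). */
    uint8_t A[K + 2][K] = {
        {1, 0, 0}, {0, 1, 0}, {0, 0, 1},   /* data devices D0..D2      */
        {1, 1, 1}, {1, 2, 3}               /* coding devices C0 and C1 */
    };
    uint8_t data[K] = { 0x11, 0x22, 0x33 };
    uint8_t stored[K + 2];
    for (int i = 0; i < K + 2; i++) {
        stored[i] = 0;
        for (int j = 0; j < K; j++) stored[i] ^= gf8_mult(A[i][j], data[j]);
    }

    /* Suppose D0 and D2 fail: decode from the k survivors D1, C0 and C1. */
    int surv[K] = { 1, 3, 4 };
    uint8_t B[K][K], y[K], x[K];
    for (int r = 0; r < K; r++) {
        memcpy(B[r], A[surv[r]], K);
        y[r] = stored[surv[r]];
    }
    gf8_solve(B, y, x);
    printf("recovered: 0x%02x 0x%02x 0x%02x\n", x[0], x[1], x[2]);  /* 0x11 0x22 0x33 */
    return 0;
}
```

A real decoder performs the inversion once per failure pattern and then applies the inverted matrix strip by strip, so the small matrix inversion is negligible next to the data-path work measured in our tests.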
The machine for testing has a 2.6 GHz Intel Pentium 4 processor, 2 GB of RAM and a cache size of 512 KB. It runs Debian Linux, version 3.1. Each data point in the graphs that follow is the average of five runs.

5 Results from the Main Experiment

We present results from four sets of experiments. These are two RAID-6 tests (k = 6 and k = 14), plus two higher availability tests where the total number of disks is 16. These are {k = 12, m = 4} and {k = 10, m = 6}. The latter set of parameters is the default set used by Cleversafe in its commercial storage system.

Figure 3: Decoding performance of a RAID-6 system with k = 6 (m = 2). (The figure plots decoding speed in MB/sec for the Standard Reed-Solomon, Cauchy Reed-Solomon, Parity Array (RDP, EVENODD) and Minimal Density codes.)

Figures 2 and 3 show the performance of the various implementations, codes and word sizes on a RAID-6 system with six data disks. The values of w are ones that are likely to be used in practice. For the minimal density codes, Liberation codes are employed for w ∈ {7, 17}, the Blaum-Roth code is used for w = 16, and the Liber8tion code is used for w = 8.

The most glaring feature of these graphs is the extremely wide range of performance, from under 5 MB/sec for the slowest implementations to well over an order of magnitude faster for the minimal density codes. Of course, all parameters impact performance: code type, implementation and word size. However, it is interesting that some de facto assumptions that are made in the community, for example that CRS coding always outperforms standard Reed-Solomon coding, clearly are not true.

Of the standard Reed-Solomon implementations, Zfec performs better than the others by a factor of two at a minimum. The Schifra library performs the slowest, which may be attributed to the fact that its open source version is meant to demonstrate functionality and not performance. The CRS implementations display a very wide range in performance, with the Jerasure implementation performing the best. The generator matrix clearly has an important effect on performance: in Jerasure, the Good matrices perform roughly a factor of two better than their Original counterparts.

With regard to the specialized RAID-6 codes, it is surprising that the minimal density codes outperform RDP and EVENODD in encoding. We surmise that this is due to cache effects, or perhaps because the Jerasure code is meant to be exported and used by others, while the RDP/EVENODD implementations were written for a paper evaluation. The performance of minimal density decoding is worse than encoding, as expected [19, 20].
We include the remainder of the performance results in the Appendix. They mirror the results above. The performance of minimal density decoding when k = 14 and m = 2 is much worse than encoding; however, this is to be expected as k grows [20].

6 Packet and Cache Size Ramifications

Figuring out the sources of performance improvements and penalties is a difficult task on a real machine and application. As the graphs from Section 5 show, counting XORs alone is not enough. Understanding the caching behavior of a program is difficult, but to a coarse degree, we can modify a code's packet size in an attempt to optimize cache behavior. We performed two brief experiments to assess this effect at a high level.

In Figure 4, we test the effect of modifying the packet size on the encoding rate in four scenarios:

1. k = 14, m = 2, w = 16, Blaum-Roth coding.
2. k = 14, m = 2, w = 7, CRS coding (Good matrix).
3. k = 10, m = 6, w = 7, CRS coding (Good matrix).
4. k = 10, m = 6, w = 16, CRS coding (Good matrix).

All use the Jerasure implementation, but we plot only one or two runs per data point. The left graph shows an exploration of small packet sizes and the right graph shows an exploration of large packet sizes.

Figure 4: The effect of modifying the packet size in four coding scenarios. (The left graph plots encoding rate in MB/sec against small packet sizes in bytes; the right graph plots encoding rate against large packet sizes in KB.)

As the graphs show, the effect of modifying the packet size is drastic. When packets are very small, performance suffers because the inner loops performing XOR operations are too short. As such, increasing the packet size improves performance up to a point where the overhead of cache misses starts to penalize performance. As the right graph shows, this effect continues until the performance is nearly an order of magnitude worse than the best performance. This underscores the importance of considering caching behavior when performing erasure coding.
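A rough rule of thumb consistent with these observations (our own heuristic, not a guideline published with any of the libraries) is to choose the largest packet size for which the data touched while encoding one stripe still fits comfortably within the cache. The sketch below applies that idea to the 512 KB cache of our test machine; the halving factor and 64-byte granularity are arbitrary assumptions.

```c
/* Hypothetical helper: size packets so one stripe (k + m strips of w packets)
 * fits in roughly half the cache, leaving room for tables and loop state.
 * The sizing rule is our own heuristic, not one taken from the libraries. */
#include <stdio.h>

static long pick_packet_size(long cache_bytes, int k, int m, int w) {
    long packet = cache_bytes / 2 / ((long)(k + m) * w);
    if (packet < 64) return 64;         /* keep the XOR inner loop long enough */
    return packet - (packet % 64);      /* round down to a 64-byte multiple    */
}

int main(void) {
    long cache = 512 * 1024;            /* the test machine's 512 KB cache */
    printf("k=14, m=2, w=16: %ld-byte packets\n", pick_packet_size(cache, 14, 2, 16));
    printf("k=14, m=2, w=7:  %ld-byte packets\n", pick_packet_size(cache, 14, 2, 7));
    return 0;
}
```

Under this heuristic, larger values of w call for smaller packets, since each strip holds w packets; the next paragraph makes the same observation from the measured data.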
For a given packet size, a larger value of w means a larger stripe size. Accordingly, the stripe sizes in Figure 4 for w = 16 are 16/7 times larger than those for w = 7. This effect is reflected in the left graph of Figure 4, where the peak performance of the two codes with w = 16 is achieved at a smaller packet size than that of the codes with w = 7.

We acknowledge that this experiment is sketchy at best; more data needs to be taken, and more experimentation on this topic needs to be performed.

7 Concluding Remarks

This paper has compared the performance of several open source erasure coding libraries. Their performance runs the gamut from slow to fast, with the factors being the erasure coding technique used, optimization of the underlying coding structure (i.e. the generator matrix in CRS coding), and attention to cache behavior.

While this work should be useful in helping storage practitioners evaluate and use erasure coding libraries, it is clear that more work can and should be done. First, the tests should be performed on multiple machines so that individual machine quirks do not impact the results. Second, in the XOR codes, one may schedule the individual XOR operations in an exponential number of ways; doing so to improve cache utilization may yield further improvements in performance. It will be a challenge to perform this additional work, yet digest the results in a coherent way. We look forward to that challenge.

8 Acknowledgements

This material is based upon work supported by the National Science Foundation under grant CNS. The authors gratefully acknowledge Ilya Volvovski and Jason Resch from Cleversafe for so generously providing the erasure coding core of Cleversafe's open source implementation so that we could integrate it into our environment. Additionally, the authors acknowledge Lihao Xu and Jianqiang Luo for helpful interaction, and Zooko Wilcox-O'Hearn for encouraging communication over the past five years.

References

[1] Anvin, H. P. The mathematics of RAID-6. raid6.pdf, 2007.

[2] Beck, M., et al. Logistical computing and internetworking: Middleware for the use of storage in communication. In Third Annual International Workshop on Active Middleware Services (AMS) (San Francisco, August 2001).

[3] Blaum, M., Brady, J., Bruck, J., and Menon, J. EVENODD: An efficient scheme for tolerating double disk failures in RAID architectures. IEEE Transactions on Computing 44, 2 (February 1995).

[4] Blaum, M., and Roth, R. M. On lowest density MDS codes. IEEE Transactions on Information Theory 45, 1 (January 1999).

[5] Blomer, J., Kalfane, M., Karpinski, M., Karp, R., Luby, M., and Zuckerman, D. An XOR-based erasure-resilient coding scheme. Tech. Rep. TR-95-048, International Computer Science Institute, August 1995.

[6] Cleversafe, Inc. Dispersed Storage. Open source code distribution: cleversafe.org/downloads, 2008.
[7] Corbett, P., English, B., Goel, A., Grcanac, T., Kleiman, S., Leong, J., and Sankar, S. Row diagonal parity for double disk failure correction. In 4th Usenix Conference on File and Storage Technologies (San Francisco, CA, March 2004).

[8] Hafner, J. L. HoVer erasure codes for disk arrays. Tech. Rep. RJ1352 (A57-15), IBM Research Division, July 2005.

[9] Hafner, J. L. WEAVER Codes: Highly fault tolerant erasure codes for storage systems. In FAST-2005: 4th Usenix Conference on File and Storage Technologies (San Francisco, December 2005).

[10] Hafner, J. L., Deenadhayalan, V., Rao, K. K., and Tomlin, A. Matrix methods for lost data reconstruction in erasure codes. In FAST-2005: 4th Usenix Conference on File and Storage Technologies (San Francisco, December 2005).

[11] Huang, C., Chen, M., and Li, J. Pyramid codes: Flexible schemes to trade space for access efficiency in reliable data storage systems. In NCA-07: 6th IEEE International Symposium on Network Computing and Applications (Cambridge, MA, July 2007).

[12] Huang, C., Li, J., and Chen, M. On optimizing XOR-based codes for fault-tolerant storage applications. In ITW'07, Information Theory Workshop (Tahoe City, CA, September 2007), IEEE.

[13] Luby, M. Code for Cauchy Reed-Solomon coding. Uuencoded tar file: berkeley.edu/~luby/cauchy.tar.uu.

[14] MacWilliams, F. J., and Sloane, N. J. A. The Theory of Error-Correcting Codes, Part I. North-Holland Publishing Company, Amsterdam, New York, Oxford, 1977.

[15] Nisbet, B. FAS storage systems: Laying the foundation for application availability. Network Appliance white paper, February 2008.

[16] Partow, A. Schifra Reed-Solomon ECC Library. Open source code distribution: schifra.com/downloads.html.

[17] Plank, J. S. A tutorial on Reed-Solomon coding for fault-tolerance in RAID-like systems. Software Practice & Experience 27, 9 (September 1997).

[18] Plank, J. S. Jerasure: A library in C/C++ facilitating erasure coding for storage applications. Tech. Rep. CS-07-603, University of Tennessee, September 2007.

[19] Plank, J. S. A new minimum density RAID-6 code with a word size of eight. In NCA-08: 7th IEEE International Symposium on Network Computing and Applications (Cambridge, MA, July 2008).

[20] Plank, J. S. The RAID-6 Liberation codes. In FAST-2008: 6th Usenix Conference on File and Storage Technologies (San Jose, February 2008).

[21] Plank, J. S., and Ding, Y. Note: Correction to the 1997 tutorial on Reed-Solomon coding. Software Practice & Experience 35, 2 (February 2005).

[22] Plank, J. S., and Xu, L. Optimizing Cauchy Reed-Solomon codes for fault-tolerant network storage applications. In NCA-06: 5th IEEE International Symposium on Network Computing and Applications (Cambridge, MA, July 2006).

[23] Reed, I. S., and Solomon, G. Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics 8 (1960).
[24] Rhea, S., Wells, C., Eaton, P., Geels, D., Zhao, B., Weatherspoon, H., and Kubiatowicz, J. Maintenance-free global data storage. IEEE Internet Computing 5, 5 (2001).

[25] Rizzo, L. Effective erasure codes for reliable computer communication protocols. ACM SIGCOMM Computer Communication Review 27, 2 (1997).

[26] Storer, M. W., Greenan, K. M., Miller, E. L., and Voruganti, K. Pergamum: Replacing tape with energy efficient, reliable, disk-based archival storage. In FAST-2008: 6th Usenix Conference on File and Storage Technologies (San Jose, February 2008).

[27] Welch, B., Unangst, M., Abbasi, Z., Gibson, G., Mueller, B., Small, J., Zelenka, J., and Zhou, B. Scalable performance of the Panasas parallel file system. In FAST-2008: 6th Usenix Conference on File and Storage Technologies (San Jose, February 2008).

[28] Wilcox-O'Hearn, Z. Zfec. Open source code distribution: pypi/zfec, 2008.

[29] Wylie, J. J., and Swaminathan, R. Determining fault tolerance of XOR-based erasure codes efficiently. In DSN-2007: The International Conference on Dependable Systems and Networks (Edinburgh, Scotland, June 2007), IEEE.

[30] Xu, L., Bohossian, V., Bruck, J., and Wagner, D. Low density MDS codes and factors of complete graphs. IEEE Transactions on Information Theory 45, 6 (September 1999).

[31] Xu, L., and Bruck, J. X-Code: MDS array codes with optimal encoding. IEEE Transactions on Information Theory 45, 1 (January 1999).

[32] Zhu, B., Li, K., and Patterson, H. Avoiding the disk bottleneck in the Data Domain deduplication file system. In FAST-2008: 6th Usenix Conference on File and Storage Technologies (San Jose, February 2008).
Appendix: Performance Results for the Other Parameters

Figure 5: Encoding performance when k = 14 and m = 2. (Plotted: encoding speed in MB/sec for the Standard Reed-Solomon, Cauchy Reed-Solomon, Parity Array (RDP, EVENODD) and Minimal Density codes.)

Figure 6: Decoding performance when k = 14 and m = 2. (Plotted: decoding speed in MB/sec for the same codes.)
Figure 7: Encoding performance when k = 12 and m = 4. (Plotted: encoding speed in MB/sec for the Standard and Cauchy Reed-Solomon codes.)

Figure 8: Decoding performance when k = 12 and m = 4. (Plotted: decoding speed in MB/sec for the same codes.)
Figure 9: Encoding performance when k = 10 and m = 6. (Plotted: encoding speed in MB/sec for the Standard and Cauchy Reed-Solomon codes.)

Figure 10: Decoding performance when k = 10 and m = 6. (Plotted: decoding speed in MB/sec for the same codes.)