Exploring RAID Configurations
J. Ryan Fishel
Florida State University
August 6, 2008

Abstract

To address the limits of today's slow mechanical disks, we explored a number of data layouts to improve RAID performance. We found that by dedicating more disks to hosting popular data and fewer disks to hosting less popular data, RAID was able to handle 20% more concurrent streams. However, we have not found a conclusive explanation for this phenomenon.

1. Introduction

Improvements in modern computers have enabled CPU, memory, and network speeds to increase exponentially. However, the speed of mechanical disks has not improved nearly as much, resulting in a system-wide bottleneck. Although memory-based storage is becoming a strong contender, disk-based storage will remain important for high-capacity mass storage in the foreseeable future because it is roughly 100 times cheaper per byte than memory-based storage. To overcome the performance problem of disks, researchers have invented ways to spread I/O loads across multi-disk RAID devices to exploit hardware parallelism, and conventional wisdom suggests that balancing the load across the disks in a RAID is essential for good performance. However, we observe that perfect load balancing does not equate to maximum performance. The intuition is that if each request is served by all disks whenever possible, a disk needs to multiplex often when serving many simultaneous request streams. Alternatively, each disk can focus on serving a subset of request streams to reduce the per-disk multiplexing overhead. To investigate this hypothesis, we explored techniques to reduce the level of multiplexing for each disk in RAIDs. Our results show that performance can be improved for a small number of concurrent request streams. For a higher number of concurrent streams, reducing the multiplexing within each disk yields little difference in performance.
However, we found that by dedicating more disks to hosting frequently accessed data and fewer disks to hosting less frequently accessed data, the system can handle 20% more concurrent request streams. Our results suggest several plausible explanations, but the exact cause remains inconclusive.

2. Background

Disks: Disks are mechanical storage devices consisting of a comb of disk heads for reads and writes and rotating platters coated with magnetic recording material. Each platter surface consists of concentric tracks, which are divided into the minimum
storage unit, the sector. Therefore, the disk access time consists of seek time (to move the head over a track), rotational delay (to rotate the sector under a head), and transfer time (to access the requested data). According to Seagate, its top-of-the-line Savvio 15K RPM drive achieves an average read/write seek time of 2.9 msec/3.3 msec respectively and an average rotational delay of 2 msec, while delivering a transfer rate of up to 112 MB/s [6].

RAIDs: RAID stands for redundant array of inexpensive disks [4]. The basic idea is to spread, or stripe, a piece of data uniformly across disks (typically in units called chunks), so that a large request can be served by multiple disks in parallel. RAIDs mainly reduce the time to transfer data, but the disk in a RAID that incurs the longest seek and rotational delays can slow down the completion of a striped request. Since the use of multiple disks reduces the reliability of a RAID, different RAID configurations, or levels, have been invented to overcome this constraint. Popular RAID levels are RAID 0, RAID 1, and RAID 5. RAID 0 involves simple striping of data across disks; therefore, a single disk failure can result in data loss. In RAID 1, the content of each disk is replicated to another disk, so data loss occurs only when two disks holding the same data fail simultaneously. RAID 5 reduces the overhead of storing redundant information to one disk's worth of capacity, but it can survive only a single disk failure. Since this research focuses on performance, we explored only the RAID 0 configuration in our experiments.

RAID optimizations: Knowing that the slowest disk can slow down the entire RAID, load-balancing techniques [5] try to migrate popular items away from the slowest disk to improve overall RAID performance. However, a balanced load does not automatically equate to maximum performance, since there are many ways to achieve balanced loads across disks.
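As a concrete illustration of the two ideas above, the sketch below (ours, not from the paper) estimates per-request service time from the quoted Savvio figures and shows how RAID 0 maps a logical byte offset to a disk. The additive seek + rotation + transfer model is an assumption; real drives overlap and cache these phases.

```python
# A minimal sketch (not from the paper) of the disk access-time model and
# RAID 0 chunk striping described above. The additive model is a
# simplification of real drive behavior.

AVG_READ_SEEK_MS = 2.9     # Savvio 15K average read seek [6]
AVG_ROT_DELAY_MS = 2.0     # half a rotation at 15,000 RPM: 60/15000/2 s
TRANSFER_MB_PER_S = 112.0  # quoted peak transfer rate

def access_time_ms(request_bytes):
    """Estimated service time for one random read request."""
    transfer_ms = request_bytes / (TRANSFER_MB_PER_S * 1e6) * 1e3
    return AVG_READ_SEEK_MS + AVG_ROT_DELAY_MS + transfer_ms

def raid0_location(byte_offset, chunk_size, n_disks):
    """Map a logical byte offset to (disk index, byte offset on that disk)."""
    chunk_index = byte_offset // chunk_size
    disk = chunk_index % n_disks
    disk_offset = (chunk_index // n_disks) * chunk_size + byte_offset % chunk_size
    return disk, disk_offset

# For a 64 KB chunk, mechanical positioning dominates the transfer time:
print(round(access_time_ms(64 * 1024), 2))          # ~5.49 ms
print(raid0_location(5 * 64 * 1024, 64 * 1024, 5))  # (0, 65536): wraps back to disk 0
```

Note that for a 64 KB request, seek and rotation account for roughly 90% of the estimated service time, which is the motivation for reducing per-disk multiplexing.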
For example, each disk in an N-disk RAID can focus on serving one request stream. Alternatively, each disk can multiplex among N request streams by serving only 1/Nth of each stream. Both schemes can plausibly achieve load balancing, but the one without multiplexing can save more time on seeks. Clustering and declustering are techniques in which data is combined or separated in ways that aim to maximize parallel transfers and minimize seek and rotational delays. In the Shekhar et al. paper [7], declustering was done through the analysis of spatial data, meaning that they required an on-disk distance metric to analyze the data and separate it in a way that would minimize concurrent requests to the same disk. One drawback of this approach is developing an accurate on-disk distance metric, which is very difficult considering the complexity of modern disks [1]. Studies on RAID configurations explore the relationships among stripe granularities, the number of disks in a stripe, and the number of concurrent request streams. Under artificial workloads, Chen et al. [2] recommend a striping chunk size of 1/2 * the average access time * the transfer rate of a disk. Although the one-size-fits-all
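For reference, the Chen et al. rule of thumb can be evaluated with the Savvio figures from Section 2. Treating "average access time" as seek plus rotational delay is our assumption; the recommendation itself does not spell this out.

```python
# Hedged evaluation (ours) of the Chen et al. [2] rule of thumb quoted
# above: chunk size = 1/2 * average access time * transfer rate.
# "Average access time" is taken here to mean seek + rotational delay.

def recommended_chunk_bytes(avg_access_ms, transfer_mb_per_s):
    return 0.5 * (avg_access_ms / 1e3) * (transfer_mb_per_s * 1e6)

# Savvio 15K: 2.9 ms average read seek + 2 ms rotational delay, 112 MB/s.
chunk = recommended_chunk_bytes(2.9 + 2.0, 112.0)
print(round(chunk))  # ~274400 bytes, i.e. roughly 268 KiB
```

Under these assumptions the rule suggests a chunk a few times larger than the 64 KB used in the experiments below, illustrating why a single fixed recommendation may not fit every workload.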
recommendation eases system administration, it leaves open the question of whether approaches based on nonuniform treatments of files can lead to better performance. The limits of these various approaches prompted us to investigate the possibility of reducing the level of per-disk multiplexing to increase RAID performance.

3. Feasibility Studies

To quantify the potential benefits of reducing per-disk multiplexing, we conducted experiments with 10 concurrent request streams to five disks. In the first configuration, we created an ext2 file system on a RAID 0 device with a chunk size of 64 KB. We then populated the file system with 250 files of the same size, varying the size from 10 KB to 100 MB to see the effect of file sizes. Therefore, each disk potentially needs to multiplex up to ten concurrent requests. In the second configuration, files are not striped. We created an ext2 file system on each drive and placed 50 files of the same size on each drive. Therefore, on average, each disk multiplexes between two concurrent request streams (Figure 3.1).

Figure 3.1: Feasibility test data layouts.

We ran each experiment 5 times, and the numbers were analyzed at the 90% confidence interval. For all experiments, the machine was rebooted between runs to clear the system and disk caches. A script would then read 100 files randomly, and the elapsed time of each read request was recorded.

Server
  Processor: Intel Xeon 2.80 GHz, 16 KB L1 cache, 2 MB L2 cache
  Memory: 1 GB DDR2 400
  Disks: Fujitsu MAW3073NC, 73.5 GB, 10K RPM, SCSI Ultra 320 [3], 8 MB on-board cache; 1 disk for booting, 5 disks for experiments
  Operating System: Linux

Table 3.1: Experimental settings.

Figure 3.2 shows that at a concurrency level of 10, multiplexing reduction yields a bandwidth improvement for file sizes > 100 KB. Therefore, for our subsequent data layout schemes, we explored the use of video-on-demand workloads.

Figure 3.2: Concurrency test results.
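The paper does not reproduce the measurement script; a minimal sketch of the loop it describes (read 100 randomly chosen files, recording each elapsed time) might look like the following, with the file list as a hypothetical placeholder.

```python
# Hedged sketch of the feasibility measurement loop described above:
# read 100 randomly chosen files, recording the elapsed time of each
# whole-file read. The file list is a hypothetical placeholder.
import random
import time

def timed_random_reads(file_paths, n_reads=100, seed=None):
    rng = random.Random(seed)
    timings = []
    for _ in range(n_reads):
        path = rng.choice(file_paths)
        start = time.monotonic()
        with open(path, "rb") as f:
            nbytes = len(f.read())          # whole-file read, as in the tests
        timings.append((path, nbytes, time.monotonic() - start))
    return timings
```

In the actual experiments the machine was rebooted between runs; a user-level loop like this cannot defeat the page and disk caches on its own.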
4. Multimedia Tests

For the multimedia experiments, five disks were populated with 20 GB of 30 MB media files. A multimedia request stream is represented and emulated by a process, which reads 512 KB per second from a chosen file. Once the request reaches the end of the file, the process chooses another file. The elapsed time of each 512 KB read request is recorded. For each experiment, the system began with one request stream process. A new request stream process was spawned after each second, until one of the processes missed five one-second deadlines. A message was then sent to all the processes signifying the end of the experiment. The experiments were run on the same system as used in the feasibility tests (Table 3.1). The data read from the five-disk device was written to an ext2 partition stored on a separate disk, and the system was then rebooted to clear the disk and system caches. Each experiment set was run five times; we then averaged and reported the results up to the lowest number of concurrent streams supported. The primary metrics used were the number of concurrent streams a system can support and the aggregate bandwidth, computed by dividing the number of bytes accessed by the busy time of a disk or a RAID.

4.1 Test Set 1: Initial Layout Tests

RAID 0: The base-case configuration was the original RAID 0 disk layout, with 64 KB chunks (Figure 4.1.1). We emulated the common skewed access frequency distributions by having 80% of references uniformly distributed to 20% of the files (also referred to as hot files), and 20% of references to 80% of the files (also referred to as cold files).

Figure 4.1.1: RAID 0. Hot/cold blocks are randomly spread across the entire array.

1Hot4Cold: To explore the idea of reducing per-disk multiplexing, we used one disk to hold hot files (4 GB total), and a four-disk RAID 0 to hold cold files (4 GB per disk).
The idea is that the hot disk only needs to multiplex among the 20% of files that are hot; the cold disks only need to multiplex among the 80% of files that are cold.
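A minimal sketch of the skewed file chooser and the per-stream read step described above (ours, simplified to plain functions; the paper used one process per stream):

```python
# Hedged sketch of the multimedia request-stream emulation described
# above: files are chosen with an 80/20 skew (80% of references go to
# the 20% "hot" files), and each stream reads 512 KB per one-second
# deadline. File lists and the read loop are simplified placeholders.
import random

CHUNK = 512 * 1024  # bytes read per one-second deadline

def choose_file(hot_files, cold_files, rng, p_hot=0.8):
    """Pick uniformly within the hot set with probability p_hot."""
    pool = hot_files if rng.random() < p_hot else cold_files
    return rng.choice(pool)

def stream_once(path, offset):
    """Read one 512 KB request; return the new offset, or None at EOF
    (at which point the emulated stream would pick another file)."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(CHUNK)
    return None if len(data) < CHUNK else offset + CHUNK
```

In the real experiments each stream was a separate process subject to one-second deadlines; this sketch only captures the access pattern, not the timing discipline.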
Figure 4.1.2: 1Hot4Cold. Cold files are stored on the RAID 0 device, while hot files are stored on a single drive.

2Partitions: The third configuration divides each of the five disks into two partitions (Figure 4.1.3). A RAID 0 is formed with the partitions nearest the outer edge of the platters to store hot files. Another RAID 0, with the partitions adjacent to the previous ones, stores cold files. This configuration aims to keep the disk head within the hot partition most of the time to reduce seek times.

Figure 4.1.3: 2Partitions. Data is separated into two partitions. The partition near the outer edge of the disk holds the hot files, while the adjacent partition holds the cold files.

Figure 4.1.4: Test results for RAID 0, 1Hot4Cold, and 2Partitions.
Figure 4.1.4 shows two surprising findings. (1) The 2Partitions case suggests that the seek time reduction contributed little to the bandwidth difference. One explanation is that we did not use a big enough working set. For 20 GB of files spread across five disks, each disk only stores 4 GB of data. Out of 73.5 GB of storage capacity, the average seek time across 4 GB is about 0.16 msec. With hot files stored in a separate partition, the average seek time is reduced to about 0.05 msec, which is insignificant compared to the average rotational delay of 2 msec. We will try a bigger working set as future work. (2) Reducing the level of multiplexing actually affected performance negatively. Based on the achieved aggregate bandwidth of 45 MB/sec vs. the 250 MB/sec peak bandwidth of RAID 0, the disk containing hot files clearly became a bottleneck.

1Cold4Hot: Out of curiosity, we tried 1Cold4Hot, where cold files reside on the independent drive while the hot files are on the four-disk RAID 0 (Figure 4.1.5). This arrangement achieves a number of effects: (1) the per-disk load becomes balanced, as each disk receives 20% of references on average; (2) the level of multiplexing is reduced for hot files as well, since each disk in the RAID 0 stored about 1 GB of files while the cold disk stored 16 GB; (3) the average seek distance for the RAID 0 is reduced further; (4) serving hot files is isolated from the interference of serving cold files.

Figure 4.1.5: 1Cold4Hot. Hot files reside on a RAID 0 device, while cold files populate a single drive.
Figure 4.1.6: Test results for RAID 0, 1Hot4Cold, 2Partitions, and 1Cold4Hot.

Figure 4.1.6 superimposes 1Cold4Hot over the prior results and shows a surprising finding. Although 1Cold4Hot does not achieve the same peak bandwidth, the scheme can support 20% more concurrent streams compared to RAID 0. Therefore, it became clear that bandwidth was not the primary performance attribute for comparison, since a peak aggregate bandwidth of 250 MB/sec (or 8.3 MB/sec per stream) helps little for request streams that need only 512 KB/sec. This finding also led us to question why 1Cold4Hot obtained a 20% scaling improvement, which is not explainable with simple models of seek times, rotational delays, and transfer times. The plausible causes were changes in seek times, the level of multiplexing, load balancing, and the separation of hot and cold files. The following test sets were designed to test the sensitivity of each cause in turn. We used the 1Cold4Hot results as the baseline.

4.2 Test Set 2: Increased Seek

This test set was designed to observe the effect of an increased seek distance on a layout similar to the 1Cold4Hot experiment. We increased the size of the hot files by 10% to 33 MB and left the number of hot files the same (Figure 4.2.1). By doing so, we forced the disk head to seek further on the hot disks.
Figure 4.2.1: Increased Seek. This layout is similar to the 1Cold4Hot layout, but the hot file size on the RAID 0 device is increased by 10%.

Figure 4.2.2: Test results for increased seek distance.

As can be seen in Figure 4.2.2, increasing the seek distance decreased the scaling performance slightly and created little deviation in the bandwidth performance. From these results, we ruled out reduced seek distance as the main cause of 1Cold4Hot's performance gains. This finding is also consistent with the 2Partitions case.

4.3 Test Set 3: Reduced Multiplexing

Another possible reason for the scaling increase could be the reduction of multiplexing in the entire system. By separating the hot files from the cold files, we were able to decrease the level of multiplexing by 80% for the four-disk RAID 0 device. This test was designed to reduce the level of multiplexing even further to see if the system could serve more concurrent requests. The disk layout is identical to the 1Cold4Hot experiment, except that only a fraction of the hot files (10% and 50%) were
available to be accessed. Therefore, by limiting accesses to fewer hot files, we effectively reduced the level of multiplexing for the RAID 0.

Figure 4.3.1: Further reduced multiplexing for hot files by 10%.

Figure 4.3.2: Further reduced multiplexing for hot files by 50%.

Figure 4.3.3 shows that decreasing multiplexing does not increase scaling and seemed to have little effect on either bandwidth or scaling performance.

Figure 4.3.3: Test results for further reduced multiplexing for hot files by 10%/50%.

4.4 Test Set 4: Hot/Cold File Separation

The next experiment set investigated various probabilities of accessing hot and cold files. For the prior experiments, we used 80% of references to hot files and 20% of
references to cold files. By changing the probabilities (Table 4.4.1), we hoped to see a definite change in scaling performance. For all probability changes, 20% of the files are hot, and 80% of the files are cold.

Probabilities tested (hot/cold): 90%/10%, 70%/30%, 50%/50%

Table 4.4.1: Tested probabilities of referencing hot and cold files.

1Mixed4Mixed: In addition, we altered the original 1Cold4Hot configuration so that hot and cold files were mixed (Figure 4.4.1). This test would show the effect of not separating hot and cold files.

Figure 4.4.1: 1Mixed4Mixed. Hot and cold files are spread between a 4-disk RAID 0 device and an independent drive according to the 1Cold4Hot distribution.

Figure 4.4.2: Test results for hot/cold file separation.

Figure 4.4.2 illustrates that the skewness of file popularity has a significant effect on the performance of the system. By changing the probability
ratio of hot and cold file references from 80/20 to 70/30, we can see a considerable negative effect from redirecting 10% of the traffic from hot to cold files. It would seem that this change is enough to overload the independent drive and degrade scaling. In addition, evening out the probabilities to 50/50 degraded scaling by nearly half in comparison to the 1Cold4Hot results. Alternatively, increasing the probability of referencing hot files from 80% to 90% did not increase scaling performance, but rather decreased it slightly. From these results, it would seem that the 80% hot file probability was a sweet spot for data separation. For the 1Mixed4Mixed case, the single disk would receive about 80% of references, which is proportional to the amount of data stored on the disk due to the lack of hot and cold separation. This layout yields the lowest performance for both bandwidth and scaling.

4.5 Test Set 5: Load Balancing

As for load balancing, the original RAID 0 was well balanced in the sense that each disk would receive 20% of references. Thus, load balance alone does not explain the scaling of the 1Cold4Hot case.

2Cold3Hot Imbalanced: As a sanity check, we also tried using a two-disk RAID 0 to store cold files and a three-disk RAID 0 to store hot files. We anticipated a performance decrease due to load imbalance. We maintained the 80/20 request ratio for hot and cold files.

2Cold3Hot Balanced: We also ran another instance of this test with a 60/40 request ratio for hot and cold files to balance the per-disk load. We expected the load-balanced setting of 2Cold3Hot to outperform the imbalanced version.

Figure 4.5.1: 2Cold3Hot. Hot and cold files reside on a 3-disk RAID 0 and a 2-disk RAID 0, respectively.
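The 60/40 ratio for the balanced case follows from equalizing per-disk load, which a quick check confirms:

```python
# Per-disk load check for the 2Cold3Hot cases above: a fraction p of
# references goes to the 3-disk hot RAID 0 and 1-p to the 2-disk cold
# RAID 0; the load balances when p/3 == (1-p)/2, i.e. p = 0.6.

def per_disk_load(p_hot, hot_disks=3, cold_disks=2):
    return p_hot / hot_disks, (1 - p_hot) / cold_disks

hot, cold = per_disk_load(0.6)
print(round(hot, 3), round(cold, 3))      # 0.2 0.2: every disk serves 20% of references

hot80, cold80 = per_disk_load(0.8)
print(round(hot80, 3), round(cold80, 3))  # 0.267 0.1: hot disks overloaded at 80/20
```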
Figure 4.5.2: Load balancing test results.

As expected, Figure 4.5.2 shows that the balanced 2Cold3Hot outperforms the imbalanced 2Cold3Hot, and their performance falls between the RAID 0 and 1Cold4Hot cases.

5. Future Work

Due to time constraints, we have not tracked down definitive explanations for why separating hot and cold files leads to better scaling. In the future, we plan to explore the factors of larger working sets, I/O scheduling queues, and real-world workloads, and to develop explanations for our various findings.

6. Conclusions

Through exploring various RAID configurations, we made many surprising findings. We were unable to show that reduced multiplexing could lead to better performance with many concurrent streams. On the other hand, we found that the separation of hot and cold files could lead to 20% better scaling. Although we have conducted extensive experiments, we have yet to find simple explanations. Overall, we found that modern storage subsystems are complex. Our lack of understanding suggests many open opportunities for future exploration.

7. Acknowledgements
I want to thank Dr. Andy Wang for his support and guidance on this project and the work leading to it, and for his friendship. I would also like to thank Dr. Ted Baker and Dr. Piyush Kumar for being on the project committee, as well as contributing to an excellent educational experience at Florida State University. I thank my family for all of their support, both financial and emotional. I thank my friends for keeping my spirits high when times looked dire. I thank the amazing OS and Storage Systems group for their camaraderie and support. Finally, I would like to send a special thanks to my Macaroni for giving me the motivation and support to finish this project.

References

[1] Anderson D. You Don't Know Jack about Disks. Queue, 1(4), 2003.
[2] Chen PM, Lee EK. Striping in a RAID Level 5 Disk Array. Proceedings of the 1995 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, 1995.
[3] Fujitsu. MAW3 NC Series Obsolete Product Information Specifications, nc maw3300nc.html.
[4] Patterson DA, Gibson G, Katz RH. A Case for Redundant Arrays of Inexpensive Disks (RAID). ACM SIGMOD International Conference on Management of Data, 1988.
[5] Scheuermann P, Weikum G, Zabback P. Data Partitioning and Load Balancing in Parallel Disk Systems. The VLDB Journal, 7(1):48-66, February 1998.
[6] Seagate Technology LLC. Savvio 15K Data Sheet.
[7] Shekhar S, Ravada S, Kumar V, Chubb D, Turner G. Load Balancing in High Performance GIS: Declustering Polygonal Maps. Proceedings of the 4th International Symposium on Large Spatial Databases, 1995.
More informationVMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014
VMware SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Table of Contents Introduction.... 3 vsphere Architectural Overview... 4 SAN Backup
More informationReliability and Fault Tolerance in Storage
Reliability and Fault Tolerance in Storage Dalit Naor/ Dima Sotnikov IBM Haifa Research Storage Systems 1 Advanced Topics on Storage Systems - Spring 2014, Tel-Aviv University http://www.eng.tau.ac.il/semcom
More informationSummer Student Project Report
Summer Student Project Report Dimitris Kalimeris National and Kapodistrian University of Athens June September 2014 Abstract This report will outline two projects that were done as part of a three months
More informationMass Storage Structure
Mass Storage Structure 12 CHAPTER Practice Exercises 12.1 The accelerating seek described in Exercise 12.3 is typical of hard-disk drives. By contrast, floppy disks (and many hard disks manufactured before
More informationRAID technology and IBM TotalStorage NAS products
IBM TotalStorage Network Attached Storage October 2001 RAID technology and IBM TotalStorage NAS products By Janet Anglin and Chris Durham Storage Networking Architecture, SSG Page No.1 Contents 2 RAID
More informationIncreasing the capacity of RAID5 by online gradual assimilation
Increasing the capacity of RAID5 by online gradual assimilation Jose Luis Gonzalez,Toni Cortes joseluig,toni@ac.upc.es Departament d Arquiectura de Computadors, Universitat Politecnica de Catalunya, Campus
More informationOverview of I/O Performance and RAID in an RDBMS Environment. By: Edward Whalen Performance Tuning Corporation
Overview of I/O Performance and RAID in an RDBMS Environment By: Edward Whalen Performance Tuning Corporation Abstract This paper covers the fundamentals of I/O topics and an overview of RAID levels commonly
More informationOptimizing Shared Resource Contention in HPC Clusters
Optimizing Shared Resource Contention in HPC Clusters Sergey Blagodurov Simon Fraser University Alexandra Fedorova Simon Fraser University Abstract Contention for shared resources in HPC clusters occurs
More informationAccelerating Server Storage Performance on Lenovo ThinkServer
Accelerating Server Storage Performance on Lenovo ThinkServer Lenovo Enterprise Product Group April 214 Copyright Lenovo 214 LENOVO PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER
More informationDELL RAID PRIMER DELL PERC RAID CONTROLLERS. Joe H. Trickey III. Dell Storage RAID Product Marketing. John Seward. Dell Storage RAID Engineering
DELL RAID PRIMER DELL PERC RAID CONTROLLERS Joe H. Trickey III Dell Storage RAID Product Marketing John Seward Dell Storage RAID Engineering http://www.dell.com/content/topics/topic.aspx/global/products/pvaul/top
More informationPerformance Report Modular RAID for PRIMERGY
Performance Report Modular RAID for PRIMERGY Version 1.1 March 2008 Pages 15 Abstract This technical documentation is designed for persons, who deal with the selection of RAID technologies and RAID controllers
More informationDistribution One Server Requirements
Distribution One Server Requirements Introduction Welcome to the Hardware Configuration Guide. The goal of this guide is to provide a practical approach to sizing your Distribution One application and
More informationtechnology brief RAID Levels March 1997 Introduction Characteristics of RAID Levels
technology brief RAID Levels March 1997 Introduction RAID is an acronym for Redundant Array of Independent Disks (originally Redundant Array of Inexpensive Disks) coined in a 1987 University of California
More informationRAID HARDWARE. On board SATA RAID controller. RAID drive caddy (hot swappable) SATA RAID controller card. Anne Watson 1
RAID HARDWARE On board SATA RAID controller SATA RAID controller card RAID drive caddy (hot swappable) Anne Watson 1 RAID The word redundant means an unnecessary repetition. The word array means a lineup.
More informationAnalysis of VDI Storage Performance During Bootstorm
Analysis of VDI Storage Performance During Bootstorm Introduction Virtual desktops are gaining popularity as a more cost effective and more easily serviceable solution. The most resource-dependent process
More informationUsing VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems
Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can
More informationLSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment
LSI MegaRAID CacheCade Performance Evaluation in a Web Server Environment Evaluation report prepared under contract with LSI Corporation Introduction Interest in solid-state storage (SSS) is high, and
More informationRAID Overview 91.520
RAID Overview 91.520 1 The Motivation for RAID Computing speeds double every 3 years Disk speeds can t keep up Data needs higher MTBF than any component in system IO Performance and Availability Issues!
More informationWilliam Stallings Computer Organization and Architecture 7 th Edition. Chapter 6 External Memory
William Stallings Computer Organization and Architecture 7 th Edition Chapter 6 External Memory Types of External Memory Magnetic Disk RAID Removable Optical CD-ROM CD-Recordable (CD-R) CD-R/W DVD Magnetic
More informationDisks and RAID. Profs. Bracy and Van Renesse. based on slides by Prof. Sirer
Disks and RAID Profs. Bracy and Van Renesse based on slides by Prof. Sirer 50 Years Old! 13th September 1956 The IBM RAMAC 350 Stored less than 5 MByte Reading from a Disk Must specify: cylinder # (distance
More informationRAID Overview: Identifying What RAID Levels Best Meet Customer Needs. Diamond Series RAID Storage Array
ATTO Technology, Inc. Corporate Headquarters 155 Crosspoint Parkway Amherst, NY 14068 Phone: 716-691-1999 Fax: 716-691-9353 www.attotech.com sales@attotech.com RAID Overview: Identifying What RAID Levels
More informationRemoving Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays. Red Hat Performance Engineering
Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays Red Hat Performance Engineering Version 1.0 August 2013 1801 Varsity Drive Raleigh NC
More informationThe IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000
The IntelliMagic White Paper: Storage Performance Analysis for an IBM Storwize V7000 Summary: This document describes how to analyze performance on an IBM Storwize V7000. IntelliMagic 2012 Page 1 This
More informationGeoGrid Project and Experiences with Hadoop
GeoGrid Project and Experiences with Hadoop Gong Zhang and Ling Liu Distributed Data Intensive Systems Lab (DiSL) Center for Experimental Computer Systems Research (CERCS) Georgia Institute of Technology
More informationUsing Synology SSD Technology to Enhance System Performance Synology Inc.
Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_WP_ 20121112 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges... 3 SSD
More informationRAID Storage Systems with Early-warning and Data Migration
National Conference on Information Technology and Computer Science (CITCS 2012) RAID Storage Systems with Early-warning and Data Migration Yin Yang 12 1 School of Computer. Huazhong University of yy16036551@smail.hust.edu.cn
More informationOracle Database 10g: Performance Tuning 12-1
Oracle Database 10g: Performance Tuning 12-1 Oracle Database 10g: Performance Tuning 12-2 I/O Architecture The Oracle database uses a logical storage container called a tablespace to store all permanent
More informationAgenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.
Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance
More informationIntroduction Disks RAID Tertiary storage. Mass Storage. CMSC 412, University of Maryland. Guest lecturer: David Hovemeyer.
Guest lecturer: David Hovemeyer November 15, 2004 The memory hierarchy Red = Level Access time Capacity Features Registers nanoseconds 100s of bytes fixed Cache nanoseconds 1-2 MB fixed RAM nanoseconds
More informationDelivering Quality in Software Performance and Scalability Testing
Delivering Quality in Software Performance and Scalability Testing Abstract Khun Ban, Robert Scott, Kingsum Chow, and Huijun Yan Software and Services Group, Intel Corporation {khun.ban, robert.l.scott,
More informationStreaming and Virtual Hosted Desktop Study: Phase 2
IT@Intel White Paper Intel Information Technology Computing Models April 1 Streaming and Virtual Hosted Desktop Study: Phase 2 Our current findings indicate that streaming provides better server loading
More informationDatabase Management Systems
4411 Database Management Systems Acknowledgements and copyrights: these slides are a result of combination of notes and slides with contributions from: Michael Kiffer, Arthur Bernstein, Philip Lewis, Anestis
More informationCapacity Planning Process Estimating the load Initial configuration
Capacity Planning Any data warehouse solution will grow over time, sometimes quite dramatically. It is essential that the components of the solution (hardware, software, and database) are capable of supporting
More informationPerformance and scalability of a large OLTP workload
Performance and scalability of a large OLTP workload ii Performance and scalability of a large OLTP workload Contents Performance and scalability of a large OLTP workload with DB2 9 for System z on Linux..............
More informationCase for storage. Outline. Magnetic disks. CS2410: Computer Architecture. Storage systems. Sangyeun Cho
Case for storage CS24: Computer Architecture Storage systems Sangyeun Cho Computer Science Department Shift in focus from computation to communication & storage of information Eg, Cray Research/Thinking
More informationStreaming and Virtual Hosted Desktop Study
White Paper Intel Information Technology Streaming, Virtual Hosted Desktop, Computing Models, Client Virtualization Streaming and Virtual Hosted Desktop Study Benchmarking Results As part of an ongoing
More informationDeconstructing Storage Arrays
Deconstructing Storage Arrays Timothy E. Denehy, John Bent, Florentina I. opovici, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau Department of Computer Sciences, University of Wisconsin, Madison
More informationDIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION
DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies
More informationOPTIMIZING VIRTUAL TAPE PERFORMANCE: IMPROVING EFFICIENCY WITH DISK STORAGE SYSTEMS
W H I T E P A P E R OPTIMIZING VIRTUAL TAPE PERFORMANCE: IMPROVING EFFICIENCY WITH DISK STORAGE SYSTEMS By: David J. Cuddihy Principal Engineer Embedded Software Group June, 2007 155 CrossPoint Parkway
More informationThe IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000)
The IntelliMagic White Paper on: Storage Performance Analysis for an IBM San Volume Controller (SVC) (IBM V7000) IntelliMagic, Inc. 558 Silicon Drive Ste 101 Southlake, Texas 76092 USA Tel: 214-432-7920
More informationRecommended hardware system configurations for ANSYS users
Recommended hardware system configurations for ANSYS users The purpose of this document is to recommend system configurations that will deliver high performance for ANSYS users across the entire range
More informationTableau Server 7.0 scalability
Tableau Server 7.0 scalability February 2012 p2 Executive summary In January 2012, we performed scalability tests on Tableau Server to help our customers plan for large deployments. We tested three different
More informationIncidentMonitor Server Specification Datasheet
IncidentMonitor Server Specification Datasheet Prepared by Monitor 24-7 Inc October 1, 2015 Contact details: sales@monitor24-7.com North America: +1 416 410.2716 / +1 866 364.2757 Europe: +31 088 008.4600
More informationUsing Synology SSD Technology to Enhance System Performance. Based on DSM 5.2
Using Synology SSD Technology to Enhance System Performance Based on DSM 5.2 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges... 3 SSD Cache as Solution...
More informationVIRTUALIZATION, The next step for online services
Scientific Bulletin of the Petru Maior University of Tîrgu Mureş Vol. 10 (XXVII) no. 1, 2013 ISSN-L 1841-9267 (Print), ISSN 2285-438X (Online), ISSN 2286-3184 (CD-ROM) VIRTUALIZATION, The next step for
More informationUsing Multipathing Technology to Achieve a High Availability Solution
Using Multipathing Technology to Achieve a High Availability Solution Table of Contents Introduction...3 Multipathing Technology...3 Multipathing I/O Implementations...5 Storage Redundancy...5 Infortrend
More informationData Storage - II: Efficient Usage & Errors
Data Storage - II: Efficient Usage & Errors Week 10, Spring 2005 Updated by M. Naci Akkøk, 27.02.2004, 03.03.2005 based upon slides by Pål Halvorsen, 12.3.2002. Contains slides from: Hector Garcia-Molina
More informationChapter 1 Computer System Overview
Operating Systems: Internals and Design Principles Chapter 1 Computer System Overview Eighth Edition By William Stallings Operating System Exploits the hardware resources of one or more processors Provides
More informationNetBackup Performance Tuning on Windows
NetBackup Performance Tuning on Windows Document Description This document contains information on ways to optimize NetBackup on Windows systems. It is relevant for NetBackup 4.5 and for earlier releases.
More informationQuiz for Chapter 6 Storage and Other I/O Topics 3.10
Date: 3.10 Not all questions are of equal difficulty. Please review the entire quiz first and then budget your time carefully. Name: Course: Solutions in Red 1. [6 points] Give a concise answer to each
More informationCOSC 6374 Parallel Computation. Parallel I/O (I) I/O basics. Concept of a clusters
COSC 6374 Parallel Computation Parallel I/O (I) I/O basics Spring 2008 Concept of a clusters Processor 1 local disks Compute node message passing network administrative network Memory Processor 2 Network
More information