Developing NAND-memory SSD based Hybrid Filesystem



Int'l Conf. Par. and Dist. Proc. Tech. and Appl. PDPTA'15

Developing NAND-memory SSD based Hybrid Filesystem

Jaechun No
College of Electronics and Information Engineering, Sejong University, Seoul, Korea

Abstract
As NAND flash memory technology rapidly matures, SSD is becoming an excellent alternative for storage solutions because of its high random I/O throughput and low power consumption. These SSD potentials have drawn great attention from IT enterprises seeking better I/O performance. However, the high SSD cost per capacity makes it less desirable to construct a large-scale storage subsystem composed solely of SSD devices. An alternative is to build a hybrid storage subsystem in which HDD and SSD devices are incorporated in an economical manner, exploiting the strengths of both devices. This paper presents a hybrid file system, called HybridFS, that attempts to utilize the advantages of HDD and SSD devices to provide a single, virtual address space by integrating both devices. HybridFS not only proposes an efficient implementation of file management in the hybrid storage subsystem but also suggests an experimental framework for making use of the excellent features of existing file systems. Several performance evaluations were conducted to verify the effectiveness and suitability of HybridFS.

Keywords: flash memory, hybrid system, SSD, data layout, extent

1. Introduction
The rapid growth of flash memory technology has led to the advent of the SSD (Solid-State Drive) in the storage platform. Because an SSD incurs no mechanical overhead, such as seek time, to locate the desired data, it has drawn great attention from IT markets seeking improved I/O performance. These days SSD is becoming an excellent storage solution for providing optimal I/O performance. For instance, a database produces a large number of writes to store data for updates, rollback, and logging.
In this case, SSD offers a good opportunity to achieve desirable I/O performance. Despite its performance potential, the common usage of SSD is currently restricted to small-capacity devices, such as mobile equipment. The key obstacle to wider SSD adoption in large-scale storage subsystems is its high cost per capacity ($3/GB for SSD versus $0.3/GB for HDD) [1]. Even though the cost of flash memory keeps decreasing, the price of SSD is still much higher than that of HDD. Such a high cost/capacity ratio makes it less desirable to build large-scale storage subsystems composed solely of SSD devices. An alternative storage solution is to build a hybrid storage subsystem where SSD and HDD devices are combined in a cost-effective way, while making use of the strengths of both devices.

In this paper, we present a hybrid file system, called HybridFS, developed to exploit the hybrid storage structure. HybridFS is capable of delivering performance comparable to a file system installed on SSD devices, while offering much larger storage capacity at lower cost. This is achieved by integrating vast, low-cost HDD storage space with a small portion of SSD space through several optimization schemes that address the strengths and shortcomings of both devices.

This paper is organized as follows: In Section 2, we discuss the design motivations and related studies. In Section 3, we analyze and compare the performance of HDD and SSD. In Section 4, we present the design of HybridFS, and in Section 5, we conclude with a summary.

2. Related Studies
As pointed out by [2], [3], the major drawbacks of flash memory are write latency due to per-block erasure and the cleaning problem. Many flash file systems [4], [5], [6] take the out-of-place approach proposed by the log-structured file system [7] to reduce the erasure overhead of flash memory.
The well-known characteristic of a log-structured file system is its sequential, out-of-place update using logs. The logs are divided into segments to maintain large free disk areas and to avoid space fragmentation. Garbage collection is performed per segment, by copying live data out of a segment. This update approach has been adopted by most flash file systems to minimize the block erasure overhead.

JFFS and JFFS2 [6] maintain file metadata logs for sequential writing. To mount the file system, they must scan all the logs in flash memory, which may take a considerably long time. Also, both flash file systems were designed for embedded devices with small NAND flash memories. TFFS [5] is an optimized file system suited to small embedded systems with less than 4 KB of RAM. TFFS targets NOR devices and provides several useful functions, such as a tailored API for embedded devices and concurrent transaction recovery. ELF [4] is built for sensor nodes and keeps a log entry for each flash page. ELF tries to optimize the logging overhead by reducing the number of log entries. However, most flash file systems have been developed for small flash memories and are therefore not appropriate for large-scale data storage.
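The out-of-place update scheme described above can be pictured with a small sketch. This is our own toy model (the segment sizes, array bounds, and function names are invented, not code from any of the cited file systems): a write never overwrites in place; it appends to the current segment and marks the old copy dead, and garbage collection copies only the live blocks out of a victim segment before it is erased.

```c
#include <assert.h>
#include <string.h>

/* Toy model of out-of-place update in a log-structured store.
 * Sizes are deliberately tiny; real systems use much larger segments
 * and pick clean segments rather than cycling round-robin. */

#define SEG_BLOCKS 4            /* blocks per segment */
#define NUM_SEGS   4
#define MAX_LBA    16

typedef struct {
    int lba[SEG_BLOCKS];        /* logical block held in each slot, -1 = free/dead */
    int used;                   /* slots appended so far */
} Segment;

static Segment segs[NUM_SEGS];
static int cur;                 /* segment currently receiving appends */
static int map[MAX_LBA];        /* logical block -> seg*SEG_BLOCKS+slot, -1 = unmapped */

static void init(void) {
    memset(segs, 0, sizeof segs);
    for (int s = 0; s < NUM_SEGS; s++)
        for (int i = 0; i < SEG_BLOCKS; i++)
            segs[s].lba[i] = -1;
    for (int i = 0; i < MAX_LBA; i++)
        map[i] = -1;
    cur = 0;
}

/* Out-of-place write: invalidate the old copy, append the new one. */
static void write_block(int lba) {
    if (map[lba] >= 0)          /* old copy becomes dead, not overwritten */
        segs[map[lba] / SEG_BLOCKS].lba[map[lba] % SEG_BLOCKS] = -1;
    while (segs[cur].used == SEG_BLOCKS)   /* find a segment with room       */
        cur = (cur + 1) % NUM_SEGS;        /* (cleaning must keep one free)  */
    int slot = segs[cur].used++;
    segs[cur].lba[slot] = lba;
    map[lba] = cur * SEG_BLOCKS + slot;
}

/* Per-segment garbage collection: copy live blocks out, then free the segment. */
static int gc(int victim) {
    if (cur == victim)                     /* never append into the victim */
        cur = (cur + 1) % NUM_SEGS;
    int copied = 0;
    for (int i = 0; i < SEG_BLOCKS; i++) {
        int lba = segs[victim].lba[i];
        if (lba >= 0) {
            segs[victim].lba[i] = -1;
            write_block(lba);              /* live data migrates forward */
            copied++;
        }
    }
    segs[victim].used = 0;                 /* victim can now be erased whole */
    return copied;
}
```

The point of the sketch is that an erase is only ever paid per segment, in bulk, rather than per small write, which is exactly why this layout suits flash memory.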

As HybridFS does with SSD devices, there have been several interesting attempts to make use of storage class memories. LiFS [8] stores file system metadata in smaller, faster storage class memory, such as MRAM. Conquest [9] uses persistent RAM for storing small files and file system metadata; only the remaining data of large files are stored on disk. MRAMFS [10] goes further by adding compression to overcome the limited size of NVRAM. HeRMES [11] proposed keeping all file metadata in MRAM and applying compression to minimize the space required for metadata. hFS [12] attempts to combine the strengths of both FFS and the log-structured file system by separating file data and file metadata into two partitions. Besides, Agrawal et al. [13] describe interesting SSD design issues using Samsung's NAND flash. They also suggest several ways of obtaining improved I/O performance on SSD by analyzing a variety of I/O traces extracted from real systems. However, there has been little file system work for hybrid storage subsystems where HDD and SSD are incorporated to provide a large-scale, virtualized address space.

3. Performance Evaluation of HDD and SSD
To design a new file system, we were interested in which storage device produces good I/O performance on distributed small files and on sequential big files. The first question concerns the analysis of metadata access patterns, the second that of data access patterns. To examine these storage issues, we tested HDD and SSD with three kinds of file systems, using Postmark and IOzone [16]. The file systems selected for this test are XFS, NILFS, and Ext2 [14], [15]. These three file systems manage blocks using quite different procedures. XFS is a journaling file system designed to support fast crash recovery. To allow large I/O requests to a file, it allocates the file as contiguously as possible.
This is because the size of a request to the underlying drives is limited by the range of contiguous blocks in the file being read or written. We chose XFS to test the performance of storage with sequential access patterns. NILFS is a new log-structured file system for Linux. Instead of overwriting existing blocks, it continuously appends consistent sets of modified or newly created blocks into segmented disk regions. Note that many random accesses are needed to read a file consisting of distributed blocks. We use NILFS to test the performance of storage with random access patterns. Ext2 does not support any scheme for allocating blocks sequentially; in this experiment, the Ext2 file system simply serves as a baseline for comparison with the other file systems.

The machine used for these performance measurements has an Intel Xeon 3 GHz CPU, 3 GB of memory, a 7200 rpm IDE hard drive with a 32 MB disk cache, and a SATA-connected Mtron SSD. The Linux kernel version used for the measurement was 2.6.23. Both the disk block and page sizes are 4 KB.

3.1 Postmark Benchmark

Fig. 1: I/O bandwidth for the file create operations.
Fig. 2: I/O bandwidth for the file read operations.
Fig. 3: I/O bandwidth for the file delete operations.

We used Postmark to evaluate the I/O performance of small files. Postmark is a benchmark that measures the performance of ephemeral small files, such as electronic mail, net news, and web-based commerce. It first creates a specified number of files; then it performs a mixture of create, delete, read, and append operations on them; finally, all files are deleted. We used the default settings of Postmark v1.5, in which the block size for reads and writes is 512 bytes, the read/append and create/delete ratios are 5, and the file size ranges from 500 bytes to 9.77 KB. In this test, the number of files is set to 10000 and the number of transactions to 50000.

Figure 1 shows the performance for creating files. As can be seen in the figure, Ext2 and NILFS produce better write performance with SSD than with HDD. An Ext2 file consists of metadata and contents. The metadata of the Ext2 file system consists of the block bitmap, inode bitmap, superblock, inode, and so on. Modifications to these metadata require relocating the disk arm, because the Ext2 file system stores each kind of metadata in a fixed area. Ext2 manages the data blocks of a file through inode structures that hold 12 direct block pointers and 3 indirect block pointers. Because of this design, data access becomes random for file sizes over 48 KB with a 4 KB block size. This randomness in accessing an Ext2 file causes significant overhead in repeatedly relocating the disk head to search for metadata or data. With SSD, however, the random operations performed to access a file no longer require mechanical actions. This results in much better performance for the Ext2 file system installed on SSD than on HDD. NILFS produces better performance than the Ext2 file system on the whole.
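The 48 KB threshold for Ext2 follows directly from the inode layout: 12 direct pointers times a 4 KB block covers exactly 48 KB, after which every access must traverse one or more indirect blocks, each level adding another disk access on HDD. A small sketch of this arithmetic (the helper name is our own; the pointer counts are standard Ext2 with 4 KB blocks and 4-byte block pointers):

```c
#include <assert.h>

/* Ext2 block-mapping depth with a 4 KB block size: an inode holds
 * 12 direct pointers, so offsets past 12 * 4 KB = 48 KB must go
 * through single, double, or triple indirect blocks. */

#define BLOCK_SIZE     4096UL
#define DIRECT_PTRS    12UL
#define PTRS_PER_BLOCK (BLOCK_SIZE / 4)   /* 1024 4-byte block pointers */

/* Number of indirection levels needed to reach a byte offset. */
static int indirection_levels(unsigned long offset) {
    unsigned long blk = offset / BLOCK_SIZE;
    if (blk < DIRECT_PTRS)
        return 0;                                  /* direct: first 48 KB  */
    blk -= DIRECT_PTRS;
    if (blk < PTRS_PER_BLOCK)
        return 1;                                  /* single indirect      */
    blk -= PTRS_PER_BLOCK;
    if (blk < PTRS_PER_BLOCK * PTRS_PER_BLOCK)
        return 2;                                  /* double indirect      */
    return 3;                                      /* triple indirect      */
}
```

Each extra level is an extra (potentially random) block read, which is what penalizes HDD but costs an SSD almost nothing.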
NILFS is a log-structured file system that allocates blocks by appending and stores an inode and its data as a group, thereby yielding better performance than the Ext2 file system. XFS shows the worst I/O performance in Figure 1, even with SSD. At present, we do not have a clear idea why XFS performs so poorly with Postmark. Possibly, SSD may not be suitable for the sequential block allocation used in XFS.

Figure 2 shows the read performance. As in the results for file create operations, SSD produces better performance than HDD for Ext2 and NILFS. As mentioned above, Ext2 needs to read data from disk through the inode structure, incurring disk seek time on HDD. This does not happen with SSD. To minimize the disk seek overhead, NILFS groups an inode and data for a file, resulting in much better read performance than Ext2. This also holds with SSD.

Figure 3 shows the performance of file delete operations. To delete a file, Ext2 sets the associated data block and inode bitmaps of the file to 0. Because of this procedure, the experiment causes many random accesses. The performance degradation with HDD reflects these random accesses in both Ext2 and NILFS. Notably, however, SSD does not suffer from these random accesses.

3.2 IOzone Benchmark
IOzone shows different performance patterns from Postmark because it generates files of sharply different sizes. Postmark generates tiny files, between 500 bytes and 9.77 KB, whereas IOzone produces large files, between 4 KB and 2 GB. We tested the devices using the Write/Re-write, Read/Re-read, and Random-write/Random-read modes of IOzone. The Write test measures the performance of writing a new file. When a new file is written, both the data and the metadata must be stored on disk. The Re-write test measures the performance of writing a file that already exists. When a file is re-written, the required I/O overhead is clearly less than for the file write operation because the associated metadata already exists.
The Read test measures the performance of reading an existing file. The Re-read test measures the performance of reading a file that was recently read. In general, the performance of re-read operations is higher than that of read operations because recently read data is cached. The Random-read and Random-write tests measure the performance of reading from or writing to a file with random access patterns. The performance of random I/O operations can be affected by several factors, such as the size of the file cache, the number of disks, and seek latencies.

Fig. 4: Write performance on HDD
Fig. 5: Write performance on SSD
Fig. 6: Re-write performance on HDD
Fig. 7: Re-write performance on SSD
Fig. 8: Random-write performance on HDD
Fig. 9: Random-write performance on SSD
Fig. 10: Read performance on HDD

As can be seen in Figures 4 and 5, the three file systems show similar performance on HDD and SSD. In particular, NILFS shows almost the same behavior for all file sizes. Ext2 and XFS generate similar write performance on SSD and HDD up to a file size of 64 MB. When the file size increases beyond 128 MB, the write performance on SSD drops sharply. From these results, we surmise that SSD is not well suited to writing big files, either with distributed blocks as in Ext2 or with sequential blocks as in XFS.

In a file system, many blocks are overwritten several times; for instance, an inode structure is updated whenever a file is accessed. Therefore, re-write performance is important when designing a file system. Figures 6 and 7 show the re-write performance on HDD and SSD. The I/O behavior of this measurement closely resembles the write performance, but the re-write test shows better performance than the write test because the re-write operation does not need to write metadata. A similar performance decline on SSD also occurs for XFS and Ext2. SSD does not support the in-place data modifications that are frequently requested by re-write operations: to modify data, the SSD must first erase the target blocks and then write the modified data to them. This complicated operation is the likely reason for the decreased I/O performance.

Figures 8 and 9 show the random-write performance. In these figures, NILFS produces better performance with HDD than with SSD because, as mentioned above, SSD does not support in-place data modification. Also, SSD uses different block units for write and erase operations. These two features cause lower performance with SSD than with HDD. Furthermore, the random-write test includes the overhead of repeatedly erasing and writing to random locations within the file, resulting in performance degradation on SSD.

Figures 10 and 11 show the read performance on HDD and SSD. In these figures, the performance of NILFS shows I/O behavior similar to Ext2, unlike in the write tests. NILFS is a log-structured file system in which the metadata and data are sequentially stored as a group in the same location. Because of this feature of NILFS, the read operation on HDD shows results similar to those on SSD. The result for sequential read access on HDD is also very similar to that on SSD. The re-read and random-read graphs show shapes similar to the read operation. The re-read performance is better than the read performance, as in Figures 10 and 11.
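The erase-before-write behavior blamed above for the re-write and random-write declines can be expressed as a simple cost model: NAND programs in pages but erases in much larger blocks, so a naive in-place update of even a few bytes pays for erasing and reprogramming whole erase blocks. The page and erase-block sizes below are typical NAND values chosen for illustration, not measurements of the Mtron drive, and a real SSD's flash translation layer remaps writes to avoid part of this penalty:

```c
#include <assert.h>

/* Illustrative cost model for erase-before-write. Sizes are typical
 * NAND values (4 KB pages, 64 pages per erase block), not measured
 * from the device used in the experiments. */

#define PAGE_SIZE   4096UL
#define ERASE_BLOCK (64UL * PAGE_SIZE)    /* 256 KB erase block */

/* Pages that must be reprogrammed to update `len` bytes in place:
 * every erase block touched by the range is erased and rewritten whole. */
static unsigned long pages_reprogrammed(unsigned long offset, unsigned long len) {
    unsigned long first = (offset / ERASE_BLOCK) * ERASE_BLOCK;
    unsigned long last  = ((offset + len - 1) / ERASE_BLOCK + 1) * ERASE_BLOCK;
    return (last - first) / PAGE_SIZE;
}
```

An out-of-place (log-structured) write of the same data would program only ceil(len / PAGE_SIZE) pages, which is why append-style allocation, as in NILFS, sidesteps most of this cost.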
Comparing the read performance with the re-read performance, SSD generates higher I/O bandwidth than HDD for file sizes of less than 256 KB.

Fig. 11: Read performance on SSD

The random-read performance on SSD is better than on HDD in general, especially at 4 KB and above 8 MB. In this test, HDD needs seek operations to access random locations within a file, but SSD does not. This device characteristic accounts for the difference in random-read performance.

3.3 Performance Review
The test for small files (from 500 bytes to 9.77 KB) performed in Section 3.1 shows that SSD is well suited to reading, writing, and deleting many small files with random accesses. The test for general files performed in Section 3.2 shows that the performance of HDD is almost the same as that of SSD for file sizes of less than 64 MB. With larger files, SSD shows a sharp performance decline for Ext2 and XFS, except for random-read operations. This suggests that HDD delivers good I/O performance for general files regardless of the access pattern of the file system. If a file system manages data sequentially, HDD can handle large files more efficiently than SSD.

4. Design of a Hybrid Filesystem

Fig. 12: A prototype of HybridFS

The key idea of HybridFS is the separation of data and metadata, storing each in a different type of storage according to its characteristics. As mentioned before, SSD is good for storing distributed, small files, so HybridFS keeps metadata on SSD. On the other hand, HDD is good for sequential, big files, so HybridFS is designed to store data contiguously on HDD. Figure 12 shows the disk layout of HybridFS. The creation of HybridFS starts by duplicating the metadata stored on HDD, such as the superblock, group descriptor table, block and inode bitmaps, and inode table, onto SSD. We added a new location field in the HDD superblock, since the SSD and HDD partitions should know the block locations of each other.
After that, using the information stored in the new location field, HybridFS is mounted on the SSD partition. Using HybridFS consists of two steps. The first step is to format the two devices for storing file system metadata and real data. In the second step, we configure several pieces of metadata, including the superblock, and also add the extra

field in the superblock to make both the HDD and SSD partitions recognize the data locations of each other. To duplicate the file system metadata to the SSD partition, we first allocate the SSD data blocks where the metadata are stored and then create the location table, in which each position denotes the SSD location of a piece of metadata. The HybridFS mount is performed on the SSD partition by calling sys_mount(). In this function, HybridFS accesses file system metadata from the SSD partition by referring to the location table, while recording the necessary system metadata to the HDD partition. Figure 13 shows the location table for storing file system metadata on the SSD partition. The remaining steps of HybridFS are now being implemented.

Fig. 13: HybridFS location table structure

5. Conclusions
As flash memory technology rapidly matures, SSD has drawn great attention from IT enterprises as an attractive storage solution for fast I/O processing needs. SSD not only generates high I/O performance, owing to the absence of mechanical moving parts, but also provides significant power savings. However, despite its promising potential, most SSD usage in real products has been limited to small-capacity devices, such as mobile equipment, because of its high cost per capacity. In this paper, we proposed a way of integrating SSD devices with HDD devices in a cost-effective manner to build a large-scale, virtual address space. We tested HDD and SSD to gain insight for developing a hybrid file system. The performance results obtained using IOzone and Postmark show different shapes for Ext2, XFS, and NILFS. In the evaluation, we showed that SSD is good for storing distributed, small files, whereas HDD is good for keeping sequential, big files. The prototype of HybridFS is designed to exploit this by storing metadata on SSD and real data on HDD.
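The metadata-duplication step can be pictured as follows. The structure, field names, and allocation policy are our own illustration of the idea, not actual HybridFS code: each class of metadata copied from the HDD partition gets a location-table entry recording where its duplicate lives on the SSD partition, and the mount path consults that table.

```c
#include <assert.h>
#include <string.h>

/* Sketch of a HybridFS-style location table: for each class of
 * duplicated file system metadata, record where the original lives
 * on the HDD partition and where its copy lives on the SSD partition.
 * Names and layout are illustrative only. */

enum md_kind { MD_SUPERBLOCK, MD_GROUP_DESC, MD_BLOCK_BITMAP,
               MD_INODE_BITMAP, MD_INODE_TABLE, MD_KINDS };

struct loc_entry {
    unsigned long hdd_block;   /* original location on the HDD partition   */
    unsigned long ssd_block;   /* duplicated location on the SSD partition */
};

struct loc_table {
    struct loc_entry e[MD_KINDS];
    unsigned long next_free;   /* next allocatable SSD block (toy policy)  */
};

/* "Duplicate" one metadata region: allocate SSD blocks for the copy and
 * record the mapping; a real implementation would also copy the blocks. */
static unsigned long duplicate_md(struct loc_table *t, enum md_kind k,
                                  unsigned long hdd_block, unsigned long nblocks) {
    t->e[k].hdd_block = hdd_block;
    t->e[k].ssd_block = t->next_free;
    t->next_free += nblocks;
    return t->e[k].ssd_block;
}

/* Mount-time lookup: where does this kind of metadata live on SSD? */
static unsigned long ssd_location(const struct loc_table *t, enum md_kind k) {
    return t->e[k].ssd_block;
}
```

At mount time the table answers every metadata lookup from the SSD side, while the hdd_block fields keep the two partitions aware of each other's layout, mirroring the extra superblock field described above.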
We will further upgrade HybridFS, combining the advantages of both HDD and SSD to generate high I/O performance.

6. Acknowledgment
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2014R1A2A2A01002614).

References
[1] M. Saxena and M. Swift, "FlashVM: Virtual Memory Management on Flash," in Proceedings of the 2010 USENIX Annual Technical Conference, Boston, MA, 2010.
[2] W. K. Josephson, L. A. Bongo, and D. Flynn, "DFS: A File System for Virtualized Flash Storage," in Proceedings of the 8th USENIX Conference on File and Storage Technologies, San Jose, USA, 2010.
[3] G. Soundararajan, V. Prabhakaran, M. Balakrishnan, and T. Wobber, "Extending SSD Lifetimes with Disk-Based Write Caches," in Proceedings of the 8th USENIX Conference on File and Storage Technologies, San Jose, USA, 2010.
[4] H. Dai, M. Neufeld, and R. Han, "ELF: An Efficient Log-Structured Flash File System for Micro Sensor Nodes," in SenSys '04, Baltimore, USA, 2004.
[5] E. Gal and S. Toledo, "A Transactional Flash File System for Microcontrollers," in Proceedings of the 2005 USENIX Annual Technical Conference, Anaheim, CA, 2005.
[6] D. Woodhouse, "JFFS: The Journalling Flash File System," in Ottawa Linux Symposium, 2001.
[7] M. Rosenblum and J. Ousterhout, "The Design and Implementation of a Log-Structured File System," in Proceedings of the 13th ACM Symposium on Operating Systems Principles, 1991, pp. 1-15.
[8] S. Ames, N. Bobb, K. Greenan, O. Hofmann, M. W. Storer, C. Maltzahn, E. L. Miller, and S. A. Brandt, "LiFS: An Attribute-Rich File System for Storage Class Memories," in Proceedings of the 23rd IEEE/14th NASA Goddard Conference on Mass Storage Systems and Technologies, College Park, USA, 2006.
[9] A. Wang, G. Kuenning, P. Reiher, and G. Popek, "Conquest: Better Performance Through a Disk/Persistent-RAM Hybrid File System," in Proceedings of the 2002 USENIX Annual Technical Conference, Monterey, CA, 2002.
[10] N. K. Edel, D. Tuteja, E. L. Miller, and S. A. Brandt, "MRAMFS: A Compressing File System for Non-Volatile RAM," in Proceedings of the 12th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Volendam, Netherlands, 2004.
[11] E. L. Miller, S. A. Brandt, and D. D. E. Long, "HeRMES: High-Performance Reliable MRAM-Enabled Storage," in Proceedings of the 8th IEEE Workshop on Hot Topics in Operating Systems, Schloss Elmau, Germany, 2001, pp. 83-87.
[12] Z. Zhang and K. Ghose, "hFS: A Hybrid File System Prototype for Improving Small File and Metadata Performance," in EuroSys '07, Lisbon, Portugal, 2007.
[13] N. Agrawal, V. Prabhakaran, T. Wobber, J. D. Davis, M. Manasse, and R. Panigrahy, "Design Tradeoffs for SSD Performance," in Proceedings of the 2008 USENIX Annual Technical Conference, 2008.
[14] R. Card, T. Ts'o, and S. Tweedie, "Design and Implementation of the Second Extended Filesystem," in Proceedings of the First Dutch International Symposium on Linux, 1995.
[15] A. Sweeney, D. Doucette, W. Hu, C. Anderson, M. Nishimoto, and G. Peck, "Scalability in the XFS File System," in Proceedings of the USENIX 1996 Annual Technical Conference, San Diego, USA, 1996.
[16] IOzone. Available at: http://www.iozone.org