Exploiting Self-Adaptive, 2-Way Hybrid File Allocation Algorithm
Jaechun No, College of Electronics and Information Engineering, Sejong University, Korea
Sung-Soon Park, Dept. of Computer Engineering, Anyang University and Gluesys Co. Ltd., Korea

Abstract

We present hybridfs, a file system with a hybrid structure that exploits the performance potential of NAND flash-based SSDs (Solid-State Drives). As flash memory technology has rapidly improved, SSDs have come into wide use as nonvolatile storage media in IT products. Applications looking for better I/O performance attempt to achieve the desired bandwidth by employing SSDs in their storage subsystems. However, building a large-scale SSD storage subsystem raises several issues that need to be addressed, including the peculiar physical characteristics of flash memory and the high cost per capacity of SSDs compared to HDD devices. This paper presents a new form of self-adaptive, hybrid file system, called hybridfs, that addresses these issues. The main goal of hybridfs is to combine the attractive features of both HDD and SSD devices to construct a large-scale, virtualized address space in a cost-effective way. The performance evaluation shows that hybridfs generates bandwidth comparable to that of a file system installed on SSD devices, while offering a much larger storage capacity.

Keywords: SSD, transparent file mapping, data section, 2-way extent allocation, data layout

I. Introduction

In this paper, we present the hybridfs file system, whose main goal is to integrate the advantages of both SSD and HDD devices in a cost-effective way. As the potential of SSDs has been recognized, including high random I/O performance and low power consumption [1,13], SSDs are widely used in IT products these days. The performance superiority of SSDs over HDDs has become a driving force for numerous research efforts related to SSDs, with the expectation of generating high I/O performance in various applications. For instance, multimedia or database applications can obtain performance benefits by employing SSD storage subsystems to serve a large number of I/O requests.

Despite their performance potential, the common usage of SSDs is currently restricted to small memory devices, such as mobile equipment. The key obstacle to wider SSD adoption in large-scale storage subsystems is the high cost per capacity ($3/GB for SSD versus $0.3/GB for HDD) [1]. Even though the cost of flash memory continues to decrease, the price of an SSD is still much higher than that of an HDD. Such a high cost/capacity ratio makes it less desirable to build large-scale storage subsystems solely composed of SSD devices. An alternative storage solution is to build a hybrid storage subsystem in which SSD and HDD devices are combined in a cost-effective way, while making use of the strengths of both devices.

In this paper, we present a hybrid file system, called hybridfs, developed to exploit this hybrid storage structure. HybridFS is capable of generating performance comparable to a file system installed on SSD devices, while offering much larger storage capacity at less cost. This is achieved by integrating vast, low-cost HDD storage space with a small portion of SSD space through several optimization schemes that address the strengths and shortcomings of both devices.
In this paper, we discuss how we implemented hybridfs to make use of the SSD's high I/O throughput, while providing a flexible internal structure that retains the high sequential read performance of existing file systems on HDD devices. This paper is organized as follows: In Section 2, we discuss the design motivations and related studies. In Section 3, we describe the detailed implementation issues of hybridfs. In Section 4, we present the performance measurements of hybridfs, and in Section 5, we conclude with a summary.

II. Related Study

As pointed out by [2,3], the major drawbacks of flash memory are the write latency due to per-block erasure and the cleaning problem. Many flash file systems [4,5,6] take the out-of-place update approach proposed by the log-structured file system [7] to reduce the semiconductor overhead of flash memory. The well-known characteristic of a log-structured file system is its sequential, out-of-place update using logs. The logs are divided into segments, to maintain large free disk areas and to avoid space fragmentation. Garbage collection is performed per segment, by copying live data out of a segment. This update approach has been adopted by most flash file systems to minimize the block erasure overhead. JFFS and JFFS2 [6] maintain file metadata logs for sequential writing. To mount the file system, they must scan all the logs in flash memory, which can take a considerably long time. Also, both flash file systems
were designed for embedded devices with small NAND flash memory. TFFS [5] is an optimized file system suited to small embedded systems with less than 4KB of RAM. TFFS targets NOR devices and provides several useful functions, such as an API tailored to embedded devices and concurrent transaction recovery. ELF [4] is built for sensor nodes and keeps a log entry for each flash page; it tries to optimize the logging overhead by reducing the number of log entries. However, most flash file systems have been developed for small flash memories and are therefore not appropriate for large-scale data storage.

As hybridfs does with SSD devices, several interesting attempts make use of storage-class memories. LiFS [8] stores file system metadata in smaller, faster storage-class memory, such as MRAM. Conquest [9] uses persistent RAM for storing small files and file system metadata; only the remaining data of large files is stored on disk. MRAMFS [10] goes further by adding compression to overcome the limited size of NVRAM. HeRMES [11] proposed keeping all file metadata in MRAM and applying compression to minimize the space required for metadata. hfs [12] attempts to combine the strengths of both FFS and the log-structured file system by separating file data and file metadata into two partitions. Besides these, Agrawal et al. [13] describe interesting SSD design issues using Samsung's NAND flash, and suggest several ways of obtaining improved I/O performance on SSDs by analyzing a variety of I/O traces extracted from real systems. However, there has been little file system work for hybrid storage subsystems in which HDD and SSD devices are incorporated to provide a large-scale, virtualized address space.

III. Hybrid File Mapping

A. Logical Data Layout

HybridFS was developed to exploit SSD performance advantages and data layout flexibility. The entire address space of hybridfs is constructed by combining two physical partitions: an SSD partition and an HDD partition. The SSD partition of hybridfs works as a write-through persistent cache and stores recently referenced files, recognized by file access time. It also stores the metadata used for allocating SSD extents, along with logs. The HDD partition of hybridfs, on the other hand, stores the backed-up file data and most of the file system metadata.

The SSD partition of hybridfs was designed to retain the SSD's performance potential and to provide transparent file mapping onto the SSD data layout. To generate improved I/O performance, hybridfs attempts to reduce allocation overhead by providing a flexible data layout and extent alignment to the flash block size at the file system level. The layout flexibility is accomplished by defining multiple data sections with different extent sizes. Aligning the extent size to the flash block size is performed at the file system level because hybridfs cannot access physical flash blocks through the FTL embedded in the SSD. The transparency of file mapping is achieved by separating the logical directory hierarchy from the file mapping to SSD data sections. The file mapping to SSD data sections is initialized at file system creation by reading the configuration file and is stored in an in-memory map table, as sketched below. In hybridfs, a file can be mapped to the appropriate data section according to file size, usage, and access pattern. Furthermore, a file that does not need fast I/O processing, such as a snapshot image, can bypass caching to the SSD partition entirely.
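To make the map table concrete, the following is a minimal sketch in C of how a directory-to-data-section mapping could be represented and consulted; the structure and function names (data_section, ds_map_entry, lookup_section) are our illustrative assumptions, not hybridfs source code, and the table contents follow the example of Figure 1.

```c
#include <string.h>

/* One SSD data section: extents of a fixed size.
 * Illustrative only; field names are assumptions. */
struct data_section {
    int  id;              /* D0, D1, D2, ...                       */
    long extent_kb;       /* extent size of this section, in KB    */
};

/* Map-table entry: a directory subtree mapped to a data section,
 * or marked to bypass the SSD cache (section < 0). */
struct ds_map_entry {
    const char *dir;      /* directory prefix, e.g. "/mnt/vod"     */
    int         section;  /* data section id, or -1 for SSD bypass */
};

/* In-memory map table, loaded from the configuration file at
 * file system creation (contents mirror Figure 1's example). */
static struct ds_map_entry map_table[] = {
    { "/mnt/usr",  0 },   /* small files           -> D0           */
    { "/mnt/vod",  2 },   /* large contiguous I/O  -> D2           */
    { "/mnt/snap", -1 },  /* snapshot images bypass the SSD cache  */
};

/* Longest-prefix match, so a sub-level directory can be remapped
 * without changing the hierarchy, simply by adding an entry. */
int lookup_section(const char *path)
{
    int best = -1;        /* default in this sketch: SSD bypass    */
    size_t best_len = 0;
    for (size_t i = 0; i < sizeof map_table / sizeof *map_table; i++) {
        size_t len = strlen(map_table[i].dir);
        if (strncmp(path, map_table[i].dir, len) == 0 && len > best_len) {
            best = map_table[i].section;
            best_len = len;
        }
    }
    return best;
}
```

Remapping /mnt/usr/a1/a2 from D0 to D2, as in Figure 1, would then amount to adding one entry { "/mnt/usr/a1/a2", 2 }, leaving the directory hierarchy untouched.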
HybridFS also makes it possible to move sub-level directories and their files to another data section without modifying the hierarchical structure, in case the file access pattern or usage changes.

Figure 1. An overall structure of hybridfs

Figure 1 illustrates an overview of the hybridfs data layout. As can be seen in the figure, the I/O unit of the SSD partition is an extent, whereas the I/O unit of the HDD partition is a block. Also, in the figure, three logical data sections, D0, D1, and D2, are defined with different extent sizes: 4 for D0, u for D1, and v for D2, where 4 < u < v. The information about the SSD configuration, including the number of data sections, each section's range, and the extent size of each data section, is determined at file system creation. Such a layout configuration makes it possible to manage the SSD address space according to file characteristics. For example, files with a large, contiguous access pattern, such as video files, can be assigned to the data section composed of large extents. This layout flexibility is also reflected in the file mapping. For instance, in Figure 1, since most files in /mnt/vod have a large, contiguous access pattern, D2, with the largest extent size, is mapped to that directory. Also, the files stored in /mnt/snap are configured to bypass the SSD cache, to prevent the costly SSD space from being consumed by unnecessary files. Figure 1 also shows that the mapping to a data section can be changed to another data section without impacting the hierarchical structure. For example, /mnt/usr/a1/a2 is remapped from D0 to D2, and the mapping for /mnt/vod/b1/b2 is changed from D2 to SSD bypass.

B. File Allocation

In hybridfs, file allocation on the SSD partition is performed per extent. In designing the file allocation scheme, the key objectives were to classify the SSD allocation space according to file access pattern and usage, to efficiently manage the costly storage capacity, and to align the data size to the flash block size to minimize the erasure latency. Classifying the SSD allocation space is performed by defining a different extent size for each data section and then mapping files to data sections according to their access characteristics. Also, hybridfs attempts to align file allocations to flash block boundaries, to control the block erasure overhead at the file system level. However, the shortcoming of the hybridfs extent scheme is extent fragmentation, where a portion of an extent is left unused. Furthermore, consistently aligning the extent size to the flash block may be difficult, especially when the extent size is smaller than the flash block size.

To alleviate the extent fragmentation problem, hybridfs divides the extents into two groups: one group for clean extents, where all blocks are free, and the other for extent segments, where a portion of the blocks in an extent remain unused. Using extent segments might sacrifice the performance advantage of the SSD, because file allocation on an extent segment might not be aligned to a flash block boundary. However, we decided to concentrate on making use of the costly SSD address space as much as possible. With an extent composed of x blocks, there are log2(x)+1 extent bitmaps: one for the first group and log2(x) segment bitmaps for the second group. HybridFS reuses only extent segments whose remaining number of free blocks is greater than or equal to half of the total blocks in an extent, to prevent file allocations from spreading across extent segments. The first bitmap of the second group, the 0th segment bitmap, shows the allocation status of extent segments in which x − 1 blocks are unused. Similarly, the i-th segment bitmap indicates the allocation status of extent segments in which x − 2^i (0 ≤ i ≤ log2(x) − 1) blocks are unused.

Figure 2. File allocation procedure

Figure 2 illustrates the file allocation process with an extent consisting of 64 blocks. This procedure is the default allocation process for the SSD partition and applies when either the extent size of a data section is equal to or larger than the flash block size, or the SSD flash block size is not clearly specified. In Figure 2, there are seven extent bitmaps: one for clean extents and the other six for extent segments.
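To make the bitmap arithmetic concrete, here is a minimal C sketch under our reading of the scheme above, anticipating the Figure 2 walkthrough below; the function and constant names are ours, not hybridfs code. An extent of x = 64 blocks has one clean-extent bitmap and log2(64) = 6 segment bitmaps; a segment with `used` allocated blocks sits in the smallest bitmap i with 2^i >= used (so x − 2^i blocks remain unused), and a segment is reused only while at least half of its blocks are free.

```c
#include <stdio.h>

#define EXTENT_BLOCKS 64   /* x: blocks per extent (Figure 2 example) */

/* Segment-bitmap index for an extent with `used` allocated blocks:
 * the smallest i such that 2^i >= used, i.e. x - 2^i <= free blocks.
 * Returns -1 for a clean extent (tracked by the clean-extent bitmap)
 * and -2 when free < x/2, so the segment is no longer a reuse
 * candidate (the half-full rule). Names are illustrative. */
int segment_bitmap_index(int used)
{
    if (used == 0)
        return -1;                       /* clean extent           */
    if (EXTENT_BLOCKS - used < EXTENT_BLOCKS / 2)
        return -2;                       /* too full to reuse      */
    int i = 0;
    while ((1 << i) < used)              /* smallest 2^i >= used   */
        i++;
    return i;
}

int main(void)
{
    /* Walk through Figure 2: A takes 1 block, B 11 blocks, C 48. */
    int used = 0;
    const int requests[] = { 1, 11, 48 };
    for (int r = 0; r < 3; r++) {
        used += requests[r];
        printf("after %2d-block file: used=%2d -> bitmap %d\n",
               requests[r], used, segment_bitmap_index(used));
    }
    /* Prints bitmap 0 after A (63 free), bitmap 4 after B (52 free),
     * and -2 after C (4 free): the extent leaves the candidate set. */
    return 0;
}
```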
Searching for an extent starts by scanning the clean extent bitmap. Suppose that file A, needing 1 block, is assigned to a clean extent, as shown in Figure 2(b). In this case, the corresponding bit of the clean extent bitmap is set to one. In Figure 2(c), if file B requires 11 blocks, hybridfs reuses the same extent to store the data, while setting the associated bit of the 0th segment bitmap to one. The corresponding bits from the 1st to the 3rd segment bitmaps become unavailable because, after the allocation of B, the remaining number of unused blocks is smaller than that indicated by those bitmaps. In Figure 2(d), if file C needs 48 blocks to store new data, then the associated bit of the 4th segment bitmap is set to one, to indicate that the corresponding extent is no longer a candidate for file allocation. Once all the files are released, the extent returns to the clean state.

When the extent size of a data section is smaller than the flash block size, hybridfs defines a virtual extent composed of multiple real extents. The size of a virtual extent matches the flash block size, and the allocation policy, including the extent bitmaps and extent segments, is executed per virtual extent. With a virtual extent consisting of y extents, there are log2(y)+1 virtual extent bitmaps: one for virtual clean extents and the other log2(y) for virtual extent segments. The virtual extent bitmaps indicate the allocation status of virtual extents in the same way as for real extents composed of x blocks. In other words, the i-th virtual segment bitmap indicates the allocation status of virtual extent segments in which y − 2^i (0 ≤ i ≤ log2(y) − 1) extents are unused.

Figure 3. Virtual extent allocation

Figure 3 shows how virtual extent segments are organized in a data section composed of 4KB extents on top of an SSD partition with a 256KB flash block. In this case, a virtual extent consists of 64 4KB real extents, and seven virtual extent bitmaps are created to indicate the allocation status of the virtual extents. In Figure 3, when file A stores 48KB of data, the bit of the virtual clean extent bitmap is set to one, followed by setting the associated bits from the 0th to the 3rd virtual segment bitmaps to be unavailable. On a subsequent 192KB allocation request, hybridfs reuses the virtual extent segment starting from the 16th real extent, while setting the associated bit of the 4th virtual segment bitmap to one.
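Under the same reading, the virtual-extent case only changes the unit of accounting: y = flash_block_size / extent_size real extents are grouped into one virtual extent, and the bitmap index is computed over used extents instead of used blocks. The following short C sketch illustrates this for Figure 3's configuration; again, all names are our assumptions rather than hybridfs source.

```c
#include <stdio.h>

/* Group y = flash_block_kb / extent_kb real extents into one virtual
 * extent, so allocations can still be aligned to flash block
 * boundaries even when extents are smaller than a flash block. */
struct virtual_extent_cfg {
    int extent_kb;      /* real extent size of the data section  */
    int flash_block_kb; /* SSD flash block size                  */
    int y;              /* real extents per virtual extent       */
};

static struct virtual_extent_cfg make_cfg(int extent_kb, int flash_block_kb)
{
    struct virtual_extent_cfg c = { extent_kb, flash_block_kb,
                                    flash_block_kb / extent_kb };
    return c;
}

/* Virtual-segment-bitmap index for a virtual extent with `used` real
 * extents allocated: smallest i with 2^i >= used, mirroring the
 * per-block rule; -1 = virtual clean extent, -2 = below half free. */
static int vseg_bitmap_index(const struct virtual_extent_cfg *c, int used)
{
    if (used == 0) return -1;
    if (c->y - used < c->y / 2) return -2;
    int i = 0;
    while ((1 << i) < used) i++;
    return i;
}

int main(void)
{
    /* Figure 3's setting: 4KB extents on a 256KB flash block => y = 64. */
    struct virtual_extent_cfg c = make_cfg(4, 256);
    int used_a = 48 / c.extent_kb;            /* file A: 48KB  -> 12 extents */
    int used_b = used_a + 192 / c.extent_kb;  /* +192KB        -> 60 extents */
    printf("y=%d, after A: bitmap %d, after 192KB: bitmap %d\n",
           c.y, vseg_bitmap_index(&c, used_a), vseg_bitmap_index(&c, used_b));
    /* After A the segment sits in virtual bitmap 4 (52 extents free);
     * after the 192KB request it drops below half free (-2) and
     * leaves the candidate set. */
    return 0;
}
```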
IV. Performance Evaluation

We evaluated the I/O performance of hybridfs against two file systems: ext2 [14] and xfs [15]. We chose ext2 because the block allocation implemented on the HDD partition of hybridfs is similar to that of ext2; the comparison gives an opportunity to observe how significantly the SSD partition of hybridfs affects the performance of various file operations. The reason for choosing xfs is its extent-based allocation using B+ trees, whereas hybridfs uses the pre-determined extent allocation configured at file system creation; we observe how differently the two allocation schemes behave on several file operations. The experimental platform has an Intel Xeon 3GHz CPU, 16GB of RAM, a 750GB SATA HDD, and an 80GB Fusion-io SSD. We compared the I/O performance of the three file systems using the IOzone benchmark [16], on top of both HDD and SSD devices. In the IOzone read evaluations, we observed that the memory cache greatly contributes to the high I/O bandwidth.

Figure 4. IOzone write
Figure 5. IOzone rewrite
Figure 6. IOzone read

Figure 4 shows the write bandwidth on both devices. As can be seen, the performance of hybridfs is almost three times higher than that of both ext2 and xfs installed on HDD devices, and 12% faster than xfs on the SSD device. Even though the performance of ext2 and xfs on SSD devices is much higher than on HDD devices, it is less desirable to build a large-scale storage subsystem solely composed of SSD devices, due to the limited storage capacity and high cost. The results show that, in such a case, hybridfs offers an alternative way of combining the SSD's high performance with the HDD's vast, low-cost storage capacity.

The evaluation of write operations on the two devices points out several interesting aspects. First, the difference between the worst and best performance of ext2 and xfs on HDD devices is much larger than the corresponding difference on SSD devices. For example, on the HDD device, when the performance with a 512MB file size is compared to that with a 256KB file size, ext2 shows almost an eight-fold speed improvement. On the SSD device, by contrast, the performance difference between 64KB and 512MB file sizes is about 35%, which is much less than on the HDD device. Xfs shows a similar behaviour to ext2, with about a 33% performance difference between 64KB and 512MB file sizes on the SSD device, while producing a much larger difference between the same file sizes on the HDD device. This indicates that the overhead of the HDD's moving parts deteriorates write performance more than the overhead of the SSD's semiconductor properties does. Due to the absence of mechanical moving overhead, both ext2 and xfs on SSD devices generate almost three times higher bandwidth than on HDD devices.
Second, although hybridfs uses the HDD portion as a metadata store, this internal structure does not greatly degrade write performance: except for the 64KB file size, the performance difference between small and large files is very small, even though more metadata accesses occur for small files than for large files.

In Figure 5, we compare the rewrite performance of hybridfs to that of both ext2 and xfs. Similar to the write performance on the HDD device, the rewrite throughput of hybridfs outperforms that of both ext2 and xfs installed on HDD devices. Compared to both file systems on SSD devices, hybridfs generates almost the same bandwidth as ext2 and is marginally faster than xfs.

In the file read operation (Figure 6), unlike in the write experiments on the HDD device, hybridfs does not produce a
noticeable performance difference. For ext2 and xfs, the performance difference between the worst and best cases on both devices is much smaller than in the write experiment. This is because, in the IOzone benchmark, most read requests are served from the memory cache.

V. Conclusion

As flash memory technology rapidly matures, SSDs have drawn great attention from IT enterprises as an attractive storage solution for fast I/O processing needs. An SSD not only generates high I/O performance, because of the absence of mechanical moving overhead, but also provides significant power savings. However, despite this promising potential, most SSD usage in real products has been limited to small memory devices, such as mobile equipment, because of the high cost per capacity. In this paper, we proposed a way of integrating SSD devices with HDD devices in a cost-effective manner, to build a large-scale, virtual address space. To achieve better I/O performance, hybridfs uses the SSD partition as a write-through cache, which contains hot files recognized by file access time. Besides making use of the advantages of the SSD, hybridfs provides a flexible internal structure that retains the excellent sequential read performance of existing file systems.

The hybridfs evaluation shows that achieving high I/O performance by combining the advantages of both SSD and HDD devices is possible. The strength of hybridfs is most noticeable when its write performance is compared to the corresponding performance of both ext2 and xfs installed on HDD devices. The write experiment indicates that the mechanical moving overhead of the HDD affects write performance more than the semiconductor overhead of the SSD does. HybridFS frequently produced performance comparable to that of both ext2 and xfs installed on SSD devices. Since building a large-scale storage subsystem using only SSD devices is less desirable due to high cost and limited storage resources, hybridfs can be a good alternative for combining the high I/O performance of SSDs with the vast storage capacity of HDDs. As future work, we will verify the performance of hybridfs using more benchmarks.

Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2014R1A2A2A). Also, this work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. B (2015), Development of Integrated Management Technology for Micro Server Resources).

References

[1] M. Saxena and M. Swift, "FlashVM: Virtual Memory Management on Flash," 2010 USENIX Annual Technical Conference, Boston, MA.
[2] W. K. Josephson, L. A. Bongo, and D. Flynn, "DFS: A File System for Virtualized Flash Storage," In Proceedings of the 8th USENIX Conference on File and Storage Technologies, San Jose, USA.
[3] G. Soundararajan, V. Prabhakaran, M. Balakrishnan, and T. Wobber, "Extending SSD Lifetimes with Disk-Based Write Caches," In Proceedings of the 8th USENIX Conference on File and Storage Technologies, San Jose, USA.
[4] H. Dai, M. Neufeld, and R. Han, "ELF: An Efficient Log-Structured Flash File System for Micro Sensor Nodes," SenSys '04, Baltimore, USA.
[5] E. Gal and S. Toledo, "A Transactional Flash File System for Microcontrollers," In Proceedings of the 2005 USENIX Annual Technical Conference, Anaheim, CA.
[6] D. Woodhouse, "JFFS: The Journaling Flash File System," In Ottawa Linux Symposium.
[7] M. Rosenblum and J. Ousterhout, "The Design and Implementation of a Log-Structured File System," In Proceedings of the 13th ACM Symposium on Operating Systems Principles, 1991.
[8] S. Ames, N. Bobb, K. Greenan, O. Hofmann, M. W. Storer, C. Maltzahn, E. L. Miller, and S. A. Brandt, "LiFS: An Attribute-Rich File System for Storage Class Memories," In Proceedings of the 23rd IEEE/14th NASA Goddard Conference on Mass Storage Systems and Technologies, College Park, USA.
[9] A. Wang, G. Kuenning, P. Reiher, and G. Popek, "Conquest: Better Performance Through a Disk/Persistent-RAM Hybrid File System," In Proceedings of the 2002 USENIX Annual Technical Conference, Monterey, CA.
[10] N. K. Edel, D. Tuteja, E. L. Miller, and S. A. Brandt, "MRAMFS: A Compressing File System for Non-Volatile RAM," In Proceedings of the 12th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Volendam, Netherlands.
[11] E. L. Miller, S. A. Brandt, and D. D. E. Long, "HeRMES: High-Performance Reliable MRAM-Enabled Storage," In Proceedings of the 8th IEEE Workshop on Hot Topics in Operating Systems, Schloss Elmau, Germany, 2001.
[12] Z. Zhang and K. Ghose, "hfs: A Hybrid File System Prototype for Improving Small File and Metadata Performance," EuroSys '07, Lisbon, Portugal.
[13] N. Agrawal, V. Prabhakaran, T. Wobber, J. D. Davis, M. Manasse, and R. Panigrahy, "Design Tradeoffs for SSD Performance," 2008 USENIX Annual Technical Conference.
[14] R. Card, T. Ts'o, and S. Tweedie, "Design and Implementation of the Second Extended Filesystem," In Proceedings of the First Dutch International Symposium on Linux.
[15] A. Sweeney, D. Doucette, W. Hu, C. Anderson, M. Nishimoto, and G. Peck, "Scalability in the XFS File System," In Proceedings of the USENIX 1996 Annual Technical Conference, San Diego, USA.
[16] IOzone filesystem benchmark. Available at: http://www.iozone.org