OSDsim - a Simulation and Design Platform of an Object-based Storage Device
OSDsim - a Simulation and Design Platform of an Object-based Storage Device

WeiYa Xi, Wei-Khing For, DongHong Wang, Renuga Kanagavelu, Wai-Kit Goh
Data Storage Institute, Singapore

Abstract

Modeling and simulation is an effective way to design and evaluate the performance of a network storage system. OSDsim is designed and developed for the simulation of an Object-based Storage Device (OSD) system. An OSD File System (OSDFS) has been designed and developed for OSDsim. It provides high throughput for large object requests while maintaining high disk utilization for small object requests, and it can be configured to suit different workload patterns. The disk module designed and developed for OSDsim makes use of a dynamic disk profile extracted from an actual disk drive through interrogative and empirical methods, so the disk drive simulation is accurate. OSDsim has been validated, and the write error is within 5%.

1. Introduction

OSD defines a new object-based interface for a storage device. This new approach to data storage is currently the subject of intensive research and development, both in industry and in academia. Significant initiatives include the work of the Storage Network Industry Association (SNIA)'s OSD Technical Work Group (TWG) [5] to draft industry standards and that of the ANSI T10/OSD Committee to facilitate interoperable solutions [6]. The Network Attached Secure Disk [2] project at the Parallel Data Lab at Carnegie Mellon University enables direct data transfers between clients and online storage without invoking the file server in common data access operations.

In an OSD system, file system functionality is divided into two logical components: a file manager and a storage manager. The file manager takes care of tasks such as data hierarchy, naming, file-to-object mapping and access control, while the storage manager is in charge of actually retrieving data from the disk drive. The storage manager is located on an Object-based Storage Target (OST) together with the disk drive or other storage media. A new and efficient file system that can provide high throughput while maintaining high disk utilization needs to be designed for an OSD system.

In this paper we present OSDsim, designed and developed specifically for the simulation of an OSD system. The file system it implements provides high throughput for large object requests while still maintaining high disk utilization for small object requests. In the following sections, OSDsim and all of its modules are first introduced. Detailed descriptions are then provided for the sub-modules of the OSDFS storage component and for the disk drive and its parameter extraction. After that comes the experimental work and validation of OSDsim, in which actual experimental data are compared with OSDsim's simulation results. Finally, the results of further simulation and analysis are presented to compare the performance of different region settings for OSDFS.

2. OSDsim

OSDsim comprises four main modules: the Object-based Storage Client (OSC) module, the Object-based Storage Target (OST) module, the Meta-Data Server (MDS) module and the Network module. Figure 1 shows the OSDsim module structure with its sub-modules.

Figure 1. The OSDsim structure and modules
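As a rough illustration of the object interface that separates the file manager from the storage manager, the sketch below shows the kind of command an OSC might pass to an OST. The field and type names are illustrative assumptions only; the actual T10 OSD command format is considerably richer.

```c
/* Minimal sketch of an object-based storage command, as the OSC might
 * hand it to the OST. Field names are illustrative and do NOT reproduce
 * the T10 OSD CDB layout. */
#include <stdint.h>

enum osd_op { OSD_FORMAT, OSD_WRITE, OSD_READ, OSD_DELETE };

struct osd_command {
    enum osd_op op;        /* one of the four commands OSDFS implements */
    uint64_t    object_id; /* unique id assigned by the MDS             */
    uint64_t    offset;    /* byte offset within the object             */
    uint64_t    length;    /* transfer length in bytes                  */
    void       *buffer;    /* data for WRITE, destination for READ      */
};
```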
2.1. OSC Module

The OSC module consists of four main sub-modules: the I/O workload module, the user interface, the Virtual File System (VFS), and the OSDFS user component.

1. I/O workload sub-module: generates synthetic traces for the system simulation. It also provides the interface for taking in external traces.
2. User interface: interprets user commands such as MOUNT, MKDIR, RMDIR, WRITE, DELETE, MV, CP and READ.
3. VFS: implements directory lookup, file lookup and memory management for the system.
4. OSDFS user component: its main functions include converting user interface commands into OSD SCSI commands and keeping the file structure hierarchy.

2.2. MDS Module

There are two main sub-modules in the MDS module: one for file-to-object mapping and the other for meta-data management.

1. File-to-object mapping: currently a one-file-to-one-object mapping has been implemented. A unique object id is assigned to each object.
2. Meta-data management: responsible for managing and retrieving object meta-data. A B-tree is used to store the object meta-data; each node of the B-tree contains the user object id, the group object id, and inode information.

2.3. Network Module

OSDsim is an enhanced and extended version of SANSim [13], which simulates Fibre Channel (FC) in detail down to the frame level. The network module of OSDsim adopts the FC module of SANSim.

2.4. OST Module

The OST module includes two main sub-modules: an OSDFS storage component and the disk drive.

OSDFS Storage Component Sub-module

The OSDFS storage component sub-module, located on top of the disk drive, is responsible for data allocation and retrieval. Four commands have been implemented: FORMAT, DELETE, READ, and WRITE. Figure 2 shows the design structure of this component. The whole disk is divided into equally sized regions, with each region consisting of four parts: a region head, which stores the region's information; a free onode map, which records free onode space; an onode table, which keeps the meta data of each onode; and the data-block area, which holds the actual object data.

Figure 2. Design structure of the OSDFS storage component

There were three main objectives in the design of the OSDFS storage component. First, it should be efficient to search for contiguous free space. Second, it should provide high throughput for large object data. Lastly, it should maintain good disk utilization for small object data. An efficient extent-based region management scheme was adopted; with it, the search time for contiguous free space is shorter than with the bitmap search used by general-purpose file systems. This is one of the reasons why our file system can outperform general-purpose file systems such as ext2 [7] and ext3 [4]. Performance comparisons between the file systems can be found in the companion paper, also submitted to MSST2006, titled "Adaptive Extents-Based File System for Object-Based Storage Devices". Variable-sized blocks are used for data allocation, and object data are grouped according to their sizes: large object data are allocated to big regions and small object data to small regions. OSDFS can therefore provide high throughput for large object data and still maintain high disk utilization when there are many small object data.
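The following sketch illustrates the extent-based free-space search described above. The structure layouts and names (struct region, struct extent, alloc_extent) are illustrative assumptions, not OSDsim's actual code; the point is that a first-fit walk skips a whole free run per step, whereas a bitmap scan must visit every block bit.

```c
/* Sketch of the region layout described above plus a first-fit search
 * over a free-extent list. Names and layouts are illustrative. */
#include <stdint.h>

struct extent {               /* one run of contiguous free blocks     */
    uint64_t start;           /* first free block number               */
    uint64_t count;           /* number of contiguous free blocks      */
    struct extent *next;
};

struct region {
    uint64_t data_unit_kb;    /* block size this region allocates in   */
    uint64_t free_onodes;     /* summary from the free onode map       */
    /* the region head, free onode map and onode table live on disk;
     * only the free-extent list matters for this allocation sketch    */
    struct extent *free_list;
};

/* First-fit: each iteration skips an entire free run, so the search
 * for `need` contiguous blocks is much shorter than a bitmap scan.    */
static int alloc_extent(struct region *r, uint64_t need, uint64_t *start)
{
    for (struct extent *e = r->free_list; e; e = e->next) {
        if (e->count >= need) {
            *start = e->start;
            e->start += need; /* shrink the run from the front; a real */
            e->count -= need; /* allocator would unlink exhausted runs */
            return 0;
        }
    }
    return -1;                /* no contiguous run large enough        */
}
```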
OSDFS also modifies the onode write scheme of a general-purpose file system to improve write performance. In a general-purpose file system, an onode (the on-disk meta data of an object) is normally written right after each object's data. This scheme is inefficient because the onode table and the data blocks lie in different parts of a region. OSDFS instead writes N (N > 1) onodes together after writing N objects' data, so write performance is improved through a reduction in seek distance.
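A minimal sketch of this batched onode write scheme, assuming hypothetical interfaces (write_data_blocks, write_onode_table, osdfs_write_object are illustrative names): object data is written immediately, while onodes are queued and flushed together, so the head travels between the data-block area and the onode table once per N objects rather than once per object.

```c
/* Sketch of batched onode writes: data goes to disk at once, onodes
 * are flushed in groups of N. Interfaces are illustrative. */
#include <stdint.h>

#define ONODE_BATCH 8          /* N > 1; tunable                       */

struct onode { uint64_t object_id, start_block, n_blocks; };

/* hypothetical low-level writers provided by the disk module */
void write_data_blocks(uint64_t start_block, const void *buf, uint64_t n);
void write_onode_table(const struct onode *batch, int n);

static struct onode pending[ONODE_BATCH];
static int n_pending;

void osdfs_write_object(const struct onode *on, const void *data)
{
    /* 1. write the object data immediately */
    write_data_blocks(on->start_block, data, on->n_blocks);

    /* 2. defer the onode: flush the batch only when N are queued
     *    (a real implementation would also flush on sync/unmount)     */
    pending[n_pending++] = *on;
    if (n_pending == ONODE_BATCH) {
        write_onode_table(pending, n_pending); /* one seek, N onodes   */
        n_pending = 0;
    }
}
```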
Read performance is improved by computing the data location of non-fragmented objects instead of retrieving it from disk. Figure 3a shows how the system computes the location of the data blocks for incoming requests on non-fragmented object data. For requests on fragmented objects, Figure 3b shows that the onode's data-location information must first be read from the onode table on disk before the request is directed to the data blocks.

Figure 3. Diagram to illustrate how a read request is processed

Disk Drive Sub-module

OSDsim needs a disk drive module that can simulate disk drive activity accurately. The simulated disk drive must be a SCSI disk drive available on the market so that OSDsim can be validated against an actual physical system. Presently, no such disk drive simulation module is available. Some storage system simulators come with a very detailed disk drive module, for example DiskSim [3], but many of the required disk drive parameters are not publicly available. Furthermore, many simulators use only very simple disk drive models, which affects the accuracy of the simulation results: they compute disk performance based only on average values such as average seek time and rotational delay. This is not good enough for our requirements for OSDsim.

We obtained disk drive parameters using two methods: an interrogative method and an empirical method [12] [11]. We modified SCSIBench [8] to suit OSDsim and were able to obtain parameters such as cache size, zone information, cylinder and track skew, LBN-to-PBN mapping, defect regions and the actual seek curve. This information is gathered and passed to our disk drive module for the disk simulation.

Figure 4 shows the seek profile obtained experimentally for the Seagate SCSI ST318437LW disk drive with a full seek; Figure 5 is the expanded view of the first 5000 cylinders. From Figure 4 we can see four points that fall obviously outside the normal seek profile. For such obviously erroneous points, we use the previous seek time instead.

Figure 4. Seek profile with a full seek
Figure 5. Seek profile for the first 5000 cylinders

The disk drive sub-module consists of four main components: the bus interface, the cache/scheduler, command processing, and a component covering the mechanics, data transfer and data mapping, as shown in Figure 6. As the disk drive module makes use of actual disk parameters, the simulation of the disk gives better accuracy.

Figure 6. Disk drive module
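As a sketch of how the extracted seek curve might drive the mechanics component, the code below holds a per-distance seek-time table, replaces obvious outliers with the previous point's value (as described for Figure 4; the 50% deviation threshold here is our assumption, not the paper's criterion), and answers seek-time queries by table lookup.

```c
/* Sketch: cleaning the measured seek curve and serving seek-time
 * queries from it. Sizes and the outlier rule are assumptions. */
#include <math.h>
#include <stdlib.h>

#define MAX_CYL 18000            /* illustrative cylinder count        */

static double seek_ms[MAX_CYL];  /* measured seek time per distance    */

/* Replace points that jump far off the local trend with the previous
 * point's seek time (assumed criterion: >50% deviation).              */
void clean_seek_profile(double *t, int n)
{
    for (int i = 1; i < n; i++)
        if (fabs(t[i] - t[i - 1]) > 0.5 * t[i - 1])
            t[i] = t[i - 1];
}

/* Seek time is a pure table lookup on the seek distance. */
double seek_time_ms(int from_cyl, int to_cyl)
{
    int d = abs(to_cyl - from_cyl);
    return d == 0 ? 0.0 : seek_ms[d];
}
```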
3. Simulation Validation

3.1. Experiment Set Up

We implemented the OSDFS storage component at the user level on a PC with a 1 GHz Pentium III CPU and 512 MB RAM running Red Hat Linux. The operating system was installed on a 40 GB ATA Maxtor D740X-6L disk drive, and the target performance testing was done on the Seagate SCSI disk drive, model ST318437LW.

3.2. Comparison of Experimental Data and Simulation Data

Both read and write performance were tested on the system for object request sizes ranging from 1 KB to 512 KB. The system was made to write 5000 object requests first and then to read them back. The throughput was recorded and computed, and then compared with the simulation results from OSDsim under the same requests. Figures 7 and 8 show the comparisons between the experimental and simulation results. Figure 7 compares write performance: the biggest error is about 4.7%, at a request size of 64 KB. Figure 8 compares read performance: the largest error, about 12.7%, occurs at a request size of 512 KB.

Figure 7. Write performance
Figure 8. Read performance

4. Simulation and Analysis

4.1. Simulation Environment

Simulation and analysis were also performed with the OSDFS storage component configured with different types of region settings. The disk drive used is the Seagate SCSI disk drive, model ST318437LW. The simulation is closed-loop [9], meaning that a subsequent request is issued only when the previous request has completed its execution.
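A minimal sketch of such a closed-loop driver, with service_time() standing in for the full OST and disk model (an assumed interface): because exactly one request is outstanding at a time, throughput is simply the bytes transferred divided by the summed service times.

```c
/* Sketch of a closed-loop workload driver: request i is issued only
 * after request i-1 has completed. service_time() is an assumed hook
 * into the simulated OST + disk model. */
#include <stdio.h>

double service_time(long req_bytes);   /* hypothetical: latency in ms  */

void run_closed_loop(const long *sizes, int n)
{
    double clock_ms = 0.0, bytes = 0.0;
    for (int i = 0; i < n; i++) {      /* one outstanding request      */
        clock_ms += service_time(sizes[i]);
        bytes    += sizes[i];
    }
    printf("throughput: %.2f MB/s\n", (bytes / 1e6) / (clock_ms / 1e3));
}
```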
The assumptions made for the simulations are as follows:

1. The OSDFS file-to-object mapping is one to one.
2. The system cache is large enough for the boot sections, the region head and the free onode map.
3. Write performance is measured only for the data written to disk in Step 3 of the simulation steps below.
4. Read performance is for the reading of all the data written in Step 3.

The following steps were employed in the simulation procedure:

Step 1: Format the disk.

Step 2: Create an aged file system by performing data writes and deletions. Objects with sizes close to the region's data unit size are written continuously until the disk is full; deletion requests are then issued for every other object. As a result, the free space left on the disk is not contiguous, and the largest contiguous run equals the region's data unit size (a sketch of this aging procedure follows the list of steps).

Step 3: Perform write requests on the disk again. The workload size distribution follows the pattern summarized from an actual trace obtained from twenty HP-UX machines located in laboratories for undergraduate classes [10]. This workload is referred to as the Instructional workload (INS), and its size distribution is shown in Figure 9. Write performance is computed from the write requests performed in this step.

Step 4: Perform read requests that read back all the data written in Step 3.

Figure 9. Workload size distribution
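Below is a sketch of the aging procedure of Step 2 under an assumed OSDFS interface (osdfs_write and osdfs_delete are illustrative names): the disk is filled with data-unit-sized objects, then every other object is deleted, leaving non-contiguous free space.

```c
/* Sketch of Step 2, the file system aging procedure. The OSDFS calls
 * are assumed interfaces, not OSDsim's actual API. */
#include <stdint.h>

int  osdfs_write(uint64_t object_id, uint64_t size); /* <0 when full   */
void osdfs_delete(uint64_t object_id);

uint64_t age_filesystem(uint64_t data_unit_bytes)
{
    uint64_t id = 0;

    /* fill the disk with objects close to the data-unit size */
    while (osdfs_write(id, data_unit_bytes) == 0)
        id++;

    /* delete every other object: the remaining free space is
     * non-contiguous, with runs of at most one data unit       */
    for (uint64_t i = 0; i < id; i += 2)
        osdfs_delete(i);

    return id;                 /* number of objects written     */
}
```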
4.2. Simulation Results and Analysis

The performance of OSDFS was tested for the five types of region settings shown in Table 1.

Table 1. Region settings

    No. of regions    Sizes of data units (KB)
    1                 …
    2                 …, 128
    3                 …, 128, 512
    4                 …, 128, 512, …
    5                 …, 128, 512, 2048, 8196

Figure 10 shows the read and write performance, in terms of throughput, for the various region settings. We can see from Figure 10 that for all region settings the read performance is better than the write performance, and that the larger the number of regions used by OSDFS, the better the performance. Figure 11 shows the overall disk space utilization for the various region settings: the larger the data unit size of a region, the lower the disk space utilization.

Figure 10. Performance analysis for different numbers of regions
Figure 11. Disk space utilization for different region settings

Figure 12 shows the average number of fragmentations per object, computed from the data written in Step 3 of the simulation procedure. With five regions this value is the smallest, while with only one region it is the largest. This is the main reason why the five-region setting performs better than the one-region setting under the INS workload.

Figure 12. Average number of fragmentations per object

5. Summary and Related Work

So far we have not seen much work done on OSD simulation. The only work we are aware of was done at IBM [1], where an IBM Object Storage Device for Linux was developed. That simulator does not design or develop an object file system; it only uses the local file system to store object data. In our work, we have designed and developed an OSD simulator for an OSD system. The OSDFS storage component developed for the simulator provides high throughput for large object requests while, at the same time, maintaining high disk utilization for small object requests. The disk drive module developed for the simulator simulates not only the critical mechanical dynamics but also the cache/buffer, spare regions and other activities. Actual parameters defining the important characteristics are extracted from real disk drives and used in the simulations to keep simulation errors very small. Validation experiments, with OSDFS implemented at the user level on a system with an actual physical disk drive, compared the simulation results with measured performance data from the physical system. They show that OSDsim performs well: the simulated write performance is within 5%, and the read performance within 13%, of the measured data.

References

[1] osdlinux.html.
[2]
[3]
[4]
[5] activities/workgroups/osd/.
[6]
[7] R. Card, T. Ts'o, and S. Tweedie. Design and implementation of the second extended filesystem. In Proceedings of the First Dutch International Symposium on Linux.
[8] Z. Dimitrijević, R. Rangaswami, D. Watson, and A. Acharya. Diskbench: User-level disk feature extraction tool.
[9] G. R. Ganger. System-oriented evaluation of I/O subsystem performance. PhD dissertation.
[10] D. Roselli, J. Lorch, and T. Anderson. A comparison of file system workloads. In Proceedings of the 2000 USENIX Technical Conference, 2000.
[11] N. Talagala, R. Arpaci-Dusseau, and D. Patterson. Microbenchmark-based extraction of local and global disk characteristics. Technical Report CSD, University of California, Berkeley.
[12] B. L. Worthington, G. Ganger, Y. N. Patt, and J. Wilkes. On-line extraction of SCSI disk drive parameters. In Proceedings of ACM SIGMETRICS.
[13] Y.-L. Zhu, C.-Y. Wang, W.-Y. Xi, and F. Zhu. SANSim - a simulation and design platform of storage area network. In Proceedings of the 21st IEEE / 12th NASA Goddard Conference on Mass Storage Systems and Technologies.