UNIX File Management (continued)




OS storage stack (recap)

Application → FD table → OF table → VFS → FS → Buffer cache → Disk scheduler → Device driver

Virtual File System (VFS)

Application → FD table → OF table → [VFS] → FS → Buffer cache → Disk scheduler → Device driver

Older systems had only a single file system, with file-system-specific open, close, read, and write calls. Modern systems, however, need to support many file system types: ISO9660 (CD-ROM), MSDOS (floppy), ext2fs, tmpfs, ...

Supporting Multiple File Systems

Alternatives:
- Change the file system code to understand different file system types. Prone to code bloat, complex; a non-solution.
- Provide a framework that separates file-system-independent and file-system-dependent code. Allows different file systems to be plugged in.

Virtual File System (VFS)

Application → FD table → OF table → VFS → {FS, FS2} → Buffer cache → Disk scheduler → Device driver (one scheduler and driver per underlying device)

Virtual file system (VFS)

Example: open("/home/leonidr/file", ...) where ext3 is mounted at / and NFS at /home/leonidr. Traversing the directory hierarchy may require VFS to issue requests to several underlying file systems.

Virtual File System (VFS)

- Provides a single system call interface for many file systems, e.g. UFS, Ext2, XFS, DOS, ISO9660, ...
- Transparent handling of network file systems, e.g. NFS, AFS, CODA.
- File-based interface to arbitrary device drivers (/dev).
- File-based interface to kernel data structures (/proc).
- Provides an indirection layer for system calls: the file operation table is set up at file open time and points to the actual handling code for the particular type; further file operations are redirected to those functions (see the sketch below).
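A minimal sketch of that indirection layer, using simplified types (OS/161 wraps this dispatch in VOP_* macros; the full structures appear on the following slides):

    /* The FS-independent layer calls through a per-file operations table,
       installed when the file is opened, without knowing which concrete
       file system it is talking to. */
    struct vnode;
    struct uio;                            /* describes the user buffer */

    struct vnode_ops {
        int (*vop_read)(struct vnode *file, struct uio *uio);
        int (*vop_write)(struct vnode *file, struct uio *uio);
        /* ... more operations ... */
    };

    struct vnode {
        const struct vnode_ops *vn_ops;    /* set up at file open time */
        void *vn_data;                     /* FS-specific data, e.g. inode */
    };

    static int vfs_do_read(struct vnode *vn, struct uio *uio)
    {
        return vn->vn_ops->vop_read(vn, uio);   /* redirect to the FS code */
    }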

The file-system-independent code (the file descriptor tables and the open file table) deals with vfs and vnode objects; the file-system-dependent code beneath deals with the concrete FS and its inodes.

VFS Interface

Reference: S.R. Kleiman, "Vnodes: An Architecture for Multiple File System Types in Sun UNIX", USENIX Association: Summer Conference Proceedings, Atlanta, 1986. Linux and OS/161 differ slightly, but the principles are the same.

Two major data types:
- vfs: represents all file system types. Contains pointers to functions that manipulate each file system as a whole (e.g. mount, unmount), forming a standard interface to the file system.
- vnode: represents a file (inode) in the underlying filesystem. Points to the real inode and contains pointers to functions that manipulate files/inodes (e.g. open, close, read, write, ...).

Vfs and Vnode Structures

struct vnode contains:
- Generic (FS-independent) fields: size; uid, gid; ctime, atime, mtime.
- fs_data: pointer to the FS-specific fields, e.g. for ext2 the block group number and data block list.
- vnode ops: pointer to the FS-specific implementation of the vnode operations (ext2fs_read, ext2fs_write, ...).

Vfs and Vnode Structures

struct vfs contains:
- Generic (FS-independent) fields: block size, max file size.
- fs_data: pointer to the FS-specific fields, e.g. for ext2 the i-nodes per group and superblock address.
- vfs ops: pointer to the FS-specific implementation of the FS operations (ext2_unmount, ext2_getroot, ...).

A look at OS/161's VFS

The OS/161 file system type represents the interface to a mounted filesystem:

    struct fs {
        int (*fs_sync)(struct fs *);               /* force the filesystem to
                                                      flush its content to disk */
        const char *(*fs_getvolname)(struct fs *); /* retrieve the volume name */
        struct vnode *(*fs_getroot)(struct fs *);  /* retrieve the vnode for the
                                                      root of the filesystem */
        int (*fs_unmount)(struct fs *);            /* unmount the filesystem */
        void *fs_data;                             /* private file-system-specific
                                                      data */
    };

Note: mount is called via a function pointer passed to vfs_mount.
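For illustration, a concrete filesystem might fill in this table as follows (the myfs_* names are hypothetical, not part of OS/161):

    /* Hypothetical filesystem instantiating the struct fs interface above. */
    static int myfs_sync(struct fs *fs)
    {
        (void)fs;                 /* would write all dirty blocks to disk */
        return 0;
    }
    static const char *myfs_getvolname(struct fs *fs)
    {
        (void)fs;
        return "myfs0";
    }
    static struct vnode *myfs_getroot(struct fs *fs)
    {
        (void)fs;                 /* would look up and return the root vnode */
        return NULL;
    }
    static int myfs_unmount(struct fs *fs)
    {
        (void)fs;                 /* would flush and release resources */
        return 0;
    }

    static struct fs myfs = {
        .fs_sync       = myfs_sync,
        .fs_getvolname = myfs_getvolname,
        .fs_getroot    = myfs_getroot,
        .fs_unmount    = myfs_unmount,
        .fs_data       = NULL,    /* would point at the in-memory superblock */
    };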

Vnode

    struct vnode {
        int vn_refcount;                /* counts references to this vnode */
        struct spinlock vn_countlock;   /* lock for mutually exclusive access
                                           to the counts */
        struct fs *vn_fs;               /* pointer to the FS containing
                                           the vnode */
        void *vn_data;                  /* pointer to FS-specific vnode data
                                           (e.g. the inode) */
        const struct vnode_ops *vn_ops; /* array of pointers to functions
                                           operating on vnodes */
    };

Vnode Ops

    struct vnode_ops {
        unsigned long vop_magic;    /* should always be VOP_MAGIC */

        int (*vop_eachopen)(struct vnode *object, int flags_from_open);
        int (*vop_reclaim)(struct vnode *vnode);

        int (*vop_read)(struct vnode *file, struct uio *uio);
        int (*vop_readlink)(struct vnode *link, struct uio *uio);
        int (*vop_getdirentry)(struct vnode *dir, struct uio *uio);
        int (*vop_write)(struct vnode *file, struct uio *uio);
        int (*vop_ioctl)(struct vnode *object, int op, userptr_t data);
        int (*vop_stat)(struct vnode *object, struct stat *statbuf);
        int (*vop_gettype)(struct vnode *object, int *result);
        int (*vop_isseekable)(struct vnode *object, off_t pos);
        int (*vop_fsync)(struct vnode *object);
        int (*vop_mmap)(struct vnode *file /* add stuff */);
        int (*vop_truncate)(struct vnode *file, off_t len);
        int (*vop_namefile)(struct vnode *file, struct uio *uio);

Vnode Ops (continued)

        int (*vop_creat)(struct vnode *dir, const char *name, int excl,
                         struct vnode **result);
        int (*vop_symlink)(struct vnode *dir, const char *contents,
                           const char *name);
        int (*vop_mkdir)(struct vnode *parentdir, const char *name);
        int (*vop_link)(struct vnode *dir, const char *name,
                        struct vnode *file);
        int (*vop_remove)(struct vnode *dir, const char *name);
        int (*vop_rmdir)(struct vnode *dir, const char *name);
        int (*vop_rename)(struct vnode *vn1, const char *name1,
                          struct vnode *vn2, const char *name2);

        int (*vop_lookup)(struct vnode *dir, char *pathname,
                          struct vnode **result);
        int (*vop_lookparent)(struct vnode *dir, char *pathname,
                              struct vnode **result, char *buf, size_t len);
    };

Vnode Ops

Note that most operations are on vnodes. How do we operate on file names? There is a higher-level API on names that uses the internal VOP_* functions:

    int vfs_open(char *path, int openflags, struct vnode **ret);
    void vfs_close(struct vnode *vn);
    int vfs_readlink(char *path, struct uio *data);
    int vfs_symlink(const char *contents, char *path);
    int vfs_mkdir(char *path);
    int vfs_link(char *oldpath, char *newpath);
    int vfs_remove(char *path);
    int vfs_rmdir(char *path);
    int vfs_rename(char *oldpath, char *newpath);
    int vfs_chdir(char *path);
    int vfs_getcwd(struct uio *buf);
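As an illustration of how a system call layer might sit on top of this name-based API, here is a hedged sketch (fd_alloc() is a made-up helper that installs the vnode in the current process's fd table; error handling is abbreviated):

    struct vnode;
    extern int vfs_open(char *path, int openflags, struct vnode **ret);
    extern void vfs_close(struct vnode *vn);
    extern int fd_alloc(struct vnode *vn);  /* hypothetical helper */
    #define EMFILE 24                       /* "too many open files"
                                               (illustrative value) */

    int sys_open(char *kpath, int openflags, int *retfd)
    {
        struct vnode *vn;
        int fd, err;

        err = vfs_open(kpath, openflags, &vn);  /* resolve name -> vnode */
        if (err)
            return err;

        fd = fd_alloc(vn);                      /* install in the fd table */
        if (fd < 0) {
            vfs_close(vn);                      /* no free slot: undo open */
            return EMFILE;
        }
        *retfd = fd;
        return 0;
    }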

Example: OS/161 emufs vnode ops

    /*
     * Function table for emufs files.
     */
    static const struct vnode_ops emufs_fileops = {
        VOP_MAGIC,           /* mark this a valid vnode ops table */

        emufs_eachopen,
        emufs_reclaim,

        emufs_read,
        NOTDIR,              /* readlink */
        NOTDIR,              /* getdirentry */
        emufs_write,
        emufs_ioctl,
        emufs_stat,
        emufs_file_gettype,
        emufs_tryseek,
        emufs_fsync,
        UNIMP,               /* mmap */
        emufs_truncate,
        NOTDIR,              /* namefile */

        NOTDIR,              /* creat */
        NOTDIR,              /* symlink */
        NOTDIR,              /* mkdir */
        NOTDIR,              /* link */
        NOTDIR,              /* remove */
        NOTDIR,              /* rmdir */
        NOTDIR,              /* rename */

        NOTDIR,              /* lookup */
        NOTDIR,              /* lookparent */
    };

A regular file is not a directory, so all the directory-related operations are stubbed out with NOTDIR; mmap is simply unimplemented (UNIMP).

File Descriptor & Open File Tables

Application → [FD table → OF table] → VFS → FS → Buffer cache → Disk scheduler → Device driver

Motivation

System call interface:

    fd = open("file", ...);
    read(fd, ...); write(fd, ...); lseek(fd, ...);
    close(fd);

VFS interface:

    vnode = vfs_open("file", ...);
    vop_read(vnode, uio); vop_write(vnode, uio);
    vop_close(vnode);

The FD and OF tables bridge the gap between the fd-based system call interface and the vnode-based VFS interface.

File Descriptors

- Each open file has a file descriptor; read/write/lseek/... use it to specify which file to operate on.
- State associated with a file descriptor:
  - File pointer: determines where in the file the next read or write is performed.
  - Mode: was the file opened read-only, etc.

An Option?

Use vnode numbers as file descriptors and add a file pointer to the vnode.

Problem: what happens when we concurrently open the same file twice? We should get two separate file descriptors and two separate file pointers, but with this scheme there is only one file pointer per vnode.

An Option?

A single global open file array: fd is an index into the array, and each entry contains a file pointer (fp) and a pointer to a vnode.

Issues

File descriptor 1 is stdout, but stdout is the console for some processes and a file for others. Entry 1 therefore needs to be different per process!

Per-process File Descriptor Array

Each process has its own open file array; entries contain an fp, v-ptr, etc. Fd 1 can then point to a different vnode in each process (console, log file, ...).

Issue

- fork() is defined so that the child shares the file pointer with the parent.
- dup2() likewise defines that the two file descriptors share the file pointer.
- With a purely per-process table we can only have independent file pointers, even when the same file is accessed. The user-level example below shows the sharing semantics we must support.
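A user-level illustration of those semantics; after dup2(), both descriptors move the same file pointer:

    /* After dup2(), fd and fd2 share one open-file entry, so writes
       through either descriptor advance the same offset. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd  = open("log.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        int fd2 = dup2(fd, 10);   /* fd2 == 10, shares the file pointer */

        write(fd,  "abc", 3);     /* offset advances 0 -> 3 */
        write(fd2, "def", 3);     /* continues at 3, not 0: shared offset */

        close(fd);
        close(fd2);
        return 0;
    }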

Per-Process fd table with global open file table

- The per-process file descriptor array contains pointers to open file table entries.
- The open file table array contains entries with an fp and a pointer to a vnode.
- This provides shared file pointers if required, and independent file pointers if required.

Example: all three fds refer to the same file; two share a file pointer, one has an independent file pointer. A sketch of the data structures follows.
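Illustrative data structures for the two-level scheme (the names here are made up; Linux's real equivalents are struct file and struct files_struct):

    #include <sys/types.h>

    #define FD_MAX 64                     /* illustrative table size */

    struct vnode;                         /* as on the earlier slides */

    struct openfile {                     /* entry in the global OF table */
        off_t of_offset;                  /* the (possibly shared) file pointer */
        int of_refcount;                  /* fd slots pointing at this entry */
        struct vnode *of_vnode;           /* the underlying file */
    };

    struct fdtable {                      /* one per process */
        struct openfile *fds[FD_MAX];     /* indexed by file descriptor */
    };

Two fds share a file pointer when their slots point to the same struct openfile; they have independent pointers when each points to its own entry for the same vnode.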

Per-Process fd table with global open file table

This arrangement is used by Linux and most other Unix operating systems.

Buffer Cache

Application → FD table → OF table → VFS → FS → [Buffer cache] → Disk scheduler → Device driver

Buffer

A buffer is temporary storage used when transferring data between two entities, especially when the entities work at different rates, or when the unit of transfer is incompatible. Example: between an application program and the disk.

Buffering Disk Blocks

Buffers in kernel RAM allow applications to work with arbitrarily sized regions of a file (though apps can still optimise for a particular block size). Arbitrarily sized regions are transferred between the application and the kernel buffers; whole blocks are transferred between the buffers and the disk.

Buffering Disk Blocks

Writes can return immediately after copying to a kernel buffer. This avoids waiting until the write to disk is complete; the write is scheduled in the background.
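From user level, this write-behind is visible in that write(2) returns once the data has been copied into a kernel buffer; fsync(2) can be used to force it to disk:

    /* write() returns after the copy into the kernel buffer; fsync()
       blocks until the buffered data has reached the disk. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096] = { 0 };
        int fd = open("data.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        write(fd, buf, sizeof buf);   /* returns immediately after copy */
        fsync(fd);                    /* forces the write to complete on disk */
        close(fd);
        return 0;
    }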

Buffering Disk Blocks

Read-ahead can be implemented by pre-loading the next block on disk into a kernel buffer, avoiding the wait when the next read is issued.

Cache

A cache is fast storage used to temporarily hold data to speed up repeated access to it. Example: main memory can cache disk blocks.

Caching Disk Blocks

On access, before loading a block from disk, check whether it is in the cache first. This avoids disk accesses, and can optimise for repeated access by a single process or by several processes.

Buffering and caching are related

- Data is read into a buffer; an extra independent cache copy would be wasteful.
- After use, the block should be cached: a future access may hit the cached copy.
- The cache utilises unused kernel memory space, and may have to shrink depending on memory demand.

Unix Buffer Cache

On read:
- Hash the device# and block#.
- Check for a match in the buffer cache:
  - Yes: simply use the in-memory copy.
  - No: follow the collision chain; if the block is still not found, load it from disk into the cache.

A sketch of this lookup appears below.
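A sketch of the classic lookup (illustrative types and names; historic Unix kernels call this getblk()):

    #include <stddef.h>

    #define NHASH 64

    struct buf {
        unsigned dev;                 /* device number */
        unsigned long blkno;          /* block number on that device */
        struct buf *hashnext;         /* collision chain */
        char *data;                   /* cached block contents */
    };

    static struct buf *hashtab[NHASH];

    static unsigned bufhash(unsigned dev, unsigned long blkno)
    {
        return (dev ^ (unsigned)blkno) % NHASH;
    }

    /* Return the cached buffer, or NULL so the caller loads the block
       from disk and inserts it into the chain. */
    struct buf *buf_lookup(unsigned dev, unsigned long blkno)
    {
        struct buf *bp;
        for (bp = hashtab[bufhash(dev, blkno)]; bp != NULL; bp = bp->hashnext)
            if (bp->dev == dev && bp->blkno == blkno)
                return bp;            /* hit: use the in-memory copy */
        return NULL;                  /* miss */
    }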

Replacement

What happens when the buffer cache is full and we need to read another block into memory? We must choose an existing entry to replace, which requires a policy to pick a victim: First-In First-Out, Least Recently Used, or others. (Timestamps, or an access-ordered list as sketched below, are required to implement LRU.) However, is strict LRU what we want?
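A minimal LRU bookkeeping sketch under assumed structures (real kernels hang these links off the buffer header, alongside the hash chain above):

    #include <stddef.h>

    struct lbuf {
        struct lbuf *prev, *next;     /* access-ordered doubly linked list */
    };

    static struct lbuf lru = { &lru, &lru };   /* circular sentinel head */

    static void lru_remove(struct lbuf *b)
    {
        b->prev->next = b->next;
        b->next->prev = b->prev;
    }

    /* On each access, move the buffer (already on the list) to the
       front: it becomes the most recently used. */
    static void lru_touch(struct lbuf *b)
    {
        lru_remove(b);
        b->next = lru.next;
        b->prev = &lru;
        lru.next->prev = b;
        lru.next = b;
    }

    /* The victim is the tail: the least recently used buffer. */
    static struct lbuf *lru_victim(void)
    {
        return (lru.prev == &lru) ? NULL : lru.prev;
    }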

File System Consistency

File data is expected to survive crashes and reboots. Strict LRU could keep frequently used critical data in memory forever, so a modified block might never reach the disk, and a crash would then lose it.

File System Consistency

Generally, cached disk blocks are prioritised in terms of how critical they are to file system consistency:
- Directory blocks and inode blocks, if lost, can corrupt the entire filesystem (e.g. imagine losing the root directory). These blocks are usually scheduled for immediate write to disk.
- Data blocks, if lost, corrupt only the file they are associated with. These blocks are only scheduled for write-back to disk periodically; in UNIX, flushd (the flush daemon) flushes all modified blocks to disk every 30 seconds.

File System Consistency

Alternatively, use a write-through cache: all modified blocks are written immediately to disk. This generates much more disk traffic (temporary files are written back; multiple updates are not combined). Used by DOS, where it gave acceptable consistency when floppies were removed from drives and users were constantly resetting (or crashing) their machines. Still used, e.g. for USB storage devices.

Disk scheduler

Application → FD table → OF table → VFS → FS → Buffer cache → [Disk scheduler] → Device driver

Disk Management

Management and ordering of disk access requests is important:
- There is a huge speed gap between memory and disk.
- Disk throughput is extremely sensitive to request order (disk scheduling) and to the placement of data on the disk (file system design).
- The disk scheduler must be aware of disk geometry.

Disk Geometry

Physical geometry of a disk with two zones: outer tracks can store more sectors than inner tracks without exceeding the maximum information density. A possible virtual geometry can be presented for this disk.

Evolution of Disk Hardware

Disk parameters for the original IBM PC floppy disk and a Western Digital WD 18300 hard disk.

Things to Note

- Average seek time is approximately 12 times better.
- Rotation time is 24 times faster.
- Transfer time is 1300 times faster; most of this gain is due to the increase in density.
- Overall, this represents gradual engineering improvement.

Storage capacity is 50,000 times greater.

Estimating Access Time
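A hedged summary of the standard decomposition behind such an estimate (stated from first principles, since the slide's worked numbers are not available):

    T_{\mathrm{access}} = T_{\mathrm{seek}} + T_{\mathrm{rot}} + T_{\mathrm{transfer}}

    T_{\mathrm{rot}} \approx \frac{1}{2} \cdot \frac{60}{\mathrm{rpm}}
        \quad \text{(on average, half a revolution)}

    T_{\mathrm{transfer}} = \frac{b}{r}
        \quad \text{($b$ bytes at a sustained rate of $r$ bytes/second)}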

A Timing Comparison

Disk Performance is Entirely Dominated by Seek and Rotational Delays

- This will only get worse, as capacity increases much faster than seek time and rotation speed improve.
- Note that it has been easier to spin the disk faster than to improve seek time.
- The operating system should minimise mechanical delays as much as possible.

Chart (average access time scaled to 100% for the two disks above): floppy: seek 77 ms, rotational delay 100 ms, transfer 22 ms; WD 18300: seek 6.9 ms, rotational delay 4.165 ms, transfer 0.017 ms.

Disk Arm Scheduling Algorithms

The time required to read or write a disk block is determined by three factors:
1. Seek time
2. Rotational delay
3. Actual transfer time

Seek time dominates. For a single disk there will be a number of outstanding I/O requests; processing them in random order gives the worst possible performance.

First-In, First-Out (FIFO)

- Process requests as they come.
- Fair (no starvation).
- Good for a few processes with clustered requests; deteriorates to random order when there are many processes.

Shortest Seek Time First (SSTF)

- Select the request that minimises the seek time.
- Generally performs much better than FIFO.
- May lead to starvation: a request far from the head can be postponed indefinitely while closer requests keep arriving.

Elevator Algorithm (SCAN)

- Move the head in one direction, servicing requests in track order, until the last track is reached; then reverse direction.
- Better than FIFO, usually worse than SSTF.
- Avoids starvation.
- Makes poor use of sequential reads (on the down-scan).
- Inner tracks are serviced more frequently than outer tracks.

Modified Elevator (Circular SCAN, C-SCAN)

- Like the elevator algorithm, but services requests in only one direction; on reaching the last track, seek back to the first track non-stop. (Note: seeking across the disk in one movement is faster than stopping along the way.)
- Better locality on sequential reads.
- Better use of the read-ahead cache on the controller.
- Reduces the maximum delay to read a particular sector.

A small illustration of the resulting service order follows.
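A toy computation of C-SCAN service order (the request queue and head position are made-up example values):

    /* C-SCAN ordering: service pending tracks >= the head position in
       ascending order, then wrap to the lowest pending track. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    static void cscan(int head, int *req, int n)
    {
        int i, start = 0;
        qsort(req, n, sizeof req[0], cmp);
        while (start < n && req[start] < head)
            start++;                               /* first track >= head */
        for (i = start; i < n; i++)
            printf("%d ", req[i]);                 /* up-sweep */
        for (i = 0; i < start; i++)
            printf("%d ", req[i]);                 /* wrap, continue upward */
        printf("\n");
    }

    int main(void)
    {
        int req[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
        cscan(53, req, 8);    /* prints: 65 67 98 122 124 183 14 37 */
        return 0;
    }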