Memory Management, CS 447. Prof. R.K. Joshi, Dept of CSE, IIT Bombay


Some Simple Memory schemes

Overlays: User level memory management (e.g. TurboPascal)

When is Address Binding performed? Compile Time (Absolute Addressing) Loading Time (Relocatable code located at load time) Execution Time (Code relocatable throughout execution)

Linking: static linking, dynamic linking. Implications for memory?

Logical vs physical address space

Memory Allocation: contiguity and chunk size. Contiguous vs non-contiguous; fixed partition size vs variable partition size.

Contiguous allocation, fixed partitions (diagram)

Allocation policies (diagram)

Contiguous Allocation, Variable Partitions (diagram)

Allocation Policies (diagram)

Fragmentation: External - occurs in variable partitions; Internal - occurs in fixed partitions.

Compaction (diagram)

Non-contiguous allocation, fixed partitions: Paging (diagram)

Translation lookaside buffer (TLB) (diagram)

TLB performance: hit ratio 80% (probability of a TLB hit = 0.8); TLB access time: 20 ns; memory access time: 100 ns. Calculate the effective access time.
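A worked answer (assuming a single-level page table, so a TLB miss costs one extra memory reference to read the page-table entry): effective access time = 0.8 × (20 + 100) + 0.2 × (20 + 100 + 100) = 96 + 44 = 140 ns.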

Large address spaces: What about large address spaces? 32-bit address space, page size 4 KB: how many entries in the page table? Size of the PT?
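A worked answer, assuming 4-byte page-table entries: 2^32 / 2^12 = 2^20 (about 1 million) pages, so the page table has 2^20 entries; at 4 bytes per entry that is 4 MB of page table per process.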

Multilevel Paging (diagram)

Inverted Page Table (diagram)

Inverted Page Table: Does it need more information? Tuple stored: pid, page no, frame no. Hash on: (pid, page no). For a 64-bit address space with 4 KB pages, ordinary PT size = ?? Inverted PT size for 1 GB RAM with 4 KB pages/frames = ??
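A worked answer: a conventional page table for a 64-bit space with 4 KB pages would need 2^64 / 2^12 = 2^52 entries, which is clearly infeasible. An inverted page table has one entry per physical frame, so for 1 GB of RAM with 4 KB frames it needs 2^30 / 2^12 = 2^18 = 262,144 entries; assuming (for illustration) 16 bytes per entry, that is 4 MB in total, independent of the number of processes.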

Non-contiguous allocation and variable partition sizes: Segments (diagram)

STBR (segment table base register) (diagram)

STLR (segment table length register) (diagram)

Paged segmentation vs segmented paging Page the segment table Page every segment

Page the segments (diagram)

Page the segment table (diagram)

Protection against invalid addresses: PTLR; STLR / segment limit; valid/invalid bit for swapped pages.

Swapping (diagram)

Demand paging: bring in a page only when it is needed. Pure demand paging: initially nothing is loaded. What is the minimum number of pages that need to be loaded to complete an instruction?

Example instruction: ADD A, B, C (i.e. C = A + B). Steps: fetch the ADD opcode and decode it; fetch A; fetch B; fetch C; fetch *A; fetch *B; add; store the sum at C. On a page fault at any step, the entire instruction is restarted.

An MVC (move characters) instruction: the block being moved can span pages for both source and destination, so a page fault part-way through a move (especially with overlapping regions) makes a simple restart of the instruction problematic.

Two solutions: save the context and restore it; or check initially whether all needed addresses are in physical memory.

Effective access time: ma = memory access time, p = probability of a page fault. Effective access time = p × (page-fault service time) + (1 - p) × ma. The fault service time includes: trap to the OS; save registers; determine that the interrupt is a page fault; check that the reference is legal and determine the disk address; issue the disk read; wait for the device (another process may be scheduled meanwhile); begin the transfer and handle the disk I/O completion; correct the page table; restore registers and restart the process; access physical memory.
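An illustrative calculation with assumed numbers: take ma = 100 ns and a fault service time of 8 ms. Then effective access time = (1 - p) × 100 + p × 8,000,000 ns; to keep the slowdown below 10% (under 110 ns) we need p < 10 / 7,999,900, roughly 1.25 × 10^-6, i.e. fewer than about one page fault per 800,000 memory accesses.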

Page replacement algorithms: the goal is the lowest page-fault rate. E.g. the address trace 0100, 0432, 0101, 0612, 0102, 0103, 0103, 0611, 0412 (with 100-byte pages) gives the reference string 1, 4, 1, 6, 1, 1, 1, 6, 4. Page faults with 1 frame? With 3 frames?

Example: reference string 7 0 1 2 0 3 0 4 2 3 0 3 1 2 2 0 1 7 0 1, FIFO replacement. Faults with 3 frames = ?? Faults with 4 frames = ??

Another example: reference string 1 2 3 4 1 2 5 1 2 3 4 5, FIFO replacement. #Faults with 3 frames = ? #Faults with 4 frames = ? Belady's anomaly.

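A minimal simulation sketch (not from the slides; the function names are illustrative) that counts FIFO page faults for a reference string. Run on the string above it shows Belady's anomaly (9 faults with 3 frames, 10 with 4), and it can also be used to check the previous example.

```c
#include <stdio.h>

/* Count page faults for FIFO replacement with a given number of frames. */
static int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16];                 /* resident pages (nframes <= 16 assumed) */
    int next = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < nframes) frames[used++] = refs[i];
            else { frames[next] = refs[i]; next = (next + 1) % nframes; }
        }
    }
    return faults;
}

int main(void) {
    int belady[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof belady / sizeof belady[0];
    printf("3 frames: %d faults\n", fifo_faults(belady, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(belady, n, 4));  /* 10 */
    return 0;
}
```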

Cause of Belady's anomaly: is the set of pages in memory with n frames always a subset of the set of pages in memory with n+1 frames? What about for LRU? For FIFO?

Global vs local page replacement: select a victim from all processes, or select only from the faulting process/user itself.

OPT Replace page that will not be used for longest period of time -> requires knowledge of future Approximation: Least recently used

Implementation of LRU: time stamps; or a stack (pull out a page when it is referenced and push it on top; the page at the bottom is the LRU page). Approximation: reference-register schemes.

Additional-reference-bits algorithm: an 8-bit register per page plus 1 reference bit; the reference bit is set when the page is accessed. Every 100 ms, for all pages, shift the register right by one and move the reference bit into the most significant bit. Smallest number: LRU page; largest number: MRU page.
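A sketch of the bookkeeping in C, assuming a hypothetical per-page structure (the 8-bit register and the 100 ms period come from the slide):

```c
#include <stdint.h>

/* Hypothetical per-page state for the additional-reference-bits algorithm. */
struct page {
    uint8_t history;   /* 8-bit shift register of past reference bits */
    uint8_t ref;       /* hardware-set reference bit (1 if accessed)  */
};

/* Called every 100 ms for every resident page: shift the history right
 * and move the current reference bit into the most significant bit.   */
void age_page(struct page *p) {
    p->history = (uint8_t)((p->history >> 1) | (p->ref << 7));
    p->ref = 0;
}

/* The page with the smallest history value is the (approximate) LRU victim. */
int pick_victim(struct page *pages, int n) {
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (pages[i].history < pages[victim].history)
            victim = i;
    return victim;
}
```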

If you had only 1 bit?

Second-chance algorithm: when searching for a victim (in FIFO order): if the reference bit is 1, give the page a chance and turn the bit to 0; if the reference bit is 0, replace the page.
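A sketch of the victim search in C (the array-of-reference-bits representation and the function name are assumptions for illustration):

```c
/* Second-chance (clock) selection over a circular list of frames.
 * 'ref' holds the per-frame reference bits; 'hand' is the FIFO/clock pointer. */
int second_chance_victim(unsigned char ref[], int nframes, int *hand) {
    for (;;) {
        if (ref[*hand] == 0) {            /* not recently used: replace it   */
            int victim = *hand;
            *hand = (*hand + 1) % nframes;
            return victim;
        }
        ref[*hand] = 0;                   /* recently used: give it a chance */
        *hand = (*hand + 1) % nframes;
    }
}
```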

If you had 2 bits? A reference bit: set if page is referenced A modify bit: set if page is modified

Enhanced second-chance algorithm: 4 possibilities for the (reference bit, modify bit) pair: (0,0), (0,1), (1,0), (1,1).

(Ref bit, Modify bit) classes:
(0, 0): neither recently used nor modified - best page to replace.
(0, 1): not recently used but dirty - not as good to replace, since the page has to be written back.
(1, 0): recently used but clean - probably will be used again soon.
(1, 1): recently used and modified.
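These four classes order themselves naturally as a 2-bit number; a sketch of the classification used to pick a victim from the lowest non-empty class (the function name is hypothetical):

```c
/* Enhanced second chance: lower class number = better replacement victim.
 * class 0: (0,0)   class 1: (0,1)   class 2: (1,0)   class 3: (1,1)     */
int victim_class(int ref, int modified) {
    return (ref << 1) | modified;
}
```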

Example (reference bit, modify bit). Initially: Pg1: 0 0, Pg2: 0 0, Pg3: 0 0, Pg4: 0 0. After read p1, write p2, write p3: Pg1: 1 0, Pg2: 1 1, Pg3: 1 1, Pg4: 0 0. Now if 2 pages need to be replaced, which will be your victims?

Counting based algorithms Count of no. of references to each page (costly implementation) Least frequently used Most frequently used

Thrashing: a process may have more than the minimum number of pages required, yet it page-faults again and again because its active pages keep getting replaced. Early systems: the OS detected low CPU utilization and introduced new batch processes, which resulted in even lower CPU utilization! The effect of thrashing can be localized by using local page replacement instead of global page replacement.

Thrashing: the process spends more time paging than executing. (Graph: CPU utilization falls off in the thrashing zone as the degree of multiprogramming keeps increasing.)

Locality of Reference: allocate as many frames to a process as its current locality requires (e.g. a function's code and a global data structure). (Graph: page addresses referenced vs. time.)

The working set model for page replacement Define working set as set of pages in the most recent d references Working set is an approximation for locality

An example with d = 10. Reference string: 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4. At t1 (after the first 10 references) WS(t1) = {1, 2, 5, 6, 7}; at t2 (at the end of the string) WS(t2) = {3, 4}.
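A small sketch in C (illustrative names) that computes the working set over the last d references of the string above:

```c
#include <stdio.h>

/* Compute the working set at time t (0-based index of the last reference),
 * i.e. the set of distinct pages among the most recent 'd' references.   */
int working_set(const int *refs, int t, int d, int ws_out[]) {
    int size = 0;
    int start = (t + 1 >= d) ? t + 1 - d : 0;    /* window of d references */
    for (int i = start; i <= t; i++) {
        int seen = 0;
        for (int j = 0; j < size; j++)
            if (ws_out[j] == refs[i]) { seen = 1; break; }
        if (!seen) ws_out[size++] = refs[i];
    }
    return size;                                  /* working-set size WSS  */
}

int main(void) {
    int refs[] = {2,6,1,5,7,7,7,7,5,1, 6,2,3,4,1,2,3,4,4,4, 3,4,3,4,4,4};
    int ws[32];
    int n = working_set(refs, 9, 10, ws);         /* t1: after 10 refs     */
    printf("WS(t1), size %d:", n);                /* {2,6,1,5,7}           */
    for (int i = 0; i < n; i++) printf(" %d", ws[i]);
    printf("\n");
    n = working_set(refs, 25, 10, ws);            /* t2: the last reference */
    printf("WS(t2), size %d:", n);                /* {3,4}                 */
    for (int i = 0; i < n; i++) printf(" %d", ws[i]);
    printf("\n");
    return 0;
}
```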

What should be the value of d? If d is too small, it may not cover the current locality; if it is too large, it overlaps several localities. D = sum over all processes i of WSS_i = total demand for frames. If D > m (the number of available frames), thrashing will occur; the OS then selects a process and suspends it (swaps it out).

Page-fault rate for a process, used to control page replacement and frame allocation. (Graph: page-fault rate vs. number of allocated frames.) If the fault rate is too high, allocate more frames; if it is too low, decrease the number of allocated frames.

Preallocation and allocation control. Preallocation: start with a minimum number of frames per process. Imposing limits: the administrator imposes an upper bound on pages per process / per user.

Kernel memory allocator: must handle both small and large chunks, so the page-level allocator alone is not suitable (e.g. pathname translation, interrupt handlers, zombie exit status, proc structures, file descriptor blocks). The page-level allocator may preallocate pages to the kernel, and the kernel can use them efficiently with an allocator layered on top of the page(s). Kernel memory may be statically allocated, or the kernel may ask the page allocator for more pages when needed. A KMA shouldn't suffer much from fragmentation and should also be fast.

Resource Map Allocator: a set of <base, size> pairs describing free memory areas. Initially: <0, 1024>. Call rmalloc(256), then rmalloc(320): the free map becomes <576, 448> (blocks of 256 and 320 bytes allocated at bases 0 and 256). Call rmfree(256, 128): the free map becomes <256, 128>, <576, 448>.

Resource Map Allocator: after many memory operations there may be many free blocks of varying sizes. Unix typically uses first fit to find enough memory (a linear search through the list). As the size of the resource map increases, fragmentation increases. To coalesce adjacent regions, the map must be kept in sorted order. Used in 4.3 BSD, and for System V IPC in some cases.
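A toy first-fit resource map in C, kept sorted by base and coalescing on free. This is a sketch, not the actual Unix code; only the rmalloc/rmfree names come from the slides, and the initial <0, 1024> map matches the example above.

```c
#include <stddef.h>

/* A toy resource map: <base, size> pairs kept sorted by base address. */
#define MAPSIZE 64
struct mapent { size_t base, size; };
static struct mapent map[MAPSIZE] = { {0, 1024} };   /* one free area initially */
static int nents = 1;

/* First fit: return the base of a free area of at least 'size' bytes, or -1. */
long rmalloc(size_t size) {
    for (int i = 0; i < nents; i++) {
        if (map[i].size >= size) {
            long base = (long)map[i].base;
            map[i].base += size;
            map[i].size -= size;
            if (map[i].size == 0) {                  /* drop an empty entry */
                for (int j = i; j < nents - 1; j++) map[j] = map[j + 1];
                nents--;
            }
            return base;
        }
    }
    return -1;
}

/* Return a region to the map, coalescing with neighbours when adjacent. */
void rmfree(size_t base, size_t size) {
    int i = 0;
    while (i < nents && map[i].base < base) i++;     /* keep sorted by base */
    if (i > 0 && map[i - 1].base + map[i - 1].size == base) {
        map[i - 1].size += size;                     /* merge with previous */
        if (i < nents && map[i - 1].base + map[i - 1].size == map[i].base) {
            map[i - 1].size += map[i].size;          /* and with the next   */
            for (int j = i; j < nents - 1; j++) map[j] = map[j + 1];
            nents--;
        }
        return;
    }
    if (i < nents && base + size == map[i].base) {   /* merge with the next */
        map[i].base = base;
        map[i].size += size;
        return;
    }
    /* otherwise insert a new entry (assume the map does not overflow) */
    for (int j = nents; j > i; j--) map[j] = map[j - 1];
    map[i].base = base;
    map[i].size = size;
    nents++;
}
```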

Simple power-of-two free lists: a separate list stores buffers of 2^k bytes (lists for 32, 64, 128, 256, 512, 1024 bytes). In the free lists, each block stores a pointer to the next block (4 bytes); an allocated block points to its free list. So a 32-byte block can satisfy requests of 0-28 bytes, and a 64-byte block satisfies 29-60 bytes.

Simple power-of-two free lists: avoids a lengthy (linear) search. Internal fragmentation: e.g. a request for 512 bytes uses a 1024-byte buffer (512 bytes plus the header no longer fits in 512). No coalescing, as the sizes are fixed.
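A sketch of the scheme in C (p2_alloc/p2_free and the malloc fallback are illustrative assumptions; a real kernel allocator would take fresh buffers from the page-level allocator, and the header here is a full pointer rather than 4 bytes):

```c
#include <stdlib.h>

/* Simple power-of-two free lists for sizes 32..1024.  The first word of a
 * buffer is the next free buffer when on a free list, or a pointer to the
 * owning free-list head when the buffer is allocated.                     */
#define NLISTS 6                      /* 32, 64, 128, 256, 512, 1024 */
static void *freelist[NLISTS];

static int list_index(size_t n) {     /* smallest 2^k that fits n + header */
    size_t size = 32;
    for (int i = 0; i < NLISTS; i++, size <<= 1)
        if (n + sizeof(void *) <= size) return i;
    return -1;                        /* too large for these lists */
}

void *p2_alloc(size_t n) {
    int i = list_index(n);
    if (i < 0) return NULL;
    void *buf = freelist[i];
    if (buf == NULL)                  /* empty list: get a new buffer (a real
                                         KMA would carve it out of pages)   */
        buf = malloc((size_t)32 << i);
    else
        freelist[i] = *(void **)buf;  /* unlink from the free list          */
    *(void **)buf = &freelist[i];     /* allocated block points to its list */
    return (char *)buf + sizeof(void *);
}

void p2_free(void *p) {
    void *buf = (char *)p - sizeof(void *);
    void **list = *(void **)buf;      /* which list does it belong to? */
    *(void **)buf = *list;            /* push back on that free list   */
    *list = buf;
}
```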

McKusick-Karels allocator: improvements over the power-of-two allocator. Used in 4.4 BSD and DIGITAL Unix. No wastage when the requested memory is exactly a power of two.

McKusick-Karels allocator (diagram): free-list headers for sizes 32, 64, 128, 256, 512, 1024; a run of pages 0-11; and a kmemsize[] array with one entry per page (e.g. 32, 512, 64, F, 32, 128, F, 32, 32, 512, F, ...), where F marks a free page.

MK allocator: in kmemsize[], if a page is free the entry indicates the next free page; otherwise it contains the block size used to partition that page. Allocated blocks do not contain the address of the free list to which they are to be returned: we know which page an allocated block belongs to from the most significant bits of its address (the page number), and that page number is used to look up the block size in the kmemsize[] array.

Buddy Allocator Create small buffers by repeatedly halving a large buffer Coalescing of adjacent buffers whenever possible Each half is a buddy of the other

Buddy Allocator (diagram): a 1024-byte area (addresses 0-1023) with an allocation bitmap, free-list headers for 32, 64, 128, 256, 512, and a sample layout of blocks B (256), C (128), D (64), D' (64), A (512).

Buddy Allocator: start with a single block A of 1024 bytes. Allocate 256: split A into A' and A'' (512 bytes each); split A' into B and B' (256 each); return B. Free blocks now: B' (256), A'' (512).

Now allocate 128: split B' into C and C' (128 each); allocate C. Free blocks: C' (128), A'' (512).

Now allocate 64: split C' into D and D' (64 each); allocate D. Free blocks: D' (64), A'' (512).

Now allocate 128: split A'' into E and E' (256 each); split E into F and F' (128 each); allocate F. Free blocks: D' (64), F' (128), E' (256).

Now release (C, 128): C's buddy C' is still split (D remains allocated), so no coalescing happens. Free blocks: C (128), D' (64), F' (128), E' (256).

Now release (D, 64): D coalesces with its free buddy D' into C' (128), which coalesces with the free C into B' (256); B' cannot merge further because its buddy B is still allocated. Free blocks: B' (256), F' (128), E' (256).
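A toy buddy allocator in C that replays the sequence above over a 1024-byte region with a 32-byte minimum block (all names are illustrative; a real allocator would manage actual memory rather than byte offsets):

```c
#include <stdio.h>

/* A toy buddy allocator over a 1024-byte region with a 32-byte minimum block.
 * Blocks are identified by their byte offset; the buddy of a block of size s
 * at offset off lies at off ^ s.  A sketch for illustration only.            */
#define MIN_SHIFT 5                 /* 32-byte minimum block */
#define MAX_SHIFT 10                /* 1024-byte region      */
#define NORDERS   (MAX_SHIFT - MIN_SHIFT + 1)

static int freelist[NORDERS][32];   /* offsets of free blocks, per order */
static int nfree[NORDERS];

static int order_of(int size) {     /* order of the smallest block >= size */
    int o = 0;
    while ((32 << o) < size) o++;
    return o;
}

void buddy_init(void) {             /* one free 1024-byte block at offset 0 */
    for (int o = 0; o < NORDERS; o++) nfree[o] = 0;
    freelist[NORDERS - 1][nfree[NORDERS - 1]++] = 0;
}

int buddy_alloc(int size) {         /* returns an offset, or -1 */
    int o = order_of(size);
    int k = o;
    while (k < NORDERS && nfree[k] == 0) k++;      /* find a big enough block */
    if (k == NORDERS) return -1;
    int off = freelist[k][--nfree[k]];
    while (k > o) {                                /* split down to order o   */
        k--;
        int half = 32 << k;
        freelist[k][nfree[k]++] = off + half;      /* keep the buddy free     */
    }
    return off;
}

void buddy_free(int off, int size) {
    int o = order_of(size);
    while (o < NORDERS - 1) {
        int buddy = off ^ (32 << o);               /* the buddy's offset      */
        int found = -1;
        for (int i = 0; i < nfree[o]; i++)
            if (freelist[o][i] == buddy) { found = i; break; }
        if (found < 0) break;                      /* buddy not free: stop    */
        freelist[o][found] = freelist[o][--nfree[o]];   /* remove the buddy   */
        if (buddy < off) off = buddy;              /* coalesce into one block */
        o++;
    }
    freelist[o][nfree[o]++] = off;
}

int main(void) {                    /* replays the slides' sequence */
    buddy_init();
    int B = buddy_alloc(256);       /* 1024 -> 512+512 -> 256+256   */
    int C = buddy_alloc(128);
    int D = buddy_alloc(64);
    int F = buddy_alloc(128);
    buddy_free(C, 128);
    buddy_free(D, 64);              /* coalesces back up to a 256-byte block */
    printf("B=%d C=%d D=%d F=%d\n", B, C, D, F);
    return 0;
}
```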