Chapter 12. Paging and Virtual Memory Systems





Paging & Virtual Memory

Virtual memory - giving the illusion of more physical memory than there really is (via demand paging).
Pure paging - the entire program is kept in memory as a set of (non-contiguous) pages; no illusion of virtual memory.
Demand paging - only a program's working set is kept in memory; a reference outside the working set causes the corresponding page to be retrieved from disk (a "page fault"); this provides the illusion of virtual memory.

Paging Systems

Processes (programs) are divided into fixed-size pieces called pages.
Main memory is divided into fixed-size partitions called blocks (page frames).
Pure paging - the entire program is kept in memory during execution, but its pages are not kept in contiguous blocks.
Demand paging - only parts of the program are kept in memory during execution, and pages are not kept in contiguous blocks.

Virtual Versus Physical Addresses

A virtual address is represented as <page, offset>, where the page is determined by dividing each process into fixed-size pages and the offset is a number in the range 0 to (page_size - 1).
Memory is divided into fixed-size blocks (page frames) that accommodate a process's pages.
The physical address (PA) is then (block_number * page_size) + offset.
In pure paging systems the entire VA space of a process must reside in physical memory during execution, but its pages are not kept in contiguous blocks.

Pure Paging Virtual Addresses

The VA is determined from the compiled address and has two components: a page number and an address within the page (the offset, or displacement).

Virtual Address to Physical Address Mapping

[Figure: a program divided into pages 0-5 of size p is mapped onto real storage divided into page frames of size p; page frame i covers physical addresses i*p through (i+1)*p - 1.]

Key Idea in Paging Systems

[Figure: the CPU issues a compiled address; an address translator (hardware + OS) determines the VA and maps it to a PA, which goes to memory. Assuming 256 bytes per page, the VA consists of a page number plus an 8-bit offset.]

Key Idea in Paging Systems

[Figure: an array A[0..3] spans a page boundary in the VA space: A[0..2] occupy <3,253>-<3,255> and A[3] occupies <4,0>. With a page map table mapping page 3 to frame 7 and page 4 to frame 5, A[0..2] land in frame 7 and A[3] in frame 5; for example, VA <3,253> translates to PA <7,253>.]

Addressing Scheme

A virtual address v = (p, d) consists of a page number p and a displacement d. The hardware adds p to the base address a of the PMT, reads the block number b from that PMT entry, and forms the physical address PA = (b * page_size) + d.
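
As a concrete illustration of this scheme, here is a minimal C sketch of the translation step; the table contents, PAGE_SIZE, and the function name translate are illustrative assumptions, not part of the slides.

#include <stdio.h>

#define PAGE_SIZE 4096                 /* assumed 4K page/block size */

/* Illustrative PMT: page 0 -> block 6, page 1 -> block 16, page 2 -> block 63 */
static const int pmt[] = { 6, 16, 63 };

/* Translate virtual address v = (p, d) into a physical address:
 * look up block b in PMT entry p, then PA = b * page_size + d.   */
static long translate(int p, int d)
{
    int b = pmt[p];
    return (long)b * PAGE_SIZE + d;
}

int main(void)
{
    /* VA <1, 1234>  ->  PA = 16 * 4096 + 1234 = 66770 */
    printf("PA = %ld\n", translate(1, 1234));
    return 0;
}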

Paging Mapping Example

PMTAR: holds the PMT address of the currently executing job.

PMT for Job 1: page 0 -> block 3, page 1 -> block 5, page 2 -> block 1
PMT for Job 2: page 0 -> block 2, page 1 -> block 6
PMT for Job 3: page 0 -> block 4, page 1 -> block 8, page 2 -> block 7

MBT: block 0 = OS, block 1 = J1 P2, block 2 = J2 P0, block 3 = J1 P0, block 4 = J3 P0, block 5 = J1 P1, block 6 = J2 P1, block 7 = J3 P2, block 8 = J3 P1

Page Management: Page Map Table (PMT)

Contains the VA page to PA block mapping (e.g. page 0 -> block 7, page 1 -> block 2, page 2 -> block 13).
1 PMT per job, 1 entry per page.

Page Management: Page Map Table Address Register (PMTAR)

Holds the length of the program in pages (the number of PMT entries) and the base address of the current PMT; it points to the PMT of the currently executing job.
1 PMTAR per system.

Page Management: Memory Block Table (MBT)

Maps each block of main memory either to a process id and page number or to "free".
1 MBT per system, 1 entry per block.

Page Management: Process (Job) Control Block (PCB)

Contains information about all jobs in the system; stores the job size and the location of its PMT.
1 PCB per system, 1 entry per job.
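
A rough C sketch of how these four tables might be declared; all names, types, and sizes here are illustrative assumptions rather than definitions from the slides.

#define NUM_BLOCKS 256                /* assumed number of main-memory blocks */
#define MAX_JOBS   16                 /* assumed job limit                    */

/* PMT: 1 per job, 1 entry per page -> which block holds that page.  */
typedef struct { int block; } pmt_entry_t;

/* PMTAR: 1 per system -> length and base of the current job's PMT.  */
typedef struct {
    int          length;              /* program size in pages        */
    pmt_entry_t *base;                /* base address of current PMT  */
} pmtar_t;

/* MBT: 1 per system, 1 entry per block -> owning job/page, or free. */
typedef struct {
    int job;                          /* -1 means the block is free   */
    int page;
} mbt_entry_t;

/* PCB table: 1 per system, 1 entry per job -> size and PMT location. */
typedef struct {
    int          size_in_pages;
    pmt_entry_t *pmt;
} pcb_entry_t;

static mbt_entry_t mbt[NUM_BLOCKS];
static pcb_entry_t pcb[MAX_JOBS];
static pmtar_t     pmtar;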

Page Addressing - Let's Get Real

Assume 1-word PMT entries, byte-addressable main memory, and a page and block size of 4K bytes.
The PMTAR holds <3, 4388>: the program is 3 pages long and its PMT starts at main-memory address 4388.
Main-memory words 4388, 4392, and 4396 hold the PMT entries 6, 16, and 63 (the block numbers for pages 0, 1, and 2).
Example: VA = <1, 1234>. The PMT entry for page 1 (at address 4392) gives block 16, so PA = (4096 * 16) + 1234.

Determining the Virtual Address <Page, Offset> from the Compiled Address

Compiled address (relative to 0): 18087. Page size: 2K (2048 bytes). Memory is byte addressable.
Page = DIV(compiled address, page size) and Offset = MOD(compiled address, page size), giving <8, 1703>.
DIV => shift right 11 bits (2048 = 2^11). MOD => AND the 21 high-order bits with 0 (keep the low 11 bits).
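
The same DIV/MOD trick expressed as a short C sketch, assuming 32-bit unsigned addresses and the 2K page size used above.

#include <stdio.h>

#define PAGE_SHIFT 11                           /* 2048 = 2^11           */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)    /* low 11 bits = offset  */

int main(void)
{
    unsigned int compiled = 18087u;
    unsigned int page   = compiled >> PAGE_SHIFT;  /* DIV: shift right 11 bits    */
    unsigned int offset = compiled & PAGE_MASK;    /* MOD: clear the 21 high bits */
    printf("<%u, %u>\n", page, offset);            /* prints <8, 1703>            */
    return 0;
}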

Review Questions

Assume 2-byte PMT entries, byte-addressable main memory, and a page and block size of 4K bytes.
1) What is the maximum size of any program?
2) What PA corresponds to compiled address 220945?
3) What is the MBT length if main memory is 80M? (Assume MBT entries are 2 bytes long.)
4) What is the PMT length if the compiled size is 300K?

Allocating Pages

WS = word size in bytes; P = page size in bytes; Size = size of the program in bytes; NPPgm = number of pages needed for the program; NPPmt = number of pages needed for the PMT (1 word per entry); NPTot = total number of pages needed; MaxBlocks = main memory size in blocks.

procedure allocation (int Size) {
    NPPgm := ceiling( Size / P );
    NPPmt := ceiling( (NPPgm * WS) / P );
    NPTot := NPPgm + NPPmt;
    If ( NPTot > MaxBlocks ) Then
        ERROR
    Else If ( NPTot blocks are not free in MBT ) Then
        Add job to HOLDQ;
    Else {
        Allocate pages to blocks;
        Update MBT, PCB;
        Create, initialize PMT;
    }
}

Sharing Pages of Reentrant Code or Data Between Processes

Pros/Cons of Paging

Advantages:
- Efficient memory usage
- Simple partition management due to discontiguous loading and fixed partition size
- No compaction necessary
- Easy to share pages

Pros/Cons of Paging

Disadvantages:
- Job size <= memory size
- Internal fragmentation (half the page size, on average)
- Need special hardware for address translation
- Some main memory space is used for PMTs
- Address translation lengthens memory cycle times

Demand Paging

Jobs are paged, but not all pages have to be in memory at the same time: VIRTUAL MEMORY.
The operating system creates the illusion of more memory, so job size can exceed main memory size.
Pages are brought in only when referenced (on demand); often page 0 is loaded initially when a job is scheduled.

Demand Paging Motivation

PMTs: Job 1: page 0 -> block 3, page 1 -> block 5, page 2 -> block 1; Job 2: page 0 -> block 2, page 1 -> block 6; Job 3: page 0 -> block 4 (pages 1 and 2 are not in memory).
MBT: block 0 = OS, block 1 = J1 P2, block 2 = J2 P0, block 3 = J1 P0, block 4 = J3 P0, block 5 = J1 P1, block 6 = J2 P1.

1. What happens if job 3 references page 1?
2. What does the CPU do while J3 P1 is being read?

Terminology

Page fault: an interrupt that arises upon a reference to a page that is not in main memory.
Page swapping: replacement of one page in memory by another page, in response to a page fault.

When a Page Fault Occurs

1. Select a page to be removed.
2. Copy it to disk if it has been changed. **
3. Update the old page's PMT. **
4. Copy the new page to main memory.
5. Update the new page's PMT.
6. Update the MBT. **

Thrashing occurs when a job continuously references pages that are not in main memory.
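
The steps above, sketched in C for a single job; the table layouts and the helpers select_victim, write_to_disk, and read_from_disk are hypothetical stand-ins for machinery the slides do not specify.

#include <stdio.h>

#define NUM_BLOCKS 4
#define MAX_PAGES  8

/* Minimal single-job versions of the tables (illustrative only). */
static struct { int block; int in_memory; } pmt[MAX_PAGES];
static struct { int page; int referenced; int changed; } mbt[NUM_BLOCKS];

static int  select_victim(void)             { return 0; }   /* stub replacement policy */
static void write_to_disk(int pg, int blk)  { printf("write page %d from block %d\n", pg, blk); }
static void read_from_disk(int pg, int blk) { printf("read page %d into block %d\n", pg, blk); }

static void handle_page_fault(int page)
{
    int blk = select_victim();                /* 1. select a page to be removed      */
    if (mbt[blk].changed)                     /* 2. copy it to disk only if changed  */
        write_to_disk(mbt[blk].page, blk);
    pmt[mbt[blk].page].in_memory = 0;         /* 3. update the old page's PMT        */
    read_from_disk(page, blk);                /* 4. copy the new page to main memory */
    pmt[page].block = blk;                    /* 5. update the new page's PMT        */
    pmt[page].in_memory = 1;
    mbt[blk].page = page;                     /* 6. update the MBT                   */
    mbt[blk].referenced = 1;
    mbt[blk].changed = 0;
}

int main(void)
{
    handle_page_fault(3);   /* e.g. a reference to page 3 that is not in memory */
    return 0;
}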

Demand Page Management

Page Map Table (PMT): maps pages to blocks. Each entry holds a status indicator (main/secondary memory) and a pointer to the main-memory block.
File Map Table (FMT): maps a job's pages to secondary memory (a PMT for the disk). 1 FMT per job, 1 entry per page.
Memory Block Table (MBT): maps each block to a job/page number and contains a reference bit and a change bit.

Demand Paging Schematic

[Figure: the PMTAR points to the current job's PMT; PMT entries marked M point to blocks of main memory, while entries marked S point through the FMT to pages on disk. The MBT records, for each main-memory block (e.g. J1 P1, J1 P2), its owner plus R (referenced) and C (changed) bits.]

Demand Paging Data Structures

PMTs (Page, Block, In_Mem): Job 1: 0 -> 3, 1 -> 5, 2 -> 1 (all in memory); Job 2: 0 -> 2, 1 -> 6 (in memory); Job 3: 0 -> 4 (in memory), pages 1 and 2 not in memory.
FMTs: one disk address for every page of every job.
MBT (with Referenced and Changed fields): block 0 = OS, block 1 = J1 P2, block 2 = J2 P0, block 3 = J1 P0, block 4 = J3 P0, block 5 = J1 P1, block 6 = J2 P1.

1. What happens if job 3 references page 1?

Summary of Data Structures

1) Page Map Table (PMT): maps page to block. Fields per entry: page number, block number, and In_Memory (new).
2) Memory Block Table (MBT): maps each block either to a process id and page number or to "free". New fields: reference count and change bit.
3) File Map Table (FMT): maps a job's pages to secondary memory (like a PMT for the disk); new. 1 FMT per job, 1 entry per page.

Page Replacement

Now we consider how to select which page to replace upon a page fault.

Local versus global page replacement:
- Local: each process must remove a page from its own set of allocated blocks.
- Global: a replacement page may be selected from the set of all blocks.

A Program's Execution Profile

Question: Does a program need all its pages in main memory at all times?

The Principle of Locality

At any time, the locality of a process is the set of pages that are actively being used together.
Spatial locality: there is a high probability that once a location is referenced, the one after it will be accessed in the near future (sequential code, array processing, code within a loop).
Temporal locality: a referenced location is likely to be accessed again in the near future (loop indices, single data elements).

More on Locality Does a linked list help or hurt locality? Does a recursive function display spatial or temporal locality?

Working Set Theory (Formalizes "Locality")

A process's working set is the set of pages referenced during the interval (t, t + Δ), for some small Δ.
The working set size is an estimate of the degree of locality.
A job should not be scheduled unless there is room for its entire working set. Why?
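
A small C sketch of computing the working set size from a reference trace over a window of delta references; the window-based formulation and all names here are illustrative assumptions.

#include <stdio.h>

#define MAX_PAGE 16

/* Count the distinct pages referenced during (t, t + delta):
 * the working set size, per the definition above.            */
static int working_set_size(const int *trace, int len, int t, int delta)
{
    int seen[MAX_PAGE] = { 0 };
    int size = 0;
    for (int i = t; i < t + delta && i < len; i++) {
        if (!seen[trace[i]]) {        /* first reference to this page in the window */
            seen[trace[i]] = 1;
            size++;
        }
    }
    return size;
}

int main(void)
{
    /* Page trace used in the replacement exercises below. */
    int trace[] = { 0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4 };
    int len = sizeof trace / sizeof trace[0];
    printf("working set size at t=4, delta=4: %d\n",
           working_set_size(trace, len, 4, 4));   /* pages {0, 1, 4} -> 3 */
    return 0;
}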

Idea Behind Working Set

Motivation: Page Replacement Algorithms

Which page replacement rule should we use to give the minimum page fault rate?
Page fault rate = # faults / # refs

Page Replacement Algorithm: Optimal Replacement (OPT)

Replace the page that will not be used for the longest period of time.
Gives the lowest page fault rate of all algorithms, but requires knowledge of the future.
Example: main memory has 3 blocks containing pages 3, 5, 2. Current and future references: 4, 3, 3, 4, 2, 3, 4, 5, 1, 3, 4. The reference to 4 faults, and OPT replaces page 5 (the resident page whose next use lies furthest in the future).
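
A short C sketch of how OPT picks its victim for the example above: among the resident pages, choose the one whose next use lies furthest in the future. The function name and array layout are illustrative assumptions.

#include <stdio.h>

/* Among the resident pages, return the index of the one that will not be
 * used for the longest time, given the future reference string.          */
static int opt_victim(const int *resident, int nblocks,
                      const int *future, int nrefs)
{
    int victim = 0, farthest = -1;
    for (int b = 0; b < nblocks; b++) {
        int next = nrefs;                          /* "never used again" by default */
        for (int i = 0; i < nrefs; i++)
            if (future[i] == resident[b]) { next = i; break; }
        if (next > farthest) { farthest = next; victim = b; }
    }
    return victim;
}

int main(void)
{
    int resident[] = { 3, 5, 2 };                       /* the 3 blocks from the example     */
    int future[]   = { 3, 3, 4, 2, 3, 4, 5, 1, 3, 4 };  /* references after the fault on 4   */
    int v = opt_victim(resident, 3, future, 10);
    printf("OPT replaces page %d\n", resident[v]);      /* prints 5 */
    return 0;
}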

Optimal Replacement Algorithm

Exercise: with 3 blocks, apply OPT to the page trace 0 1 2 3 0 1 4 0 1 2 3 4.
# page faults = ?    Page fault rate = ?

Replacement Algorithm: FIFO

Replace the "oldest" page (the one that has been in memory longest). A frequently used page may be swapped out.
Belady's Anomaly: for some page replacement algorithms, the page fault rate may increase as the number of blocks increases.

FIFO Page Replacement

Exercise: apply FIFO to the page trace 0 1 2 3 0 1 4 0 1 2 3 4, first with 3 blocks and then with 4 blocks.
# page faults (3 blocks) = ?    # page faults (4 blocks) = ?
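
A C sketch of a FIFO simulator that can be used to check the exercise above; running it for 3 and then 4 blocks on this trace demonstrates Belady's Anomaly. The function name fifo_faults and the frame limit are illustrative assumptions.

#include <stdio.h>

/* Simulate FIFO replacement and return the number of page faults. */
static int fifo_faults(const int *trace, int nrefs, int nblocks)
{
    int frames[16];                      /* resident pages; assumes nblocks <= 16 */
    int next = 0, used = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int b = 0; b < used; b++)
            if (frames[b] == trace[i]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (used < nblocks) {
            frames[used++] = trace[i];   /* a free block is still available  */
        } else {
            frames[next] = trace[i];     /* replace the "oldest" page        */
            next = (next + 1) % nblocks;
        }
    }
    return faults;
}

int main(void)
{
    int trace[] = { 0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4 };
    int n = sizeof trace / sizeof trace[0];
    printf("FIFO, 3 blocks: %d faults\n", fifo_faults(trace, n, 3));
    printf("FIFO, 4 blocks: %d faults\n", fifo_faults(trace, n, 4));
    return 0;
}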

Replacement Algorithms: Least Recently Used (LRU)

Uses the recent past as an approximation of the near future.
A stack algorithm: does NOT suffer from Belady's Anomaly.
Hardware / overhead intensive.

Least Recently Used (LRU)

Exercise: apply LRU to the page trace 0 1 2 3 0 1 4 0 1 2 3 4, first with 3 blocks and then with 4 blocks.
# page faults (3 blocks) = ?    # page faults (4 blocks) = ?

Replacement Algorithms: LRU Approximation

Uses reference bits in the MBT and a static reference pointer (RP); RP is not reinitialized between calls.
- Set the referenced bit to 1 when loading a page.
- Set the referenced bit to 1 on a read or write.
- While scanning for a replacement page, set a referenced bit that is currently 1 back to 0 and move on.
- Replace the page whose referenced bit is 0.

LRU Approximation Algorithm

Initially: RP := -1

begin
    RP := (RP + 1) mod MBTSize;
    while (MBT[RP].Referenced = 1) do
    begin
        MBT[RP].Referenced := 0;
        RP := (RP + 1) mod MBTSize;
    end;
    return(RP);
end

Note: the referenced bit is set to 1 when a page is (a) referenced and (b) first loaded into memory. RP always points to the last page replaced.
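
The same algorithm as a C sketch operating directly on the MBT's referenced bits; the MBT layout and names here are minimal assumptions for illustration.

#include <stdio.h>

#define MBT_SIZE 8

static struct { int referenced; } mbt[MBT_SIZE];   /* minimal MBT: just the R bit */
static int rp = -1;                                /* static reference pointer    */

/* Advance RP, clearing referenced bits that are 1, until a block with
 * referenced = 0 is found; that block holds the page to replace.       */
static int lru_approx_victim(void)
{
    rp = (rp + 1) % MBT_SIZE;
    while (mbt[rp].referenced == 1) {
        mbt[rp].referenced = 0;
        rp = (rp + 1) % MBT_SIZE;
    }
    return rp;          /* RP now points to the block just replaced */
}

int main(void)
{
    mbt[0].referenced = mbt[1].referenced = 1;          /* blocks 0 and 1 recently used */
    printf("replace block %d\n", lru_approx_victim());  /* prints 2 */
    return 0;
}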

LRU Approximation

Exercise: with 3 blocks, apply the LRU approximation algorithm to the page trace 0 1 2 3 0 1 4 0 1 2 3 4.
# page faults = ?    Page fault rate = ?

Replacement Algorithms: Least Frequently Used (LFU)

Keep a reference count and select the page with the lowest count.
The reference count is the number of times a page has been referenced during its current stay in memory, not over the lifetime of the program.

Exercise: with 3 blocks, apply LFU to the page trace 0 1 2 3 0 1 4 0 1 2 3 4.
# page faults = ?
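
A short C sketch of LFU victim selection using per-block reference counts in the MBT; the layout and sample counts are illustrative assumptions.

#include <stdio.h>

#define NUM_BLOCKS 3

static struct { int page; int ref_count; } mbt[NUM_BLOCKS];

/* Select the block whose page has the lowest reference count
 * (counted only over its current stay in memory).             */
static int lfu_victim(void)
{
    int victim = 0;
    for (int b = 1; b < NUM_BLOCKS; b++)
        if (mbt[b].ref_count < mbt[victim].ref_count)
            victim = b;
    return victim;
}

int main(void)
{
    mbt[0].page = 0; mbt[0].ref_count = 3;
    mbt[1].page = 1; mbt[1].ref_count = 1;
    mbt[2].page = 4; mbt[2].ref_count = 2;
    printf("replace the page in block %d\n", lfu_victim());   /* prints 1 */
    return 0;
}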

Pros/Cons of Demand Paging

Advantages:
- Can run programs larger than physical memory
- Allows a higher multiprogramming level than pure paging
- Efficient memory usage; no compaction is required
- Portions of a process that are never called are never loaded
- Simple partition management due to discontiguous loading and fixed partition size
- Easy to share pages

Pros/Cons of Demand Paging

Disadvantages:
- Internal fragmentation
- Program turnaround time increases each time a page is replaced and then reloaded
- Need special address translation hardware