Outline. V22.0202 Computer Systems Organization II (Honors) (Introductory Operating Systems): (Review) Memory Management


Outline
V22.0202-001 Computer Systems Organization II (Honors) (Introductory Operating Systems)
Lecture 14: Memory Management, March 28, 2005

Announcements
- Lab 4 due next Monday (April 4th); demos on the 4th and 5th

Memory Management (cont'd)
- logical versus physical address space
- swapping
- allocation schemes: partitioning, paging, segmentation
[Silberschatz/Galvin/Gagne: Chapter 8]

(Review) Memory Management
Mechanisms for memory management:
- binding: mapping program names into actual memory locations
- mapping: utilizing the limited physical memory to bring logical memory objects (belonging to multiple processes) back and forth
Scope of these lectures:
- Lecture 14: allocation of physical memory to processes; assume that the entire process fits in physical memory
- Lectures 14 and 15: supporting virtual memory in allocated physical memory; process data and instructions need not all fit into physical memory
Last lecture
- Typically two levels of binding: source program names to logical addresses (at compile time); logical addresses to physical addresses (at load and run time)
- Reducing process memory requirements: overlays, dynamically-linked libraries

Multiprogramming and Swapping
Problem: the memory requirements of all the processes cannot be simultaneously met.
Solution: Swapping
- dynamically move a process out of memory into a backing store (and back in), as dictated by the medium-term scheduler
- the backing store is typically a fast disk
- the choice of which processes to swap out/in can be influenced by the short-term scheduling policy (e.g., priority-driven)
- knowledge of a process's actual memory requirements requires the process to reserve, commit, decommit, and release memory
[figure: main memory holding the operating system and a process; P2 is swapped out to, and swapped back in from, the backing store]

Swapping: Issues
High context-switch times
- assume a user process of size 100 KB and a backing store that is a standard hard disk with a transfer rate of 5 MB/s
- the actual transfer of 100 KB from and to memory takes 2 x (100 KB / 5000 KB/s) = 2 x (1/50 second) = 2 x (20 ms) = 40 ms, plus disk seek and latency time
- it helps to know exactly how much memory is actually being used; this also determines swapping frequency
Swapping out a process that is currently in the middle of I/O
- I/O completion might store values in memory now occupied by a new process
- common solutions: never swap out a process while it is in a wait state induced by I/O, or require that all I/O interactions go through a special set of buffers that are controlled by the OS and are part of its space, and hence never swapped out

Memory Mapping Schemes
Goal: allocate physical memory to processes, and translate process logical addresses into physical memory addresses.
Objectives
- memory protection: users from other users, the system from users
- efficient use of memory
- programmer convenience: large virtual memory space
Three schemes: partitioning, paging, segmentation (if time permits)

Memory Mapping (1): Partitioning
Idea: divide memory into partitions
Protection
- each partition is protected with a key
- at run time, the process key (stored in a register) is matched against the partition key; a mismatch generates a trap
Allocation
- fixed partitions: memory is divided into a number of fixed-size partitions, each allotted to a single process; used in the early IBM 360 models, no longer in use
- variable partitions: contiguous memory is allocated on loading and released on termination; this is what you are using in Nachos Lab 4

Memory Mapping: Partitioning (cont'd)
Partitioning for statically-bound programs
- programs must execute in the same place
- allocation is inefficient, and swapping is very constrained: no provision for changing memory requirements
Partitioning for dynamically-bound programs: relocation registers
- a CPU register keeps track of the starting address where the process is loaded
- whenever a memory location is accessed, the system computes physical address = logical address + relocation register, and fetches the value from the resulting memory location
- the stream of physical addresses is seen only by the MMU
- how do we prevent a process from accessing addresses outside its partition?
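The relocation-register scheme above (together with the limit check introduced on the next slide) can be sketched in a few lines. This is an illustration, not lecture code; the register values are made up.

```python
# Sketch: dynamic relocation with a limit register and a relocation register.
# The register values below are hypothetical examples.
LIMIT = 300 * 1024        # partition size: 300 KB
RELOCATION = 1_400_000    # partition start in physical memory

def translate(logical_addr):
    """Return the physical address, or raise to model the hardware trap."""
    if logical_addr >= LIMIT:         # hardware compare against the limit register
        raise MemoryError("trap: address outside partition")
    return logical_addr + RELOCATION  # hardware add of the relocation register

print(translate(100))  # → 1400100
```

Any logical address at or beyond LIMIT traps instead of translating, which is exactly how a process is kept inside its partition.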

Memory Mapping: Partitioning (cont'd)
Protection and relocation for dynamically-bound programs
- two registers keep info for each partition: limit and relocation
- [figure: the CPU issues a logical address; the hardware checks logical address < limit; if yes, it adds the relocation register and accesses physical memory; if no, it traps]
Other advantages
- the relocation register can be changed on the fly; why is this useful?

Memory Allocation and Scheduling
[figure: four processes, including P2 (224K) and a 288K process, loaded into memory; successive snapshots show voids of 896K, 576K, 352K, 96K, and 64K as P2 is swapped out, another process ends, and P2 is swapped back in]

Partitioning Policies
Memory is viewed as a sequence of blocks and voids (holes)
- blocks are in use
- voids are available; neighboring voids are coalesced to satisfy a request
Question: given a request for process memory and the list of current voids, how do we satisfy the request?
- First fit: allocate space from the first void in the list that is big enough; fast and good in terms of storage utilization
- Best fit: allocate space from the void that leaves the minimum remaining space; very good storage utilization
- Worst fit: allocate a void such that the remaining space is a maximum; requires peculiar memory loads to perform well in terms of storage utilization

Partitioning Policies (cont'd)
Criterion for evaluating a policy: fragmentation
External fragmentation
- void space between blocks that does not serve any useful purpose
- statistical analysis of first fit: roughly 0.5N blocks will be lost to fragmentation (for N allocated blocks)
- can be avoided by compaction: swap out a partition and swap it back into another part of memory (requires relocation)
Internal fragmentation
- it is not worth maintaining memory that leaves very small voids (e.g., a few bytes) between used regions
- occurs more obviously when the unit of allocation is large (e.g., disks)
- happens when the memory request is smaller than the smallest partition size
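The three placement policies above differ only in which qualifying void they pick. A minimal sketch, over an illustrative (start, size) void list of my own choosing:

```python
# Sketch: first fit, best fit, and worst fit over a list of voids.
# Each void is (start, size); the sizes below are made-up examples.
voids = [(0, 100), (200, 500), (800, 200), (1100, 300)]

def first_fit(request):
    # First void in list order that is big enough.
    return next((v for v in voids if v[1] >= request), None)

def best_fit(request):
    # Void that leaves the minimum remaining space.
    fits = [v for v in voids if v[1] >= request]
    return min(fits, key=lambda v: v[1], default=None)

def worst_fit(request):
    # Void that leaves the maximum remaining space.
    fits = [v for v in voids if v[1] >= request]
    return max(fits, key=lambda v: v[1], default=None)

print(first_fit(150))  # → (200, 500)
print(best_fit(150))   # → (800, 200)
print(worst_fit(150))  # → (200, 500)
```

For a 150-unit request, first fit and worst fit both pick the 500-unit void, while best fit picks the 200-unit void, leaving only 50 units of external fragmentation behind.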

Memory Compaction: Reducing Fragmentation
Moving partitions around can group the voids together, increasing the likelihood of their being used to satisfy a future request. There are many ways of doing this:
[figure: three alternative compactions of the same layout of 200K, 400K, 900K, 300K regions and P2; they move 600K, 400K, and 200K of data respectively]

Memory Mapping (2): Paging
Motivation: partitioning suffers from large external fragmentation
Paging
- view physical memory as composed of several fixed-size frames; a frame is the physical memory allocation unit
- view logical memory as consisting of blocks of the same size: pages
- the allocation problem is to put pages into frames; a page table maintains the mapping
- allocation need not preserve the contiguity of logical memory: e.g., pages 1, 2, 3, 4 can be allocated to frames 3, 7, 9, 14; how does this avoid external fragmentation?
- paging played a major role in virtual memory design: separation between the meaning of a location in the user's virtual space and its actual physical storage

Paging (example)
Mapping of pages to frames: the mapping is hidden from the user and is controlled via the OS.
[figure: logical memory pages 0-3 map through the page table to physical frames 1, 4, 7, and 3 respectively]

Paging (cont'd)
Allocation of frames to processes (Nachos Lab 4)
- the OS maintains a map of the available and allotted frames via a structure called a frame table: whether a frame is allocated, and if allocated, to which page of which process
Address translation
- performed on every memory access, so it must be performed extremely efficiently so as to not degrade performance
- typical scheme: frames (and pages) are of size 2^k; for each logical address of a = m + n bits, the higher-order m bits indicate the page number p_i and the remaining n bits indicate the offset w_i into the page
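The typical scheme above (split a logical address into page number and offset, then look up the frame) can be sketched directly, using the page table from the example slide (pages 0-3 mapped to frames 1, 4, 7, 3) and an assumed 4 KB page size:

```python
# Sketch: paged address translation. Page table taken from the example
# above (pages 0..3 -> frames 1, 4, 7, 3); the 4 KB page size is an
# assumption for illustration (n = 12 offset bits).
PAGE_BITS = 12
PAGE_SIZE = 1 << PAGE_BITS
page_table = {0: 1, 1: 4, 2: 7, 3: 3}

def translate(logical):
    p = logical >> PAGE_BITS        # higher-order m bits: page number p_i
    w = logical & (PAGE_SIZE - 1)   # remaining n bits: offset w_i
    return (page_table[p] << PAGE_BITS) | w

# Page 2, offset 5 lands at offset 5 of frame 7:
assert translate(2 * PAGE_SIZE + 5) == 7 * PAGE_SIZE + 5
```

Because pages are a power of two in size, the split is a shift and a mask rather than a division, which is what makes per-access translation cheap enough for hardware.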

Page Table Lookup
Mapping between pages and frames is maintained by a page table
- the page number p_i is used to index into the p_i-th entry of the (per-process) page table, where the corresponding frame number f_i is stored
- the physical address is formed from the frame number f_i and the offset w_i
- sometimes we also specify the maximum length of the page table
- all of this requires hardware support, since it is performed on every memory access

Page Table Structure
Page table typically stored in memory
- a single page-table base register points to the beginning of the page table; p_i is now the offset into this table
- problem: this requires two accesses to memory for each value; even with caches, it can become very slow
Solution: Translation Lookaside Buffer (TLB)
- a portion of the page table is cached in the TLB
- little performance degradation if a value hits in the TLB; if not, a memory access is needed to load the value into the TLB, and an existing value must be flushed if the TLB is full
- e.g., average memory access time for a system with a 90% TLB hit rate = 0.9 x (Access_TLB + Access_mem) + 0.1 x (Access_mem + Access_mem) ≈ 1.1 x Access_mem

Multi-level Page Tables
Rationale: modern systems support a very large logical address space, so the page tables themselves become very large
- e.g., for a system with 32-bit logical addresses and 4K pages, we need 2^20 page table entries; at 4 bytes per PTE, this implies 4 MB of space
Solution: page the page table itself
- cost: additional memory accesses (but caching helps)
- the logical address is split into p1#, p2#, and displacement: p1# indexes the primary table, whose entries give the size and base address of a secondary table; p2# then indexes the secondary table

Advantages of Multi-level Page Tables
Consider a system with 32-bit logical addresses, 4 KB pages, and 4-byte PTEs, and an application process with 2 MB of text + data and 2 MB of stack at opposite ends of a 128 MB virtual address space (124 MB unused in between).
- Number of PTEs using a single-level page table: 128 MB / 4 KB = 2^27 / 2^12 = 2^15 pages; therefore 2^15 PTEs, occupying 2^17 bytes of space (128 KB)
- Number of PTEs using a two-level page table: how do we split up the address bits? One option: minimize space for the first-level table. One page (4 KB) can store 4 KB / 4 bytes = 2^10 PTEs; therefore, view the logical address as 10 bits (p1), 10 bits (p2), 12 bits (offset)
- The text + data region needs 512 PTEs and the stack needs 512 PTEs; number of useful PTEs: 2 + 512 + 512, occupying 3 pages (12 KB)
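The arithmetic in the two slides above is easy to check mechanically. A sketch, reproducing the single-level vs. two-level space figures and the TLB effective-access-time estimate:

```python
# Sketch: reproducing the slide arithmetic. 32-bit addresses, 4 KB pages,
# 4-byte PTEs; a process with 2 MB text+data and 2 MB stack at opposite
# ends of a 128 MB virtual address space.
PAGE = 4 * 1024
PTE = 4

# Single-level table: one PTE per page of the whole 128 MB space.
single_level = (128 * 2**20 // PAGE) * PTE
assert single_level == 128 * 1024      # 128 KB of page table

# Two-level table with a 10/10/12 split: 1 top-level page, plus one
# second-level page each for the text+data and stack regions.
two_level = 3 * PAGE
assert two_level == 12 * 1024          # 12 KB

# Effective access time at a 90% TLB hit rate, in units of one memory
# access (TLB lookup time treated as negligible, as on the slide):
eat = 0.9 * 1 + 0.1 * 2
assert abs(eat - 1.1) < 1e-9
```

The two-level layout wins precisely because only the used 4 MB regions need second-level pages; the 124 MB hole costs nothing beyond its invalid top-level entries.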

Page Tables and Sharing
Page tables permit different virtual addresses (in different processes) to map to the same physical frame
- convenient sharing of common code (e.g., dynamically-linked system libraries)
- shared data segments for IPC
[figure: the page tables of processes P1 and P2 map their code 0, code 1, and code 2 pages to the same physical frames, while each process's data page maps to its own private frame]

Inverted Page Tables
Observation
- usually, only a portion of all the pages in the system can be stored in physical memory
- so while the required page table for all of logical memory might be massive, only a small subset of it contains useful mappings
- we can take advantage of this fact in both TLB and page table design
[figure: the page number p_i is looked up associatively in an inverted TLB; a hit in the f_i-th position yields the frame number, which is combined with the offset w_i to form the physical address; a miss falls back to the inverted page table]

Inverted Page Tables (cont'd)
Efficiency considerations
- the inverted page table is organized based on physical addresses (via frame numbers), so searching it for the frame holding a given logical page can be very slow
- use a hash table with the process ID and logical page number as keys
- recently located entries of the inverted page table can be stored in a TLB-like structure based on associative registers
Main disadvantage of inverted page tables: sharing
- each process that shares an object will have its own (disjoint) space where the shared object is mapped
- this is not possible to maintain with standard inverted page tables, since each frame has space for only one <process ID, page number> tuple

Protection Issues with Paging
The partition protection scheme checks that an address lies between base and base + limit. This cannot be used on page-based systems: why?
- physical memory can only be accessed through page-table mappings: all addresses are interpreted by the MMU, and OS intervention is required to manipulate page tables and TLBs
- special bits in the page table entry enforce per-frame protection: accessibility bits record whether a page is invalid, readable, writable, or executable; a valid/invalid bit indicates whether a page is in the user's (logical) space
- sometimes the hardware may support a page-table length register, which specifies the size of the process page table; trailing invalid pages can then be eliminated, which is useful when processes use only a small fraction of the available address space
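The hashed lookup described above can be sketched with an ordinary dictionary standing in for the hash table; the entries and page size below are illustrative, not from the lecture:

```python
# Sketch: an inverted page table as a hash from (process ID, page number)
# to frame number. Entries and the 4 KB page size are made-up examples.
inverted = {(7, 0): 3, (7, 1): 9, (12, 0): 5}   # (pid, page) -> frame

def translate(pid, page, offset, page_size=4096):
    frame = inverted.get((pid, page))
    if frame is None:
        raise MemoryError("not in memory: trap to the OS")
    return frame * page_size + offset

assert translate(7, 1, 100) == 9 * 4096 + 100
```

Note how sharing breaks: frame 9 can carry only the single key (7, 1), so a second process mapping the same frame has no slot for its own <process ID, page number> tuple.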

Memory Mapping (3): Segmentation
A segment is a logical piece of the program
- e.g., the code for the program functions, its data structures, symbol tables
Segmentation views logical memory as broken into such segments
- segments are of variable size (unlike pages)
Accessing a segment
- the logical address is regarded as two-dimensional: a segment pointer to an entry in the segment table, and a displacement into the segment itself
Allocating a segment
- a segment is a partition with a single base-limit pair; the limit attribute stores the segment length and prevents programs from accessing locations outside the segment space
- differs from partitioning in that there can be multiple segments per process

Memory Mapping: Segment Table Lookup
Mapping logical addresses to physical addresses
- the mapping is maintained by the segment table
- the segment number s# is used to index into the (per-process) segment table, where the corresponding segment size and base address are stored; the displacement is added to the base address to form the linear address

Memory Mapping: Segmentation Hardware
Segment registers
- some designs (e.g., Intel x86) provide registers to identify segments; loading a segment register loads a (hidden) segment specification register from the segment table, and construction of the logical address is done explicitly
TLBs
- some designs, such as the MIPS R2000, only provide a TLB; the OS is responsible for loading it and doing the appropriate translation
Traditional approach: store the segment table in memory
- a segment table base register (STBR) and a segment table length register (STLR), saved and restored on each context switch
- translation of address (s, d): check that s is valid (s < STLR); look up the base address and limit in the segment table entry at address STBR + s; check that the offset d is valid (d < limit); compute the physical address

Segmentation: Pros and Cons
Pros
- protection, in terms of ensuring that illegal address accesses are avoided, comes for free; the segment length check plays an important role here
- sharing segments across programs is straightforward, by loading identical segment table base register values
- caveat: how do instructions refer to addresses within segments? Relative addressing works well with sharing; absolute addressing does not, since it requires the same segment number in every sharing process
Cons
- external fragmentation is potentially a big problem; contrast this with paging, where only internal fragmentation is possible
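The STBR/STLR translation steps above amount to two validity checks and an add. A sketch, with a hypothetical two-entry segment table:

```python
# Sketch: segment-table translation of a (segment, displacement) address.
# The (limit, base) entries below are hypothetical examples.
segment_table = [(1000, 50_000), (400, 80_000)]  # entry s: (limit, base)
STLR = len(segment_table)                        # segment table length

def translate(s, d):
    if s >= STLR:                     # check that s is valid: s < STLR
        raise MemoryError("trap: invalid segment number")
    limit, base = segment_table[s]    # entry at address STBR + s
    if d >= limit:                    # check that the offset is valid
        raise MemoryError("trap: offset beyond segment limit")
    return base + d                   # compute the physical address

assert translate(1, 399) == 80_399
```

The d < limit check is where segmentation's "free" protection comes from: an out-of-segment reference traps before any physical address is ever formed.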

Memory Mapping: Segmentation and Paging
Overlay a segmentation scheme on a paging environment
- several examples: originally proposed for the GE 645 / Multics; Intel x86 uses segment registers to generate 32-bit logical addresses, which are translated to physical addresses by an optional multi-level paging scheme
- alleviates the problem of external fragmentation

Memory Mapping: Examples
Multics (c. 1965)
- 34-bit logical address: 18-bit segment number, 16-bit offset
- [8-bit major segment, 10-bit minor segment], [6-bit page, 10-bit offset]
- both the segment table and the segment itself are paged!
- segmentation structure: the segment table is paged; the major segment number indexes the page table for the segment table, and the minor segment number is the offset within that page of the segment table; this gives the page table of the desired segment and the segment length
- paging structure: one-level page table, 1 KB pages
- TLB: 16 entries; key = 24 bits (segment # and page #); value = frame #

Memory Mapping: Examples (cont'd)
Pentium segmentation and paging
- 8K local and 8K global segments
- logical address: a 16-bit selector (13-bit index plus a local/global bit) and a 32-bit offset; the descriptor supplies a 24-bit base and a 20-bit limit, and base + offset yields the linear address

Pentium Memory Mapping (cont'd)
A very flexible addressing scheme
- pure paging: all segment registers are set up with the same selector, whose descriptor has base = 0 and limit = MAXVAL; the offset becomes the address
- pure segmentation: how can this be done?
- options in between
- linear address (4 KB pages): 10-bit directory, 10-bit page, 12-bit offset; the directory table points to a page table, which in turn points to the frame; a 4 MB page option exists as well
- see Section 8.7.3 for a description of how Linux uses the Pentium memory addressing support
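The Multics bit layout above (8 + 10 + 6 + 10 = 34 bits) can be sketched as a decomposition of the logical address; the test value is an arbitrary example, not a historical address:

```python
# Sketch: splitting a 34-bit Multics logical address into its fields,
# per the layout above: 8-bit major segment, 10-bit minor segment,
# 6-bit page, 10-bit offset.
def split_multics(addr):
    offset = addr & 0x3FF           # low 10 bits: offset within the page
    page = (addr >> 10) & 0x3F      # next 6 bits: page within the segment
    minor = (addr >> 16) & 0x3FF    # next 10 bits: minor segment number
    major = (addr >> 26) & 0xFF     # top 8 bits: major segment number
    return major, minor, page, offset

addr = (3 << 26) | (5 << 16) | (2 << 10) | 17
assert split_multics(addr) == (3, 5, 2, 17)
```

The 24-bit TLB key on the slide is exactly the concatenation of the top three fields (8 + 10 + 6 bits), i.e., everything except the within-page offset.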

Memory Mapping: Summary
Partitioning: the process is allocated a single contiguous region of memory
- translation and protection using size and limit registers
- suffers from external fragmentation
Paging: process pages are mapped into memory frames
- translation using a per-process page table (TLBs cache translations)
- sharing is possible by having multiple pages point to the same frame
- protection: because page-table mappings are controlled by the OS, extra bits ensure the page is being accessed in a valid fashion (e.g., read-only)
- internal fragmentation is possible, but no external fragmentation
Segmentation: the process is allocated multiple regions, one per segment
- translation and protection using size and limit registers
- sharing enabled by associating segment descriptors with the same information
- suffers from external fragmentation, but this has a smaller impact