CMSC 412, Spring 2007: Memory Management: Paging and Segmentation

Announcements
CMSC 412, Spring 2007
Reading: Chapter 8 (except 8.1.2-8.1.5); Chapter 9 (next time)
Topic: Memory Management: Paging and Segmentation

Logical vs. Physical Addresses; Basic Relocation Hardware
- The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
- Logical address: generated by the CPU; also referred to as a virtual address.
- Physical address: the address seen by the memory unit.
- Simple relocation hardware adds a base (relocation) register to every logical address after checking it against a limit register; see the sketch after this slide set.

Contiguous Allocation
- Main memory is usually divided into two partitions:
  - The resident operating system, usually held in low memory with the interrupt vector.
  - User processes, held in high memory.
- Single-partition allocation: subdivide user space into fixed-size chunks of memory; each process gets a full chunk.
  - What if the process doesn't need that much? What if it needs more?

Contiguous Allocation (cont.)
- Multiple-partition allocation
  - Hole: a block of available memory; holes of various sizes are scattered throughout memory.
  - When a process arrives, it is allocated memory from a hole large enough to accommodate it.
  - The operating system maintains information about (a) allocated partitions and (b) free partitions (holes).
[Figure: successive memory snapshots showing the OS plus processes 5, 8, and 2; a process departs and processes 9 and 10 are placed into the freed holes.]
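The relocation check above can be pictured as a tiny piece of C. This is a minimal sketch of base/limit translation for a single contiguously allocated partition; the register variables and the translate helper are illustrative, not an actual MMU interface.

```c
/* Minimal sketch of base/limit relocation, assuming one contiguously
 * allocated partition per process.  reloc_reg, limit_reg and translate
 * are hypothetical names, not real hardware or OS interfaces. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t reloc_reg = 0x14000;  /* base: where the partition starts */
static uint32_t limit_reg = 0x08000;  /* length of the partition in bytes */

/* What the relocation hardware does on every memory reference. */
static uint32_t translate(uint32_t logical)
{
    if (logical >= limit_reg) {
        fprintf(stderr, "trap: addressing error (0x%x >= 0x%x)\n",
                logical, limit_reg);
        exit(1);
    }
    return logical + reloc_reg;       /* physical = base + logical */
}

int main(void)
{
    printf("logical 0x0346 -> physical 0x%x\n", translate(0x0346));
    return 0;
}
```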

Dynamic Storage Allocation
- How to satisfy a request of size n from a list of free holes?
- Goal: efficient utilization of storage. If we have n processes that each require m bytes of memory, where n*m < M and M is all available memory, we should be able to fit all the processes in memory at once.
- Goal: fast allocation times, i.e., the time to find a hole of the necessary size.

Dynamic Storage Allocation (cont.)
- First-fit: allocate the first hole that is big enough.
- Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
- Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
- First-fit and best-fit are better than worst-fit in terms of speed and storage utilization (see the code sketch after this slide set).

Fragmentation
- External fragmentation: total memory space exists to satisfy a request, but it is not contiguous.
- Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.

Reducing Fragmentation
- Compaction: shuffle memory contents to place all free memory together in one large block.
  - Compaction is possible only if using relocatable logical addresses. Why?
  - I/O problem: pin a job in memory while it is doing I/O, or do I/O only into (non-relocatable) OS buffers.

Non-Contiguous Allocation
- Can eliminate external fragmentation and improve storage utilization by allowing process memory to be noncontiguous.
- Break the process memory into different chunks and allocate the chunks in different parts of physical memory.

Segmentation
- Programs tend to be divided into different parts: code, data, heap, stack.
- These can reside in separate segments that may be non-contiguous in physical memory.
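The placement policies above differ only in which hole they pick. Below is a small C sketch of first-fit and best-fit over a singly linked free-hole list; the struct hole layout and function names are assumptions for illustration.

```c
/* Sketch of first-fit vs. best-fit over a free-hole list.
 * The hole list and helper names are invented for illustration. */
#include <stddef.h>

struct hole { size_t start, size; struct hole *next; };

/* First-fit: take the first hole that is large enough. */
struct hole *first_fit(struct hole *free_list, size_t n)
{
    for (struct hole *h = free_list; h; h = h->next)
        if (h->size >= n)
            return h;
    return NULL;               /* no hole big enough */
}

/* Best-fit: scan the whole list and take the smallest adequate hole,
 * which leaves the smallest leftover fragment. */
struct hole *best_fit(struct hole *free_list, size_t n)
{
    struct hole *best = NULL;
    for (struct hole *h = free_list; h; h = h->next)
        if (h->size >= n && (!best || h->size < best->size))
            best = h;
    return best;
}
```

Worst-fit would be the mirror image of best_fit (keep the largest adequate hole); splitting the chosen hole and coalescing freed neighbors are left out of the sketch.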

Logical View of Segmentation
[Figure: segments 1-4 of the user's logical space placed non-contiguously in physical memory.]

Segmentation Architecture
- A logical address consists of a two-tuple: <segment-number, offset>.
- The segment table maps the segment number to:
  - base: the starting physical address where the segment resides in memory.
  - limit: the length of the segment.
- Translation through the segment table is sketched in code after this slide set.

Segmentation Hardware / Example of Segmentation
[Figure slides: segment-table lookup hardware and a worked segmentation example.]

Paging
- Divide physical memory into fixed-sized chunks called frames, and logical memory into same-sized pages.
  - Typical frame sizes range from 512 bytes to 64 Kbytes.
- When a process is to be executed, load the pages that are needed into memory.
- Keep a map from pages to frames, called the page table.

Address Translation Scheme
- The address generated by the CPU is divided into:
  - Page number (p): used as an index into the page table, which contains the base address of each page in physical memory.
  - Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit.
- Consider 32-bit addresses on the Pentium with 4096-byte pages: 12 bits for the offset, 20 bits for the page number.
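A segment-table lookup is just a bounds check followed by an add. The sketch below assumes one in-memory segment table per process; the structure and function names are made up for illustration.

```c
/* Sketch of segment-table translation for a <segment, offset> logical
 * address.  Names and layout are illustrative, not a real architecture. */
#include <stdint.h>

struct segment_entry {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t limit;   /* length of the segment in bytes */
};

/* Returns the physical address, or -1 to stand in for an
 * addressing-error trap when the offset exceeds the limit. */
int64_t seg_translate(const struct segment_entry *seg_table,
                      uint32_t seg_num, uint32_t offset)
{
    const struct segment_entry *e = &seg_table[seg_num];
    if (offset >= e->limit)
        return -1;                    /* trap: offset out of range */
    return (int64_t)e->base + offset; /* physical = base + offset */
}
```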

Address Translation / Paging Example / Free Frames
[Figure slides: paging address-translation hardware, a worked paging example, and the free-frame list before and after allocation.]

Paging Fragmentation
- If we have 4K pages and a process requires 15K of memory, how many pages will we need? (Worked out after this slide set.)
- Paging suffers from internal fragmentation: the last page is likely not completely used.
- On average, 50% of a page is lost per process to internal fragmentation.
- How does this impact our choice of page size?

Implementation of Page Table
- The page table is kept in main memory.
- The page-table base register (PTBR) points to the page table.

Memory Protection
- Memory protection is implemented by associating a protection bit with each frame.
- A valid-invalid bit is attached to each entry in the page table:
  - valid indicates that the associated page is in the process's logical address space, and is thus a legal page.
  - invalid indicates that the page is not in the process's logical address space.
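For the 4K-page question above: a 15K process needs ceil(15K / 4K) = 4 pages, so 16K of frames are allocated and about 1K of the last page is lost to internal fragmentation. The C sketch below shows one-level translation with a valid bit for 4 KB pages; the PTE layout here is an assumption for illustration, not the real x86 format.

```c
/* Sketch of one-level page-table translation with a valid bit,
 * for 4 KB pages (12-bit offset, 20-bit page number). */
#include <stdint.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1u << PAGE_SHIFT)     /* 4096 bytes */
#define OFFSET_MASK (PAGE_SIZE - 1)

struct pte { uint32_t frame : 20; uint32_t valid : 1; };

/* Returns the physical address, or -1 to stand in for a trap
 * when the page is not a legal part of the address space. */
int64_t page_translate(const struct pte *page_table, uint32_t logical)
{
    uint32_t p = logical >> PAGE_SHIFT;    /* page number */
    uint32_t d = logical &  OFFSET_MASK;   /* page offset */
    if (!page_table[p].valid)
        return -1;                         /* illegal page: trap */
    return ((int64_t)page_table[p].frame << PAGE_SHIFT) | d;
}
```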

Valid/Invalid Bit Example
[Figure slide: a page table whose entries carry valid/invalid bits.]

Problem: Page Table Size
- One page table can get very big: 2^20 entries, and for most programs most entries are empty.
- Simple solution: a page-table length register (PTLR) indicates the size of the page table.
  - But this is not as good for sparse address spaces.

Hierarchical Page Tables
- Break up the logical address space into multiple page tables.
- Simplest case: a two-level page table. Used on the Pentium.
- The outer table is called the page directory, which points to the individual page tables.
- Each page table is itself a page.

Two-Level Paging Example
- A logical address (on a 32-bit machine with 4K page size) is divided into:
  - a page number consisting of 20 bits.
  - a page offset consisting of 12 bits.
- Since the page table is paged, the page number is further divided into:
  - a 10-bit page number.
  - a 10-bit page offset.
- Thus, a logical address is laid out as:
    |   page number    | page offset |
    | p1 (10) | p2 (10) |   d (12)   |
  where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.

Address-Translation Scheme
[Figure slide: address-translation scheme for a two-level 32-bit paging architecture.]
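A two-level walk simply applies the 10/10/12 split and follows two levels of table. The sketch below uses invented pde/pte structures with in-memory pointers for the directory entries, so it is a software model of the idea, not the Pentium's actual entry format.

```c
/* Sketch of two-level (page directory + page table) translation
 * for the 10/10/12 split described above. */
#include <stdint.h>

#define PD_SHIFT 22
#define PT_SHIFT 12
#define TEN_BITS 0x3FFu
#define OFF_MASK 0xFFFu

struct pte { uint32_t frame : 20; uint32_t valid : 1; };
struct pde { struct pte *table;  uint32_t valid : 1; };  /* points to one page table */

/* Returns the physical address, or -1 for a missing mapping. */
int64_t two_level_translate(const struct pde *page_dir, uint32_t logical)
{
    uint32_t p1 = (logical >> PD_SHIFT) & TEN_BITS;  /* index into directory */
    uint32_t p2 = (logical >> PT_SHIFT) & TEN_BITS;  /* index into page table */
    uint32_t d  =  logical & OFF_MASK;               /* offset within page */

    if (!page_dir[p1].valid || !page_dir[p1].table[p2].valid)
        return -1;                                   /* trap / page fault */
    return ((int64_t)page_dir[p1].table[p2].frame << PT_SHIFT) | d;
}
```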

Problem: Still Too Big?
- What about 64-bit address spaces?
- Two-level page table: the page number is 52 bits, i.e. 2^52 entries in the page directory!
- More levels of hierarchy? Further subdivide the page number.
  - 32-bit SPARC: three-level scheme.
  - 32-bit 68030: four-level scheme.
- Or try something different.

Hashed Page Tables
- Common in address spaces larger than 32 bits.
- The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location.
- Virtual page numbers are compared in this chain while searching for a match. If a match is found, the corresponding physical frame is extracted (see the lookup sketch after this slide set).
[Figure slide: hashed page table.]

Inverted Page Table
- One entry for each real page (frame) of memory.
- Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns the page.
- Decreases the memory needed to store the page table, but increases per-reference search time.
  - Memory used is directly proportional to the size of the physical address space.
- Use a hash table to limit the search to one, or at most a few, page-table entries.
[Figure slide: inverted page table architecture.]

Problem: Address Lookup
- Every data/instruction access requires multiple memory accesses: at the least, one for the page table and one for the data/instruction itself.
- The two-memory-access problem can be solved by a special fast-lookup hardware cache called associative memory, or a translation look-aside buffer (TLB).
  - Typically 16-64 entries.
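A hashed page table is essentially an ordinary chained hash table keyed by virtual page number. The sketch below uses a fixed bucket count and placeholder field names; a real implementation would also record the owning process and protection bits.

```c
/* Sketch of a hashed page table: each bucket holds a chain of
 * (virtual page, frame) entries that hash to the same slot. */
#include <stdint.h>
#include <stddef.h>

#define NBUCKETS 1024u

struct hpt_entry {
    uint64_t vpn;               /* virtual page number */
    uint64_t frame;             /* physical frame number */
    struct hpt_entry *next;     /* chain of entries with the same hash */
};

static struct hpt_entry *buckets[NBUCKETS];

/* Returns the frame for vpn, or -1 if the page is not mapped. */
int64_t hpt_lookup(uint64_t vpn)
{
    for (struct hpt_entry *e = buckets[vpn % NBUCKETS]; e; e = e->next)
        if (e->vpn == vpn)
            return (int64_t)e->frame;
    return -1;                  /* miss: the OS must handle the fault */
}
```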

Paging Hardware With TLB
[Figure slide: paging hardware with a TLB consulted before the page table.]

Example: i386 TLB
[Table: each TLB entry holds a valid bit, a 20-bit virtual page number, and a 20-bit physical page number.]

Effective Access Time
- Associative (TLB) lookup = ε time units; memory access = 1 time unit.
- Hit ratio = α: the fraction of time that a page number is found in the TLB.
- Effective Access Time (EAT):
    EAT = (1 + ε)α + (2 + ε)(1 - α) = 2 + ε - α time units
  (a worked numeric example follows this slide set).

Super Pages
- TLB entries tend to be limited in number, and each can refer to only one page.
- Idea: create bigger pages, e.g. 4MB instead of 4KB, so that one TLB entry covers more memory.
- What's the tradeoff?

Shared Pages
- Shared code: one copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems).
  - May require a contiguous range of logical addresses.
- Private data: each process keeps a separate copy of its own data.
[Figure slide: shared pages example.]
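Plugging illustrative numbers into the EAT formula (these values are examples, not measurements):

```c
/* Worked example of the effective-access-time formula above.
 * eps (TLB lookup cost) and alpha (hit ratio) are illustrative. */
#include <stdio.h>

int main(void)
{
    double eps   = 0.2;    /* TLB lookup = 0.2 memory-access units */
    double alpha = 0.98;   /* 98% of references hit the TLB */

    /* hit: one memory access plus the lookup; miss: two accesses plus the lookup */
    double eat = (1 + eps) * alpha + (2 + eps) * (1 - alpha);
    printf("EAT = %.3f (same as 2 + eps - alpha = %.3f)\n",
           eat, 2 + eps - alpha);
    return 0;
}
```

With a 98% hit ratio and a TLB lookup costing 0.2 of a memory access, the effective access time works out to 1.22 memory-access units, only 22% slower than an unmapped memory reference.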

Sharing Writable Pages
- When writes occur, decide whether processes should continue to share the data.
- Operating systems often implement copy-on-write: pages are shared until a process carries out a write (sketched after this slide set).
  - When a shared page is written, a new page frame is allocated.
  - The writing process owns the modified page; all other sharing processes keep the original.
- Processes use semaphores or other means to coordinate access.

Sharing and Inverted PTs
- With the scheme just presented, each process's page table maps a different logical page to the same physical frame.
- When there is only one page table, indexed in part by process IDs, how might this work?

Paging and Segmentation
- The Pentium and other architectures combine these.
- On the Pentium:
  - Logical addresses are mapped via segmentation to linear addresses.
  - Linear addresses are mapped via paging to physical addresses.
  - By default, only segmentation is used.

P&S: Protection Options
- How can we provide address space protection?
  - Using segmentation: protection is enforced at the logical address level.
  - Using paging: make segmentation a no-op and provide protection at the linear address level.
- What are the tradeoffs?
- Project 4 does the latter; more details are in the recitation slides.

Swapping
- Main memory is big, but what if we run out?
- A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
- The backing store (disk) is bigger than main memory, but disk is slower than main memory.
- The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
- Swapping is used on UNIX, Linux, and Windows.
[Figure slide: schematic view of swapping.]
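The copy-on-write behavior described above can be sketched as a write-fault handler. Everything here (the page structure, the reference count, the cow_fault helper) is a hypothetical simplification of what a real kernel does.

```c
/* Sketch of the copy-on-write fault path: pages are shared read-only
 * until one process writes, at which point it gets its own copy. */
#include <string.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct page {
    int refcount;                  /* how many processes map this frame */
    unsigned char data[PAGE_SIZE];
};

/* Called when a process writes to a page it maps read-only for COW.
 * Returns the page the faulting process should now map writable. */
struct page *cow_fault(struct page *shared)
{
    if (shared->refcount == 1)
        return shared;             /* sole owner: just make it writable */

    struct page *copy = malloc(sizeof *copy);
    if (!copy)
        return NULL;               /* out of memory: caller must handle */
    memcpy(copy->data, shared->data, PAGE_SIZE);
    copy->refcount = 1;
    shared->refcount--;            /* the other sharers keep the original */
    return copy;
}
```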

Virtual Memory
- We can integrate swapping and paging; this is called virtual memory.
- Pages can be swapped to and from disk, rather than entire processes.
- Presents the illusion of a larger physical memory.
- Virtual addressing simplifies the model:
  - Swapping: pages can be loaded into different addresses in physical memory.
  - Addressing: different processes have their own address spaces, starting at 0, separate from other processes.