Memory Management and Paging. CSCI 3753 Operating Systems Spring 2005 Prof. Rick Han

Announcements: PA #2 is due Friday, March 18, at 11:55 pm - note the extension of a day. Read chapters 11 and 12.

From last time... the memory hierarchy; the memory management unit (MMU), which relocates logical addresses to physical addresses using a base register and checks for out-of-bounds memory references using a limit register; address binding at run time; swapping to/from the backing store (disk); fragmentation of main memory; and the first-fit, best-fit, and worst-fit allocation strategies.

One of the problems with fragmentation is finding a sufficiently large contiguous piece of unallocated memory to fit a process into; a heavyweight solution is to compact memory. Another solution to external fragmentation is to divide the logical address space into fixed-size pages; each page is mapped to a similarly sized physical frame in main memory (thus main memory is divided into fixed-size frames). The frames don't have to be contiguous in memory, so each process can now be scattered throughout memory and viewed as a collection of frames that are not necessarily contiguous. This solves the external fragmentation problem: when a new process needs to be allocated and there are enough free frames, then any set of free frames in memory will do. We need a page table to keep track of where each logical page of a process is located in main memory, i.e. to keep track of the mapping of each logical page to a physical memory frame.

Paging: a page table for each process is maintained by the OS. Given a logical address, the MMU determines which logical page it belongs to, then consults the page table to find the physical frame in RAM to access. [Figure: a 5-page logical address space mapped into an 8-frame RAM via a page table: logical page 0 -> frame 1, page 1 -> frame 4, page 2 -> frame 3, page 3 -> frame 7, page 4 -> frame 0; the remaining unallocated frames could be allocated to a new process P2.]
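
To make the lookup concrete, here is a minimal C sketch (my own illustration, not from the slides) that stores the figure's page table as an array indexed by logical page number:

    #include <stdio.h>

    #define NUM_PAGES 5

    /* Example page table from the figure: logical page -> physical frame */
    static const int page_table[NUM_PAGES] = { 1, 4, 3, 7, 0 };

    /* Look up the frame that holds a given logical page (no offset handling yet). */
    int lookup_frame(int logical_page) {
        if (logical_page < 0 || logical_page >= NUM_PAGES)
            return -1;                  /* out-of-range page: would trap to the OS */
        return page_table[logical_page];
    }

    int main(void) {
        for (int p = 0; p < NUM_PAGES; p++)
            printf("logical page %d -> physical frame %d\n", p, lookup_frame(p));
        return 0;
    }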

The user's view of memory is still one contiguous block of logical address space; the MMU performs run-time mapping of each logical address to a physical address using the page table. Typical page size is 4-8 KB. Example: if a page table allows 32-bit entries and the page size is 4 KB, then we can address 2^32 x 2^12 = 2^44 bytes = 16 TB of memory. There is no external fragmentation, but we get some internal fragmentation. Example: if my process is 4001 bytes and each page is 4 KB (treating 4 KB as roughly 4000 bytes for the arithmetic), then I have to allocate two pages = 8 KB, so about 3999 bytes of the 2nd page are wasted due to fragmentation internal to a page. The OS also has to maintain a frame table that keeps track of which frames are free.
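
A small C sketch (my own illustration) of the internal-fragmentation arithmetic: given a process size in bytes, how many 4 KB pages are needed and how many bytes of the last page are wasted:

    #include <stdio.h>

    #define PAGE_SIZE 4096ul   /* 4 KB pages */

    int main(void) {
        unsigned long proc_bytes = 4100;   /* hypothetical process size in bytes */

        /* Round up to a whole number of pages. */
        unsigned long pages  = (proc_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
        unsigned long wasted = pages * PAGE_SIZE - proc_bytes;  /* internal fragmentation */

        printf("%lu bytes -> %lu pages, %lu bytes wasted in last page\n",
               proc_bytes, pages, wasted);
        return 0;
    }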

Conceptually, every logical address can be divided into two parts: the most significant bits = page number p, used to index into the page table to retrieve the corresponding physical frame f; the least significant bits = page offset d. [Figure: the CPU issues a logical address (p, d); p indexes the page table to get frame f, and the physical address (f, d) is sent to RAM.]
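
The p/d split is just bit arithmetic. A minimal C sketch, assuming 32-bit logical addresses, 4 KB pages, and a hypothetical in-memory page_table[] array mapping page numbers to frame numbers:

    #include <stdint.h>

    #define PAGE_SHIFT  12                        /* 4 KB pages: low 12 bits are the offset d */
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)
    #define NUM_PAGES   (1u << (32 - PAGE_SHIFT)) /* 2^20 pages in a 32-bit address space */

    extern uint32_t page_table[NUM_PAGES];        /* page_table[p] holds frame number f */

    uint32_t translate(uint32_t logical_addr) {
        uint32_t p = logical_addr >> PAGE_SHIFT;  /* page number (high bits) */
        uint32_t d = logical_addr & OFFSET_MASK;  /* page offset (low bits)  */
        uint32_t f = page_table[p];               /* consult the page table  */
        return (f << PAGE_SHIFT) | d;             /* physical address = (f, d) */
    }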

Implementing a page table.
Option #1: use a dedicated bank of hardware registers to store the page table. Fast per-instruction translation; slow per context switch, since the entire page table has to be reloaded; limited by being too small, since some page tables can be large, e.g. 1 million entries.
Option #2: store the page table in main memory and just keep a pointer to it in a special CPU register called the Page Table Base Register (PTBR). Can accommodate fairly large page tables; fast context switch, since only the PTBR needs to be reloaded; slow per-instruction translation, because each instruction fetch requires two memory accesses: (1) find the page table in memory and index to the appropriate entry to retrieve the physical frame number f, then (2) retrieve the instruction from physical memory frame f.

Solution to option #2's slow two-step memory access: cache a subset of page table mappings/entries in a small set of CPU buffers called Translation Look-aside Buffers (TLBs). Several TLB caching policies are possible: cache the most popular or most frequently referenced pages in the TLB, or cache the most recently used pages.

The MMU in the CPU first looks in the TLB for a match on the given logical address. If a match is found, it can immediately access main memory with physical frame f (plus offset d); this is called a TLB hit. The TLB, as implemented in hardware, does a fast parallel match of the input page number against all stored entries in the cache - about 10% overhead in speed. If no match is found, then: 1. go through the regular two-step lookup procedure: go to main memory to find the page table and index into it to retrieve frame number f, then retrieve what's stored at address <f,d> in physical memory; 2. update the TLB cache with the new entry from the page table - if the cache is full, apply a cache replacement strategy, e.g. Least Recently Used (LRU) - we'll see this later. This is called a TLB miss. The goal is to maximize TLB hits and minimize TLB misses.

[Figure: Paging with a TLB and PTBR. The CPU issues logical address (p, d); (1) the TLB is searched for logical page p - on a TLB hit the physical frame f is returned immediately; (2a) on a TLB miss the page table in RAM, located via the PTBR, is indexed by p to get f, and (2b) the TLB is updated with the new (p, f) entry; the physical address (f, d) then selects the target location in RAM.]
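
A minimal C sketch of the hit/miss logic above (my own illustration; for simplicity the TLB is a small array searched linearly with round-robin replacement, whereas real hardware matches all entries in parallel and might use LRU):

    #include <stdint.h>

    #define PAGE_SHIFT  12
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)
    #define TLB_SIZE    8

    struct tlb_entry { uint32_t page; uint32_t frame; int valid; };

    static struct tlb_entry tlb[TLB_SIZE];
    static unsigned next_victim;                 /* round-robin replacement index */

    extern uint32_t page_table[];                /* in-memory page table, reached via the PTBR */

    uint32_t translate(uint32_t logical_addr) {
        uint32_t p = logical_addr >> PAGE_SHIFT;
        uint32_t d = logical_addr & OFFSET_MASK;

        /* 1. Search the TLB for page p (hardware does this match in parallel). */
        for (int i = 0; i < TLB_SIZE; i++)
            if (tlb[i].valid && tlb[i].page == p)
                return (tlb[i].frame << PAGE_SHIFT) | d;      /* TLB hit */

        /* 2a. TLB miss: consult the in-memory page table to get frame f. */
        uint32_t f = page_table[p];

        /* 2b. Update the TLB, evicting an old entry if necessary. */
        tlb[next_victim] = (struct tlb_entry){ .page = p, .frame = f, .valid = 1 };
        next_victim = (next_victim + 1) % TLB_SIZE;

        return (f << PAGE_SHIFT) | d;
    }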

Shared pages: if code is thread-safe or reentrant, then multiple processes can share and execute the same code. Example: multiple users each running the same editor (vi/emacs) on the same computer; in this case their page tables can point to the same memory frames. Example: suppose a shared editor consists of two pages of code, edit1 and edit2, and each process has its own data page. [Figure: P1's page table maps edit1 -> frame 3, edit2 -> frame 5, data1 -> frame 0; P2's page table maps edit1 -> frame 3, edit2 -> frame 5, data2 -> frame 1. Both tables point to the same code frames (3 and 5) in RAM, while each process has its own private data frame.]
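
A tiny C sketch of this sharing, using the frame numbers from the figure (illustration only, not an OS implementation):

    #include <stdio.h>

    /* Page tables from the figure: logical page -> physical frame.
       Pages 0 and 1 are the shared editor code (edit1, edit2); page 2 is private data. */
    static const int p1_table[3] = { 3, 5, 0 };   /* edit1, edit2, data1 */
    static const int p2_table[3] = { 3, 5, 1 };   /* edit1, edit2, data2 */

    int main(void) {
        /* The code pages map to identical frames, so only one copy exists in RAM. */
        for (int page = 0; page < 2; page++)
            printf("code page %d: P1 -> frame %d, P2 -> frame %d (shared)\n",
                   page, p1_table[page], p2_table[page]);
        printf("data page: P1 -> frame %d, P2 -> frame %d (private)\n",
               p1_table[2], p2_table[2]);
        return 0;
    }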

Each entry in the page table can actually store several extra bits of information besides the physical frame number f:
R/W or read-only bit - for memory protection; writing to a read-only page causes a fault and a trap to the OS.
Valid/invalid bit - for memory protection; accessing an invalid page causes a page fault. Is the logical page in the logical address space? If there is virtual memory (we'll see this later), is the page in memory or not?
Dirty bit - has the page been modified? Used for page replacement (we'll see this later in the virtual memory discussion).

Example page table (one row per logical page):
  page | phys frame # | R/W or read-only | valid/invalid | dirty/modified
   0   |      2       |        1         |       0       |       1
   1   |      8       |        0         |       1       |       0
   2   |      4       |        0         |       0       |       0
   3   |      7       |        1         |       1       |       0
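
One conventional way to pack such an entry into a single word, as a hedged C sketch (the exact field widths are my own choice, not specified on the slide):

    /* A 32-bit page table entry: frame number plus protection/status bits.
       Assumes 4 KB pages, so a 20-bit frame number covers a 32-bit physical space. */
    struct pte {
        unsigned int frame    : 20;  /* physical frame number f                 */
        unsigned int writable : 1;   /* R/W = 1, read-only = 0                  */
        unsigned int valid    : 1;   /* page is legal and (with VM) in memory   */
        unsigned int dirty    : 1;   /* page has been modified                  */
        unsigned int unused   : 9;   /* room for other bits (referenced, etc.)  */
    };

    /* Example check before a store: a write to a read-only or invalid page
       would cause a fault and a trap to the OS. */
    static inline int write_allowed(struct pte e) {
        return e.valid && e.writable;
    }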

Problem with page tables: they can get very large. Example: a 32-bit address space with 4 KB pages (2^12 bytes) implies that there are 2^32 / 2^12 = 2^20, about 1 million, entries in the page table per process; it's hard to find a contiguous allocation of at least 1 MB for each page table. Solution: page the page table! This is an example of hierarchical paging. Subdividing a process into pages helped to fit a process into memory by allowing it to be scattered in non-contiguous pieces of memory, thereby solving the external fragmentation problem; so we reapply that principle here to fit the page table into memory, allowing the page table itself to be scattered non-contiguously in memory. This is an example of 2-level paging. In general, we can apply N-level paging.

Hierarchical (2-level) paging: the outer page table tells where each part of the page table is stored, i.e. it indexes into the page table. [Figure: an outer page table whose entries point to pieces of the page table, which in turn point to frames in RAM; a dotted arrow means "go to memory and retrieve the frame pointed to by the table entry".]

Hierarchical (2-level) paging divides the logical address into 3 parts: p1 (index into the outer page table), p2 (index into the page table), and d (page offset). [Figure: the CPU issues logical address (p1, p2, d); p1 indexes the outer page table to find the frame f1 holding the relevant piece of the page table; p2 indexes that piece to get frame f2; the physical address (f2, d) is then accessed in RAM.]
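
A minimal C sketch of the two-level translation, assuming a 32-bit address split as 10 bits of p1, 10 bits of p2, and a 12-bit offset d (a common choice, though the slide does not fix the widths):

    #include <stdint.h>

    #define OFFSET_BITS 12                       /* 4 KB pages */
    #define INDEX_BITS  10                       /* 10 bits for p1 and 10 for p2 */
    #define INDEX_MASK  ((1u << INDEX_BITS) - 1)
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    extern uint32_t *outer_page_table[1 << INDEX_BITS];  /* p1 -> piece of the page table */

    uint32_t translate2(uint32_t logical_addr) {
        uint32_t p1 = (logical_addr >> (OFFSET_BITS + INDEX_BITS)) & INDEX_MASK;
        uint32_t p2 = (logical_addr >> OFFSET_BITS) & INDEX_MASK;
        uint32_t d  = logical_addr & OFFSET_MASK;

        uint32_t *page_table_piece = outer_page_table[p1];  /* first memory access  */
        uint32_t f2 = page_table_piece[p2];                  /* second memory access */
        return (f2 << OFFSET_BITS) | d;                      /* physical address (f2, d) */
    }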