& Data Processing 2. Exercise 3: Memory Management. Dipl.-Ing. Bogdan Marin. Universität Duisburg-Essen




& Data Processing 2. Exercise 3: Memory Management. Dipl.-Ing. Bogdan Marin. Fakultät für Ingenieurwissenschaften, Abteilung Elektro- und Informationstechnik (Technische Informatik), Universität Duisburg-Essen

Objectives: Memory Management, Virtual Memory Concepts, Paging and Segmentation.

Memory Management Data Structures. How does the OS keep track of unused and used blocks?

Bit maps: Memory is divided into small allocation units (say 4 bytes), and a special bit map holds one bit per unit: 0 if the unit is free, 1 if the unit is used. With 1 bit per 4 bytes (32 bits), the bit map occupies (wastes?) 1/33 of total memory. The bit map is a simple data structure to maintain but expensive to use for memory allocation: e.g. if 4 free bytes are needed, the OS must search the map for a run of consecutive 0s!

Linked lists: A dynamic data structure with an entry for each memory block: P for a used partition, H for an unused block, plus the start address of the partition, the length of the partition, and a pointer to the next entry. Adjacent H entries must be detected and merged into a single H entry, which is expensive. Best-fit placement requires examining the complete list, whereas first-fit only requires finding the first H entry which is big enough, so first-fit is much quicker. An alternative is to keep the list sorted in increasing/decreasing partition length, which is also expensive.
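The run-of-zeros search can be sketched in a few lines (a minimal Python sketch, not from the original slides; `find_free_run` and `allocate` are illustrative names):

```python
def find_free_run(bitmap, k):
    """Index of the first run of k consecutive free (0) units, or -1."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0          # run broken by a used unit
    return -1

def allocate(bitmap, k):
    """Mark a run of k units as used (1); return its start index or -1."""
    start = find_free_run(bitmap, k)
    if start >= 0:
        for i in range(start, start + k):
            bitmap[i] = 1
    return start
```

Note that the search is linear in the size of the map, which is exactly the cost the slide complains about.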

Memory Management Data Structures (figure): a. memory structure, b. bit map, c. linked list. Figure: linked-list merging combinations when a partition is freed: (a) requires changing entry 2 from P to H; (b) requires merging entries 2 and 3 into a single H (list is one entry shorter); (c) requires merging entries 1 and 2 into a single H (list is one entry shorter); (d) requires merging entries 1, 2 and 3 into a single H (list is two entries shorter).

Memory Allocation. The following methods apply for variable partitioning of memory space. Best fit: allocate the smallest hole that fits. Worst fit: allocate the largest hole. First fit: allocate the first hole that fits, searching from the beginning. Next fit: allocate the first hole that fits, searching from where the last search stopped.

Memory Allocation: 1. Consider a swapping system in which memory consists of the following hole sizes in memory order: H0 H1 H2 H3 H4 H5 H6 H7 = 10KB 4KB 20KB 18KB 7KB 9KB 12KB 15KB. Which hole is taken for successive segment requests of a) 12KB b) 10KB c) 9KB for first fit? Repeat the questions for best fit, worst fit and next fit. First fit (first hole that fits): a) H2 b) H0 c) H3. Best fit (smallest hole that fits): a) H6 b) H0 c) H5. Worst fit (largest hole): a) H2 b) H3 c) H7. Next fit (first hole that fits since last operation): a) H2 b) H3 c) H5.
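The answers above can be checked with a small simulation (a Python sketch, not part of the original exercise; each request carves its segment out of the chosen hole, shrinking it for subsequent requests):

```python
def allocate_sequence(strategy, hole_sizes, requests):
    """Simulate successive segment requests; return the hole indices taken."""
    holes = list(hole_sizes)
    pos = 0                      # roving pointer for next fit
    taken = []
    for req in requests:
        candidates = [i for i, h in enumerate(holes) if h >= req]
        if strategy == "first":
            i = candidates[0]
        elif strategy == "best":
            i = min(candidates, key=lambda i: holes[i])
        elif strategy == "worst":
            i = max(candidates, key=lambda i: holes[i])
        elif strategy == "next":
            n = len(holes)
            i = next((pos + s) % n for s in range(n)
                     if holes[(pos + s) % n] >= req)
            pos = i              # remember where this search ended
        holes[i] -= req          # the request carves a piece out of the hole
        taken.append(i)
    return taken

holes = [10, 4, 20, 18, 7, 9, 12, 15]   # H0..H7 in KB
```

Running `allocate_sequence` for each strategy with requests `[12, 10, 9]` reproduces the four answer rows.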

Memory Allocation: 2. Explain how the following algorithms work in allocating memory: 1. First fit. 2. Best fit. 3. Worst fit. Given free memory blocks of 100K, 500K, 200K, 300K and 600K (in this order on a linked list), how would each of the above algorithms place requests for 212K, 417K, 112K and 426K (requested in that order)? Which algorithm makes best use of memory? Conclusion: best fit uses memory best; it is the only one of the three that can place all four requests, as first fit and worst fit both leave no block large enough for the final 426K request.
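The conclusion can be verified directly (sketch, assuming a request that fits nowhere is simply recorded as unplaced):

```python
def place(strategy, blocks, requests):
    """Return per-request block index, or None where no block fits."""
    free = list(blocks)
    placed = []
    for req in requests:
        cands = [i for i, b in enumerate(free) if b >= req]
        if not cands:
            placed.append(None)       # request cannot be satisfied
            continue
        if strategy == "first":
            i = cands[0]
        elif strategy == "best":
            i = min(cands, key=lambda i: free[i])
        else:                         # worst fit
            i = max(cands, key=lambda i: free[i])
        free[i] -= req
        placed.append(i)
    return placed

blocks = [100, 500, 200, 300, 600]    # K, in list order
reqs = [212, 417, 112, 426]           # K, in request order
```

Best fit places the requests in the 300K, 500K, 200K and 600K blocks respectively; first fit and worst fit each fail on the 426K request.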

Buddy System (1). Memory blocks are available in sizes of 2^k where S <= k <= L, with 2^S the smallest and 2^L the largest size of block that can be allocated. Exercise: given a memory consisting of a single continuous piece of 64 pages, allocate space for an 8-page request. Allocate another 8-page request. Allocate a 4-page request. Release the allocated space for the second 8-page request. Release the allocated space for the first 8-page request. Initial state: 64.

Buddy System (2). 64: too big, divide the 64-page chunk in half. 32 | 32: still too big, divide a 32-page chunk in half. 16 | 16 | 32: still too big, divide a 16-page chunk in half. 8 | 8 | 16 | 32: just right, the 8-page request fits in.

Buddy System (3). Request for another 8-page chunk: 8 | 8 | 16 | 32: just right, the second 8-page request fits into the free 8-page buddy. Qn: allocate space for a 4-page request. The 16-page chunk is too big, halve it: 8 | 8 | 8 | 8 | 32. An 8-page chunk is still too big, halve it again: 8 | 8 | 4 | 4 | 8 | 32: just right, the 4-page request fits in.

Buddy System (4). Release the space for the second 8-page request: 8 | 8 | 4 | 4 | 8 | 32, the second 8-page chunk is released (its buddy still holds the first request, so no merge happens yet). Qn: release the space for the first 8-page request: 16 | 4 | 4 | 8 | 32. After the first 8-page chunk is also released, the two free 8-page buddies merge to give a 16-page chunk back. Only buddies can merge!
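The whole split-on-allocate / merge-on-free dance above can be sketched as a tiny buddy allocator (illustrative Python, not from the slides; sizes are in pages and must be powers of two; a free block's buddy address differs from it in exactly one bit):

```python
class Buddy:
    def __init__(self, total):           # total = 2**L pages
        self.total = total
        self.free = {total: [0]}         # block size -> free block start addresses

    def alloc(self, size):
        s = size
        while s <= self.total and not self.free.get(s):
            s *= 2                       # find the smallest splittable block
        if s > self.total:
            return None                  # no block large enough
        addr = self.free[s].pop()
        while s > size:                  # split down, keeping each upper buddy free
            s //= 2
            self.free.setdefault(s, []).append(addr + s)
        return addr

    def free_block(self, addr, size):
        while size < self.total:
            buddy = addr ^ size          # buddy differs in exactly one bit
            peers = self.free.get(size, [])
            if buddy not in peers:
                break                    # buddy busy: only buddies can merge
            peers.remove(buddy)
            addr = min(addr, buddy)      # merged block starts at the lower buddy
            size *= 2
        self.free.setdefault(size, []).append(addr)
```

Replaying the exercise: the two 8-page requests land at pages 0 and 8, the 4-page request at page 16, and after both releases a 16-page block starting at page 0 reappears, just as in the slides.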

Virtual Memory (VM) concepts: Paging. Physical Address (PA): the address seen by the physical memory; the address range is limited by the amount of physical memory that is available (4MB RAM -> 22-bit physical address). The address range is divided into small uniform chunks called page frames, typically 4K to 32K. Virtual Address (VA): the address used by the process; the address range is limited by the CPU address range (32-bit CPU -> 32-bit virtual address -> 4 GB of virtual memory). The address range is divided into small uniform chunks called pages; the page size must be the same as the page frame size. VA to PA mapping: process memory consists of a collection of contiguous pages which map to possibly non-contiguous page frames; every VA reference made by a process must be mapped to a PA reference. Paging, not swapping: instead of swapping the whole process to disk, individual process pages are paged to disk.

VA to PA mapping: Page Tables. VA = [p d] and PA = [f d]. PA = f-bit frame number + d-bit offset -> a 2^(f+d)-byte PA range. VA = p-bit page table index + d-bit page offset -> a 2^(p+d)-byte VA range. Usually p >> f since the VA range >> the PA range. The p bits are used to retrieve the corresponding f-bit frame number from a per-process page table (one page table per process): address of page table entry = page table pointer + p; there are 2^p entries in the page table. A typical page table entry is [P M R rwx-access f-bit frame number], where P = 1-bit present bit (1 if the page is in memory, 0 if not), M = 1-bit modified bit (1 if the page has been modified, i.e. is 'dirty', 0 if not), and R = 1-bit referenced bit (1 if the page has been referenced recently, 0 if not). If P = 0 the page is not in memory but on the swap disk -> page fault -> the OS enters the page fault handling routine: is there a free frame? If not, page out a page according to the page replacement strategy (the R and M bits are used to determine the best page to evict; R=0 and M=0 is a good candidate). Then read the required page in from the swap disk (page in), modify the page table accordingly, and restart the instruction.
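The lookup, the present-bit check, and the eviction preference encoded by R and M can be sketched as follows (Python sketch under the slide's entry format, here written as a tuple `(P, M, R, frame)`; the 4K page size is an assumption):

```python
PAGE_BITS = 12                       # assumed 4K pages

class PageFault(Exception):
    """Raised when P = 0; the OS would page in and restart the instruction."""
    pass

def translate(page_table, va):
    """One-level page table lookup; entry = (P, M, R, frame)."""
    p, d = va >> PAGE_BITS, va & ((1 << PAGE_BITS) - 1)
    present, modified, referenced, frame = page_table[p]
    if not present:
        raise PageFault(p)
    return (frame << PAGE_BITS) | d

def best_victim(page_table):
    """Prefer R=0, M=0 pages: not recently used and clean (no write-back)."""
    def klass(entry):
        present, m, r, _ = entry
        return (r << 1) | m          # class 0 = R0M0 ... class 3 = R1M1
    resident = [p for p, e in enumerate(page_table) if e[0]]
    return min(resident, key=lambda p: klass(page_table[p]))
```

The class ordering in `best_victim` ranks a clean, unreferenced page as the cheapest eviction, matching the slide's "R=0 and M=0 is a good candidate".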

VA to PA address translation (figure): address translation in a paging system.

VA to PA example (figure). VA = [page address | offset] = [p d] with p = 8 -> 2^8 = 256 virtual pages. Each page table entry carries P (present bit) and M (modification bit). If P=1: hit -> read. If P=0: take an available page frame with M=0 and overwrite it with the reloaded page. If P=0 and all available page frames show M=1: 1) write the content of a victim page in main memory back to secondary storage, 2) reload the required page from secondary memory into the freed frame.

VA to PA (1/2). A computer with a 16-bit address and a 4KB page size has 64K of VA and 32K of PA. The page table for this computer is given in the following figure (16 entries, indices 15 down to 0, each holding a frame number and a present bit). Assuming that at a given time the running process issues the instruction MOV REG, 8196, what will be the outcome of the Memory Management Unit mapping?

VA to PA (2/2). VA = [p d] and PA = [f d]. 4K pages -> d = 12 (since 2^12 = 4096). 64K of VA -> p + d = 16 (since 2^16 = 65536) -> p = 4. 32K of PA -> f + d = 15 (since 2^15 = 32768) -> f = 3. Since 2^p = 2^4 = 16, a 16-entry page table is needed, where each entry = [3-bit frame address, P]. The process issues MOV REG, 8196: VA = 8196 = [p=2, d=4], so entry 2 of the page table is referred to. Entry 2 has P=1 and f=6, so PA = [f=6, d=4] = 6 x 4096 + 4 = 24580. The MMU mapping produces: MOV REG, 24580.
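The arithmetic of this example can be replayed directly (sketch; the only page-table fact used is the exercise's mapping of page 2 to frame 6):

```python
PAGE = 4096                # 2**12 bytes, from d = 12
frame_of = {2: 6}          # page -> frame, taken from the exercise's page table

va = 8196
p, d = divmod(va, PAGE)    # page number 2, offset 4
pa = frame_of[p] * PAGE + d    # 6 * 4096 + 4 = 24580
```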

Multi-level Page Tables. Situation: each process is assumed to have access to the complete VA range, so the page table is too big! E.g. for a 32-bit CPU and 4K pages, p = 20 -> a 2^20 = 1 million entry page table; at a minimum of 1 byte per entry that is at least 1 MB per page table per process! Multi-level page tables (2-level): VA = [p q d] and PA = [f d]. The p bits index the top-level page table and return a pointer to a 2nd-level page table; the q bits index the 2nd-level page table and return the f-bit page frame. There are 2^p entries in the top-level page table, each spanning 2^(q+d) bytes of VA memory; a top-level entry is [pointer to 2nd-level page table]. There are 2^q entries in each 2nd-level page table, each spanning 2^d bytes of VA memory (one page); a 2nd-level entry is [P M R f-bit frame number]. There can be up to 2^p 2nd-level page tables, BUT not all of these are needed. Advantage: only the lower-level page tables that are actually needed are kept. Disadvantage: n-level page tables require n memory references to perform a translation!
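A two-level lookup can be sketched with nested dictionaries, which also makes the space saving visible: second-level tables that were never needed simply do not exist (illustrative Python, not from the slides; the 10/10/12 bit split is an assumption matching the 32-bit, 4K-page example):

```python
P_BITS, Q_BITS, D_BITS = 10, 10, 12    # assumed 32-bit VA split

def translate2(top_level, va):
    """top_level: {p: {q: frame}}; a missing entry means a page fault."""
    d = va & ((1 << D_BITS) - 1)
    q = (va >> D_BITS) & ((1 << Q_BITS) - 1)
    p = va >> (D_BITS + Q_BITS)
    second = top_level.get(p)          # pointer to a 2nd-level page table
    if second is None or q not in second:
        raise KeyError("page fault")   # no mapping for this page yet
    return (second[q] << D_BITS) | d
```

Note the two dictionary lookups per translation: the n-memory-references cost of an n-level table, as the slide's disadvantage states.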

Multi-level Page Tables (figure): 2-level page table with p=10, q=10, d=12 (VA = [p q d]).

Multi-level Page Tables. A computer with a 32-bit address uses a 2-level page table. Virtual addresses are split into a 9-bit top-level page table field, an 11-bit second-level page table field, and an offset. How large are the pages and how many are there in the address space? VA = [p q d] and PA = [f d]. The 9-bit top-level field gives p = 9 and the 11-bit second-level field gives q = 11; a 32-bit address means p + q + d = 32, so d = 32 - 9 - 11 = 12. The page size is 2^d = 2^12 = 4K. The number of bits for the virtual page number is p + q = 20, so the number of virtual pages is 2^20.
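The answer is just bit accounting, worked here as a few lines of Python (sketch):

```python
p, q = 9, 11                   # field widths given in the exercise
d = 32 - p - q                 # the remaining bits form the offset
page_size = 2 ** d             # bytes per page
virtual_pages = 2 ** (p + q)   # pages in the 32-bit address space
```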

Pure Segmentation. SA = [s d] and PA = [base + d]. The s bits are used to reference an entry in the segment table; there are 2^s entries in the segment table. Segment table entry = [length base]: length (the limit) specifies the maximum size of the segment, and base is the start address of the segment in main memory. If d >= length then segmentation fault, else PA = base + d. (Figure: address translation in a segmentation system.)
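The limit check and base-plus-offset computation can be sketched as follows (illustrative Python; `seg_table` maps a segment number to its (length, base) entry):

```python
def seg_translate(seg_table, s, d):
    """Return the physical address for segment s, offset d."""
    length, base = seg_table[s]
    if d >= length:                        # offset beyond the segment limit
        raise MemoryError("segmentation fault")
    return base + d
```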

Segmentation with Paging: the best of both worlds. The segment table contains a pointer to the page table for the data within the segment. The length of a segment must be an integer number of pages. There is no need for a length specification per entry; the page fault mechanism is used instead.

Fragmentation Analysis (holes are represented by empty boxes in the figure). Definitions: m = bytes of total memory; s = average size of segments; ks (k > 0) = average size of holes; f = fraction of total memory unused = (amount of memory occupied by holes) / (total memory). Fifty percent rule (Knuth): with n = number of segments and h = number of holes, h = n/2 on average (if there are n segments we expect n/2 holes). Analysis of the free space: m = n*s + h*k*s = n*s + (n/2)*k*s, so f = (n/2)*k*s / m = k / (k + 2). Conclusions: if holes are half the size of segments (k = 0.5), f = 20%; if holes are a quarter of the size (k = 0.25), f = 11%. Smaller hole sizes mean more efficient memory utilisation.
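The two conclusions follow directly from the formula f = k/(k+2) derived above (a quick numeric check, not from the slides):

```python
def unused_fraction(k):
    """Fifty percent rule: n segments, n/2 holes of average size k*s.
    Total memory m = n*s + (n/2)*k*s, so the unused fraction is
    f = (n/2)*k*s / m = k / (k + 2)."""
    return k / (k + 2)
```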

Page Replacement Policy

Page Replacement Policy: LRU

Page Replacement Structures

Second-Chance FIFO Policy

Clock Policy

NRU and NFU Policy

Page Replacement Policy: A computer has four page frames. The time of loading, the time of last access, and the R and M bits for each page are as shown below (the times are in clock ticks):

Page  Loaded  Last Ref.  R  M
0     126     280        1  0
1     230     265        0  1
2     140     270        0  0
3     110     285        1  1

a) Which page will NRU replace? b) Which page will FIFO replace? c) Which page will LRU replace? d) Which page will Second Chance replace?
a) NRU replaces a page from the lowest class; page 2 has R=0, M=0 (class 0, the best page to replace), so page 2 will be replaced by NRU.
b) FIFO (First In First Out) replaces the page loaded earliest; in the above table page 3 (loaded at 110) will be replaced by FIFO.
c) LRU replaces the page with the oldest last-access record; in the above table page 1 (last referenced at 265) will be replaced by LRU.
d) Second Chance checks the tail of the queue (the oldest page): if R=1, it resets R and moves the page to the head of the queue; otherwise it removes the page. Page 3 has R=1, so the algorithm resets its R bit and moves it to the head of the list; page 0, the next tail, also has R=1 and is likewise reset and moved. Page 2 is then the tail of the list with R=0, therefore page 2 will be replaced by Second Chance.
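The four answers can be checked by coding each policy against the table above (Python sketch, not from the slides; each page's state is a tuple (loaded, last_ref, R, M)):

```python
pages = {0: (126, 280, 1, 0),
         1: (230, 265, 0, 1),
         2: (140, 270, 0, 0),
         3: (110, 285, 1, 1)}

def nru(pages):
    # lowest NRU class wins: class = 2*R + M
    return min(pages, key=lambda p: 2 * pages[p][2] + pages[p][3])

def fifo(pages):
    return min(pages, key=lambda p: pages[p][0])      # oldest load time

def lru(pages):
    return min(pages, key=lambda p: pages[p][1])      # oldest last access

def second_chance(pages):
    queue = sorted(pages, key=lambda p: pages[p][0])  # FIFO order by load time
    r = {p: pages[p][2] for p in pages}
    while True:
        p = queue.pop(0)
        if r[p] == 0:
            return p                                  # no second chance left
        r[p] = 0                                      # clear R, requeue at the head
        queue.append(p)
```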

Next Week: Cache. Check the webpage for exercises! http://www.fb9dv.uni-duisburg.de/ti/en/education/ss6/dv2/main.html EXAM: Tu. 25/7/2006, LD Sporthalle 3.3