Operating Systems: Memory Management


dr. Tomasz Jordan Kruk, T.Kruk@ia.pw.edu.pl
Institute of Control & Computation Engineering, Warsaw University of Technology

Memory Management Organization

Memory management depends on:
- the structure of the address field of instruction arguments,
- the position of the address field in the instruction word,
- the hardware capabilities for transforming the address field.
Depending on the length of the address field, the address space may be equal in size to, smaller than, or bigger than the range of operating memory addresses.

Memory Management System Functions

What programmers expect from memory: to be huge, to be fast, to be nonvolatile.
Memory hierarchy:
- small, fast, expensive memory (e.g. cache),
- memory of average size, speed and cost (e.g. operating memory),
- huge, slow and cheap memory (e.g. disk/tape memory).
Functions of the operating system from the point of view of memory management:
- address space management through address translation mechanisms,
- protection of memory contents,
- access control for shared areas,
- effective organization of operating memory management.
The amount of operating memory reserved for operating system code is usually fixed, but the allocation process for user processes is usually much more complicated. The type of memory management (MM) used at the operating system level is dictated by the computer architecture.

Memory Allocation Methods

Methods of memory allocation to user processes:
1. without division: all the memory is allocated to one process; multiprogramming may be implemented by swapping,
2. memory division: free operating memory is divided into blocks allocated to particular user processes,
3. usage of virtual memory: one or more virtual address spaces are allocated to user processes, with no direct, obvious mapping to the physical address space.

M.M. With Division

The goals of memory division: better utilization of the hardware elements of the system, mainly the processor and memory, and enabling fast context switching.
The following types of systems with memory division may be distinguished:
- systems with static division: either fixed partitions of different lengths, or blocks of fixed length called frames,
- systems with dynamic division: implemented with structures describing free blocks of memory of variable length, and with a swapping mechanism.

M.M. Without Division

Figure: three simple ways of organizing memory for an operating system with one user process (user program and operating system in RAM; operating system in ROM; device drivers in ROM with the operating system in RAM).

Static Division Based on Constant-Size Partitions

Figure: (a) fixed memory partitions with separate input queues for each partition, (b) fixed memory partitions with a single input queue.

Dynamic Division

For the implementation of dynamic memory allocation, the following hypothetical functions are used:
- allocate( size, address ): chooses from the list of free blocks one that covers the demand, returns the address of the chosen block and removes the allocated block from the free block list; e.g. malloc( size ), calloc( n, size ), realloc( ptr, size ),
- free( address ): adds the pointed-to block to the free block list, e.g. free( ptr ),
- msize(): returns the size of the biggest currently available free block (but: races possible).

Multiprogramming Modelling

Figure: CPU utilization (in percent) as a function of the degree of multiprogramming (number of processes in memory), plotted for several I/O wait percentages, ignoring operating system overhead.

MM in Multiprogramming Context

Multiprogramming has introduced two essential problems to be solved:
- relocation: one cannot be sure where a program will be loaded in memory,
- mutual memory protection.
A possible solution is the use of base and limit values:
- addresses are added to the base value to map them to physical addresses,
- addresses larger than the limit value are treated as an error.

Multiprogramming Effectiveness Analysis

Figure: (a) arrival and work requirements of jobs, (b) CPU utilization for jobs with 80% I/O wait, (c) sequence of events as jobs arrive and finish; the numbers show the amount of CPU time each job gets in each interval.
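The curve in the multiprogramming modelling figure is usually derived from a simple probabilistic argument: if each process waits for I/O a fraction p of the time, then with n independent processes the probability that all of them are waiting at once is p^n, so CPU utilization is approximately 1 - p^n. Below is a minimal C sketch of that model; the 1 - p^n formula and the 20/50/80% example values are the standard textbook assumption, not taken verbatim from these slides.

#include <math.h>
#include <stdio.h>

/* CPU utilization under the simple multiprogramming model:
 * p = fraction of time a process waits for I/O,
 * n = degree of multiprogramming (number of processes in memory). */
static double cpu_utilization(double p, int n)
{
    return 1.0 - pow(p, n);   /* all n processes idle with probability p^n */
}

int main(void)
{
    const double io_wait[] = { 0.20, 0.50, 0.80 };
    for (int i = 0; i < 3; i++) {
        printf("I/O wait %.0f%%:", io_wait[i] * 100.0);
        for (int n = 1; n <= 10; n++)
            printf(" %.2f", cpu_utilization(io_wait[i], n));
        printf("\n");
    }
    return 0;
}

The output reproduces the shape of the plotted curves: the more a workload waits for I/O, the more processes must be kept in memory to keep the CPU busy.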

Swapping

Figure: memory allocation changes as processes come into memory and leave memory; shaded regions are unused memory.

Allocation Algorithms

The role of the allocation algorithm is to choose an appropriate free block in order to allocate it to some user process. Some allocation algorithms:
- First Fit: the first block from the list that matches the request,
- Best Fit: the best-matching (smallest sufficient) block from the list,
- Worst Fit: the biggest block from the list,
- Buddy algorithm: memory (of size 2^k) is divided into two blocks of equal size, and the division by two is repeated until a minimal yet big-enough block is obtained.

Fragmentation

When choosing an allocation algorithm, the following aspects should be considered: effectiveness (speed), simplicity, and the fragmentation effect.
Internal fragmentation: a situation where some pieces of memory, although allocated as part of some bigger structure (partition, frame), are not used by user processes.
External fragmentation: a situation where some pieces of memory, although unallocated, cannot be allocated to user processes for some reason (e.g. they are too small).
The system may define a value n as the minimal allowed size of an allocated block, which makes management easier and improves effectiveness, but may lead to fragmentation.

Anti-fragmentation Countermeasures

Methods of countermeasure against fragmentation:
- deallocation and merging,
- relocation and compaction,
- the paging mechanism.
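To make the difference between the policies above concrete, here is a minimal, self-contained C sketch of First Fit and Best Fit over a singly linked free list; the list layout and function names are illustrative assumptions, not taken from the slides.

#include <stddef.h>

struct free_block {
    size_t size;              /* size of this free block in bytes */
    struct free_block *next;  /* next block on the free list */
};

/* First Fit: return the first free block large enough for the request. */
struct free_block *first_fit(struct free_block *list, size_t request)
{
    for (struct free_block *b = list; b != NULL; b = b->next)
        if (b->size >= request)
            return b;
    return NULL;              /* no block large enough */
}

/* Best Fit: return the smallest free block that still satisfies the request. */
struct free_block *best_fit(struct free_block *list, size_t request)
{
    struct free_block *best = NULL;
    for (struct free_block *b = list; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;
    return best;
}

In a real allocator the chosen block would then be split, and free() would insert the block back into the list and merge it with adjacent free neighbours, as described under deallocation and merging above.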

Memory Management - Bitmaps and Lists

Figure: (a) a part of memory with five processes and three holes; tick marks show the allocation units, shaded regions are free; the smaller the allocation unit, the larger the bitmap, (b) the corresponding bitmap, (c) the same information as a list of process and hole segments (each entry giving where the segment starts and its length).

Virtual Memory

Virtual memory: memory consisting of at least two memory types, one small and fast (e.g. operating memory) and one big but slow (e.g. disk memory), plus additional hardware and software which enable automatic movement of pieces of memory between those memory types. Virtual memory should be almost as fast as the faster of the above-mentioned memories and almost as huge as the bigger of them.
Methods of virtual memory implementation: paging, segmentation, paging with segmentation.

Dynamic Allocation Problem

Figure: (a) allocating space for a growing data segment, (b) allocating space for a growing stack and a growing data segment (room for growth left between the parts actually in use).

Memory Management Unit Role

Figure: the CPU sends virtual addresses to the MMU, and the MMU sends physical addresses to the memory bus (CPU package with the memory management unit, connected over the bus to memory and the disk controller).
The MMU (Memory Management Unit) may be integrated with the processor (as is common now), or it may be an independent unit (as it used to be earlier).
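With bitmap bookkeeping, allocating a process means finding a run of consecutive zero bits long enough for it. A minimal C sketch of that search follows; the bitmap layout and the function name are illustrative assumptions, not from the slides.

#include <stddef.h>
#include <stdint.h>

/* Free-space bitmap: bit i == 0 means allocation unit i is free, 1 means used.
 * Returns the index of the first run of 'count' free units, or -1 if none. */
long bitmap_find_free_run(const uint8_t *bitmap, size_t units, size_t count)
{
    size_t run = 0;
    for (size_t i = 0; i < units; i++) {
        int used = (bitmap[i / 8] >> (i % 8)) & 1;
        run = used ? 0 : run + 1;
        if (run == count)
            return (long)(i - count + 1);   /* start of the free run */
    }
    return -1;
}

This linear scan is exactly the trade-off the slide points out: a smaller allocation unit means less wasted space but a larger bitmap and longer searches, which is why free lists are often preferred.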

Paging

Paging is based on fixed memory division. Units of division: frames for physical memory, pages for the process virtual address space. The paging mechanism is responsible for:
- mapping a virtual address into a physical address: 1. locating the page referenced by the address in the program, 2. finding the frame currently used by that page,
- transferring, on demand, pages from external memory to operating memory, and sending back those pages which are no longer required.
The use of paging does not require anything specific from the user. Memory allocation is implemented by the operating system with the use of page tables and/or frame tables.

Paging Implementation

In order to enforce transparency of the solution, the following is required:
- handling an interrupt called a page fault, which signals a reference to a page that is currently absent from main memory,
- the page fault handler has to, according to some established page replacement algorithm, allocate a frame and load the required page from external memory,
- in order to minimize the cost of multiple memory accesses, associative memory is used for address translation.
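The two steps of the mapping (locate the page, then find its frame) can be shown with a minimal single-level page-table lookup in C. The 4 KB page size, the entry layout and the names below are assumptions for illustration, not the lecture's concrete format.

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE   4096u            /* assumed 4 KB pages */
#define PAGE_SHIFT  12u

struct pte {                         /* illustrative page table entry */
    uint32_t frame;                  /* frame number */
    bool     present;                /* page currently in main memory? */
};

/* Translate a virtual address; returns false on a page fault
 * (page not present), in which case the OS fault handler would run. */
bool translate(const struct pte *page_table, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;        /* step 1: page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);    /* offset within page  */
    if (!page_table[page].present)
        return false;                             /* page fault          */
    *paddr = (page_table[page].frame << PAGE_SHIFT) | offset;   /* step 2 */
    return true;
}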

Translation Lookaside uffer Valid Virtual page Modified Protection frame RW R 8 RW 9 9 RW 9 R 5 R 5 8 RW 8 RW 75 Typical Table Entry aching disabled Modified Present/absent frame number Referenced Protection Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59 Multilevel Tables Second-level page tables table for the top M of memory Top-level page table its 5 PT PT Offset 5 To pages Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 The main goal: to minimize the number of memory references required for successful mapping. spects related to TL: cost of context switching, cost of multilevel page table, implemetation: in hardware/ in software. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 7/59 Inverted Tables Traditional page table with an entry for each of the 5 pages 5-5-M physical memory has -K page frames Hash table - - frame Virtual page Indexed by hash on virtual page Indexed by virtual page omparison of a traditional page table with an inverted page table. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 8/59

Optimal Table Size Replacement lgorithms The size of a page influences internal fragmentation level and size of the page table. Let: s the average process size (in bytes), p page size, e size of one item in the page table. then overhead = s e/p + p/ The first derivative with respect to p and equating to : se/p + / = Some of page replacement algorithms: optimal solution, NRU algorithm (not recently used) (R and M bits), FIFO algorithm, the second chance algorithm, clock algorithm, LRU algorithm (least recently used). Therefore, the optimal page size is equal to p = se Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 9/59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 Fault Handling Second hance lgorithm fault handling has the following sequence of events:. choosing the frame for required page,. freeing pointed frame,. the disk operation scheduling to bring the page in the frame,. page table updating. loaded first 7 7 8 D 8 D E E F F 5 G 5 G 8 H 8 H Most recently loaded page is treated like a newly loaded page For choosing the frame with page to be evicted is responsible algorithm called page replacement algorithm. The optimal solution would be to evict the page which will be never again required or required in the most distant moment in the future. a. pages sorted in FIFO order, b. new list, after page fault occurrence at time and had its R bit set. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59

lock Replacement lgorithm Software Simulation of LRU L R bits for pages -5, clock tick R bits for pages -5, clock tick R bits for pages -5, clock tick R bits for pages -5, clock tick R bits for pages -5, clock tick J K I E D When a page fault occurs, the page the hand is pointing to is inspected. The action taken depends on the R bit: R = : Evict the page R = : lear R and advance hand H G F 5 (c) (d) (e) The aging algorithm simulates LRU in software. Six pages for five clock ticks is shown. The five clock ticks are represented by to (e). Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59 Hardware LRU lgorithm Working Set The working set is the set of pages used by the k most recent memory references. The function w(k, t) is the size of the working set at time t. (c) (d) (e) w(k,t) (f) (g) (h) (i) (j) k s were referenced in the following order:,,,,,,,,,. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59

Working Set lgorithm Replacement lgorithms omparison Information about one page 8 Time of last use referenced during this tick not referenced during this tick 98 urrent virtual time R (Referenced) bit Scan all pages examining R bit: if (R == ) set time of last use to current virtual time if (R == and age > τ) remove this page if (R == and age τ) remember the smallest time lgorithm Optimal NRU FIFO Second chance lock LRU NFU ging Working set WSlock omment not implementable, but useful as a benchmark very crude might throw out important pages big improvement over FIFO realistic excellent, but difficult to implement exactly fairly crude approximation to LRU efficient algorithm that approximates LRU well somewhat expensive to implement good efficient algorithm table Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 7/59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 9/59 lock Working Set lgorithm elady s nomaly urrent virtual time 8 8 Youngest page Oldest page ll pages frames initially empty P P P P P P P P P 9 faults 98 98 Time of last use 8 R bit 8 Youngest page Oldest page P P P P P P P P P P faults 98 98 New page (c) (d) Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 8/59 a. FIFO algorithm with three page frames, b. FIFO algorithm with four page frames, more page faults when we have more page frames. The P s show which page references cause page faults. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59

Shared s Swap Organization Main memory Disk Main memory Disk s s Swap area 7 5 Swap area Process table table table 7 5 Disk map Program Data Data tables a. paging to a static swap area, b. backing up pages dynamically. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 Fault Handling Policy and Mechanism Separation MOVE.L #(), () its MOVE an instruction causing a page fault, where to start the reexecution of the instruction? } } } Opcode First operand Second operand User space Kernel space. fault User process Main memory. Needed page Fault handler External pager 5. Here is page. Map page in. Request page. arrives MMU handler Disk page fault handling with an external pager. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59

Single ddress Space Segment Size Virtual address space K ddress space allocated to the parse tree all stack Parse tree onstant table Source text Symbol table Free Space currently being used by the parse tree Symbol table has bumped into the source text table In a one-dimensional address space with growing tables, one table may bump into another - compiler process as an example. K K 8K K K Symbol table Segment K 8K K K Source text Segment K onstants Segment Parse tree Segment all stack Segment segmented memory allows each table to grow or shrink independently of the other tables. K K 8K K K K 8K K K Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 7/59 Segmentation Features Paging and Segmentation (I) the goal of segmentation is to organize the memory address space in a way corresponding to the logical information distribution, possibility of usage of many named (by the programmer) segments during the process of virtual address space organization, two-dimensional address space because of address identification by the pair: segment name + offset, address translation organized usually by separate for each process segment table, entries in the segment table called segment descriptors, each segment descriptor contains at least base address and size (limit) of the segment. the aim of segmentation: logical division of the operating memory, the aim of paging physical division of the memory, paging as a low-level mechanism, transparent to the programmer; segmentation as a high-level mechanism, visible to the programmer, size of a page fixed, derived from the architecture; size of a segment not fixed, set by the programmer, both with paging and segmentation the total address space may be bigger than available physical memory, segmentation enables better protection by opportunity to distinguish different segments which logically group process elements. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. /59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 8/59

Paging and Segmentation (II) MULTIS: Segmentation and Paging segmentation enables better management of elements with non-constant size (like stack or heap), segmentation provides sharing of procedures among processes (shared segments, shared libraries), the reason of paging introduction: obtaining more address space without need to buy more physical memory, the reason of segmentation introduction: enabling division and distinction of programs and data into separate logically independent address spaces with support for sharing some elements and better protection, segmentation - external fragmentation possible, Honeywell and following models, 8 segments possible, each segment might have up till 55 -bits words, paged segments, for each process segment table, the very segment table implemented as a paged segment as well, physical addresses -bits, pages aligned to ( ) bytes, thus 8 bits for page table address, eventual segment address in cache in a separate table used by the page fault handler. paging - internal segmentation possible. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 9/59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59 Pure Segmentation Implementation MULTIS: Virtual Memory Segment (7K) Segment (8K) Segment Segment (8K) Segment (K) Segment (7K) Segment (8K) Segment (K) Segment 7 Segment (K) (K) Segment 5 (K) Segment (8K) Segment (K) Segment 7 Segment (K) (K) Segment 5 (K) Segment (K) Segment (K) Segment 7 Segment (K) (c) (d) (e) (K) (K) Segment 5 (K) Segment (K) Segment Segment 7 Segment (K) phenomenon of external fragmentation (checkerboarding), -(d) development of checkerboarding, (e) removal of the checkerboarding by compaction. usage of paging as a countermeasure against external fragmentation. bits Segment descriptor Segment 5 descriptor Segment descriptor Segment descriptor Segment descriptor Segment descriptor Segment descriptor Descriptor segment Main memory address of the page table entry entry entry table for segment 8 9 Segment length (in pages) entry entry entry table for segment The MULTIS virtual memory. size: = words = words = segment is paged = segment is not paged Miscellaneous bits Protection bits a. the descriptor segment points to the page tables, b. a segment descriptor, the numbers are the field lengths. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59

MULTIS: Physical ddress Evaluation Intel Pentium: Segmentation with Paging Segment number -bit MULTIS virtual address. number ddress within the segment Offset within the page 8 Multics: 5K independent segments, each up to K -bit words, Pentium: K independent segments, each up to billion -bit words, single Global Descriptor Table in the, Local Descriptor Table for each process. Word Descriptor frame Segment number Descriptor segment number table Offset onversion of a two-part MULTIS address into a main memory address. For simplicity it is omitted that segment of descriptors is itself paged. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 55/59 MULTIS: TL uffers Intel Pentium: Selector and Segment Descriptor omparison field Segment number Virtual page frame Protection ge Is this entry used? Pentium selector. its Index 7 Read/write = GDT/ = LDT Privilege level (-) Read only Read/write Execute only Execute only 7 9 : -it segment : -it segment : Li is in bytes : Li is in pages Privilege level (-) : System : pplication Segment type and protection ase - G D Limit P DPL S Type ase - -9 : Segment is absent from memory : Segment is present in memory simplified version of the MULTIS TL. entries for which segment number and frame number are compared in parallel. The existance of two page sizes makes the actual TL more complicated. ase -5 Limit -5 its Relative address Pentium code segment descriptor. Data segments differ slightly. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 5/59

Intel Pentium: Physical ddress Evaluation (I) Intel Pentium: Protection Selector Descriptor ase address Offset + User programs Shared libraries System calls Typical uses of the levels Limit Kernel Other fields -it linear address conversion of a (selector, offset) pair to a linear address. if paging is turned off (pure segmentation), obtained address is a physical one, with paging on, obtained address is a logical one, usage of TL buffers. Level four levels of protection, segments from the same and higher levels accessible for reading/writing, to call a procedure from different level, LL has as an argument not an address but selector of so called call gate. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 57/59 Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 59/59 Intel Pentium: Physical ddress Evaluation (II) its Linear address Dir Offset directory table frame Word selected Entries Dir Offset Directory entry points to page table table entry points to word -bits logical address space, each page has a size k, each process contains page directory with elements, each of them points to page table with elements as well. single paged -bits address space possible as well. Faculty of E&IT, Warsaw University of Technology Systems / Memory Management p. 58/59