Chapter 8 Memory-Management Strategies


Memory Management
Motivation: keep several processes in memory to improve a system's performance. The selection among different memory-management methods is application-dependent and hardware-dependent.
Memory: a large array of words or bytes, each with its own address. Memory is always too small!

Memory Management: The Viewpoint of the Memory Unit
A stream of memory addresses! What should be done?
- Track which areas are free or used (and by whom)
- Decide which processes get memory
- Perform allocation and de-allocation
Remark: there is interaction between CPU scheduling and memory allocation!

Background: Address Binding
Binding of instructions and data to memory addresses. Binding time:
- At compile time: if it is known at compile time where a program will reside in memory, absolute code can be generated (e.g., MS-DOS *.COM programs). The source program uses symbolic addresses (e.g., x).
- At load time: all memory references made by a program are translated; the code is relocatable, but fixed while the program runs. (Source program -> compiling -> object module; object modules + system library -> linking -> load module; load module -> loading -> in-memory binary memory image.)
- At execution time: binding may change as the program runs (e.g., dynamically loaded system libraries); absolute addresses are formed only at run time.

Background
- Binding at compile time: a process must execute in a specific memory space.
- Binding at load time: relocatable code; a process may move from one memory segment to another, with binding delayed until run time.

Logical Versus Physical Address
The user program deals with logical addresses, also called virtual addresses (when binding happens at run time). The Memory-Management Unit (MMU) provides the hardware support: the CPU issues a logical address (e.g., 346), the relocation register (e.g., 14000) is added to it, and the resulting physical address (14346) is placed in the memory address register and sent to memory.
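The relocation-register translation above can be sketched as follows; this is a minimal illustration using the slide's example value (relocation register = 14000). The class name and the limit check are illustrative assumptions, not part of the slides.

```python
# Minimal sketch of relocation-register address translation.
# Assumed values: relocation register = 14000 (from the slide's example);
# the limit of 16384 is an illustrative assumption.

class MMU:
    def __init__(self, relocation: int, limit: int):
        self.relocation = relocation  # base added to every logical address
        self.limit = limit            # size of the process's logical space

    def translate(self, logical: int) -> int:
        # Trap (here: raise) if the logical address is out of range.
        if not 0 <= logical < self.limit:
            raise MemoryError(f"addressing error: {logical} >= {self.limit}")
        return logical + self.relocation

mmu = MMU(relocation=14000, limit=16384)
print(mmu.translate(346))  # physical address 14346, as in the slide
```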

Logical Versus Physical Address
A logical (physical) address space is the set of logical (physical) addresses generated by a process. The physical addresses of a program are transparent to the process! The MMU maps virtual addresses to physical addresses; different memory-mapping schemes need different MMUs, which are hardware devices (and can slow accesses down). Compile-time and load-time binding schemes result in identical logical and physical address spaces.

Dynamic Loading
A routine is not loaded until it is called. A relocatable linking loader must be invoked to load the desired routine and update the program's address tables. Advantage: memory space is better utilized. Users may use OS-provided libraries to achieve dynamic loading.

Dynamic Linking
In static linking, the language library is combined with the program object module into one binary program image. With dynamic linking, a small piece of code called a stub is used to locate or load the appropriate routine. Advantage: memory space is saved by sharing the library code among processes. Issues: memory protection and library updates!

Overlays
Simple motivation: keep in memory only those instructions and data needed at any given time.
Example: two overlays of a two-pass assembler:
  Symbol table: 20KB
  Common routines: 30KB
  Overlay driver: 10KB
  Pass 1: 70KB; Pass 2: 80KB
Certain relocation and linking algorithms are needed!

Overlays
Memory space is saved at the cost of run-time I/O. Overlays can be achieved without OS support (absolute-address code). However, it is not easy to program an overlay structure properly! We need some sort of automatic technique to run a large program in limited physical memory!

Swapping
A process (e.g., p1) is swapped out of the user space of main memory to a backing store, and another (e.g., p2) is swapped in. Should a process be put back into the same memory space that it occupied previously? It depends on the binding scheme!

Swapping: A Naive Way
Pick a process from the ready queue; the dispatcher checks whether the process is in memory. If yes, dispatch the CPU to the process; if no, swap the process in first.
Potentially high context-switch cost (transfer time + latency delay, for one swap out plus one swap in):
  2 * (1000KB / 5000KB per sec + 8ms) = 416ms

Swapping
The execution time of each process should be long relative to the swapping time (e.g., 416ms in the last example)! Even a 100KB transfer at 1000KB per sec already costs 100ms each way.
- Only swap in what is actually used; users must keep the system informed of memory usage.
- Who should be swapped out? Lower-priority processes? Any constraints, e.g., pending I/O buffering into a process's memory?
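The context-switch cost estimate above can be reproduced directly; this is a small sketch of the slide's arithmetic, with the parameter names chosen here for illustration.

```python
# Sketch of the slide's swapping cost estimate.
# Parameters from the slide: 1000KB process image, 5000KB/s transfer
# rate, 8ms latency delay.

def swap_cost_ms(size_kb: float, rate_kb_per_s: float, latency_ms: float) -> float:
    """Cost of one transfer: size/rate (converted to ms) plus latency."""
    transfer_ms = size_kb / rate_kb_per_s * 1000
    return transfer_ms + latency_ms

# A full context switch swaps one process out and another in: 2 transfers.
cost = 2 * swap_cost_ms(1000, 5000, 8)
print(cost)  # 416.0 ms, matching the slide
```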

Swapping
- Separate the swapping space from the file system for efficient usage.
- Disable swapping whenever possible, as in many versions of UNIX: swapping is triggered only if memory usage passes a threshold and many processes are running!
- In Windows 3.1, a swapped-out process is not swapped in until the user selects the process to run.

Contiguous Allocation: Single User
Memory holds the OS, the user region, and unused space; a relocation register (a) and a limit register (b) bound the user region. A single user is allocated as much memory as needed. Problem: size restriction -> overlays (MS/DOS).

Contiguous Allocation: Single User
Hardware support for memory mapping and protection: the CPU's logical address is compared with the limit register; if smaller, the relocation register is added to form the physical address; otherwise, trap. Disadvantage: waste of CPU and resources, since no multiprogramming is possible.

Contiguous Allocation: Multiple Users, Fixed Partitions
Memory is divided into fixed partitions, e.g., OS/360 MFT. A process is allocated an entire partition, which causes fragmentation. Example: partition 1 at 20k (proc 1), partition 2 at 45k (proc 7), partition 3 at 60k (proc 5), partition 4 at 90k, memory ending at 100k. An OS data structure records the partitions:
  #  size   location  status
  1  25KB   20k       Used
  2  15KB   45k       Used
  3  30KB   60k       Used
  4  10KB   90k       Free

Contiguous Allocation: Multiple Users
Hardware support: bound registers; each partition may have a protection key (corresponding to a key in the current PSW). Disadvantage: fragmentation gives poor memory utilization!

Contiguous Allocation: Multiple Users, Dynamic Partitions
Partitions are dynamically created, and OS tables record free and used partitions. Example: the OS occupies memory below 20k; Process 1 at base 20k, size 20KB (used, user 1); a free partition at base 40k, size 30KB; Process 2 at base 70k, size 20KB (used, user 2); a free partition at base 90k, size 20KB; memory ends at 110k. Now P3 arrives in the input queue with a 40KB memory request!

Contiguous Allocation: Multiple Users
Which solution is better in time and storage usage? Solutions for dynamic storage allocation:
- First fit: find the first hole that is big enough. Advantage: fast, and likely to leave large chunks of memory in high memory locations.
- Best fit: find the smallest hole that is big enough. It may need a lot of search time and creates lots of small fragments! Advantage: large chunks of memory remain available.
- Worst fit: find the largest hole and create a new partition out of it! Advantage: leaves the largest leftover holes, at the cost of lots of search time!

Contiguous Allocation Example: First Fit (RR scheduler with quantum = 1)
  Process  Memory  Time
  P1       600KB   10
  P2       1000KB  5
  P3       300KB   20
  P4       700KB   8
  P5       500KB   15
With 2560KB of memory and the OS occupying the first 400KB: at time 0, P1, P2, and P3 are loaded at 400k, 1000k, and 2000k. At time 14, P2 terminates and frees its memory, and P4 (700KB) is loaded at 1000k, leaving a 300KB hole at 1700k. At time 28, P1 terminates, leaving a 600KB hole at 400k. Where does P5 (500KB) go? First fit places it at 400k, leaving holes of 100KB, 300KB, and 260KB (and the question of whether 560KB in two pieces can ever serve a large request).
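The three allocation policies above can be sketched compactly. This is an illustrative implementation over a list of (base, size) holes; the function and variable names are assumptions, not from the slides.

```python
# Sketch of first-fit / best-fit / worst-fit dynamic storage allocation.
# Holes are (base, size) pairs in address order.

def allocate(holes, request, policy):
    """Return (base, updated_holes), or (None, holes) if no hole fits."""
    fitting = [(i, h) for i, h in enumerate(holes) if h[1] >= request]
    if not fitting:
        return None, holes
    if policy == "first":                           # first hole big enough
        i, (base, size) = fitting[0]
    elif policy == "best":                          # smallest hole that fits
        i, (base, size) = min(fitting, key=lambda t: t[1][1])
    else:                                           # "worst": largest hole
        i, (base, size) = max(fitting, key=lambda t: t[1][1])
    holes = holes.copy()
    if size == request:
        del holes[i]
    else:
        holes[i] = (base + request, size - request)  # leftover fragment
    return base, holes

holes = [(400, 600), (1000, 300), (2000, 1000)]      # in KB
print(allocate(holes, 500, "first")[0])  # 400
print(allocate(holes, 500, "best")[0])   # 400  (600KB is the smallest fit)
print(allocate(holes, 500, "worst")[0])  # 2000 (the largest hole)
```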

Fragmentation: Dynamic Partitions
External fragmentation occurs as small chunks of memory accumulate as a byproduct of partitioning, due to imperfect fits. Statistical analysis of the first-fit algorithm: about 1/3 of memory is unusable (the 50-percent rule). Solutions:
a. Merge adjacent free areas.
b. Compaction: compact all free areas into one contiguous region; requires user processes to be relocatable. Is there an optimal compaction strategy?

Fragmentation: Dynamic Partitions
Compaction example: with the OS at 0-300K, P1 at 300K-500K, P2 at 500K-600K, and P3 and P4 placed higher in a 2100K memory, 900KB of free space is scattered in holes of 400KB, 300KB, and 200KB. Alternative compactions move 600KB, 400KB, or 200KB of data respectively to gather the 900KB into one region. The cost of finding an optimal strategy: time complexity O(n!)?! Compaction can also be combined with swapping, with dynamic/static relocation.

Fragmentation: Dynamic Partitions
Internal fragmentation: a small chunk of unused memory internal to a partition. Example: a free area of 20,002 bytes, and P3 requests 20KB. Give P3 20KB and leave a 2-byte free area?? To reduce the free-space maintenance cost, give all 20,002 bytes to P3 and treat the 2 bytes as internal fragmentation!

Fragmentation: Dynamic Partitions
Dynamic partitioning. Advantages: eliminates fragmentation to some degree; can have more partitions and a higher degree of multiprogramming. Disadvantages: compaction vs. fragmentation trade-offs; the amount of free memory may not be enough for a process (contiguous allocation); memory locations may be allocated but never referenced; relocation hardware adds cost and slows things down. Solution: paged memory!

Paging: Objective
Users see a logically contiguous address space although its physical addresses are scattered throughout physical memory.
Units of memory and backing store: physical memory is divided into fixed-size blocks called frames; the logical memory space of each process is divided into blocks of the same size called pages; the backing store, if used, is also divided into blocks of the same size.

Paging: Basic Method
A logical address is split into a page number p and a page offset. The CPU presents p to the page table, whose entry for page p holds the base address f of the corresponding frame; f combined with the offset forms the physical address.

Paging: Basic Method
Address translation: an m-bit logical address consists of a page number p (m-n bits) and a page offset (n bits). Maximum number of pages: 2^(m-n); logical address space: 2^m; the physical address space is independent of it. A page size tends to be a power of 2 for efficient address translation. The actual page size depends on the computer architecture; today it ranges from 512B to 16KB.

Paging: Basic Method
Example with a 4-byte page size: logical memory holds pages 0-3, containing A, B, C, D at logical addresses 0, 4, 8, 12. The page table maps pages 0-3 to frames 5, 6, 1, 2. Logical address 5 (= 1 * 4 + 1, i.e., page 1, offset 1) maps through page-table entry 6 to physical address 25 (= 6 * 4 + 1). Physical memory (frames 0-7) thus holds C at frame 1, D at frame 2, A at frame 5, and B at frame 6.
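The translation in the example above can be sketched in a few lines, using the slide's page table [5, 6, 1, 2] and 4-byte pages; the function name is an illustrative choice.

```python
# Sketch of the slide's paging translation: page size 4 bytes,
# page table mapping page number -> frame number.

PAGE_SIZE = 4
page_table = [5, 6, 1, 2]

def translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]            # look up the frame for this page
    return frame * PAGE_SIZE + offset

print(translate(5))   # page 1, offset 1 -> frame 6 -> physical address 25
```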

Paging: Basic Method
No external fragmentation: paging is a form of dynamic relocation. The average internal fragmentation is about one-half page per process. The page size generally grows over time as processes, data sets, and memory become larger. With 4-byte page-table entries and 4KB pages, 2^32 entries * 2^12 B = 2^44 B = 16TB of physical memory can be addressed. Page size trades off disk I/O efficiency and page-table maintenance against internal fragmentation. Example: 8KB (or 4MB) pages for Solaris.

Paging: Basic Method
No page replacement yet: an executing process has all of its pages in physical memory. Maintenance of the frame table: one entry for each physical frame, recording the status of the frame (free or allocated) and its owner. The page table of each process must be saved when the process is preempted; paging increases context-switch time!

Paging: Hardware Support
Where do page tables live: registers or memory? Efficiency is the main consideration! If registers hold the page table, the page table must be small! If memory holds it, a Page-Table Base Register (PTBR) points to the page table.

Paging: Hardware Support
Page tables in memory. Advantages: the size of a page table is unlimited, and the context-switch cost may be low because the CPU dispatcher merely changes the PTBR instead of reloading another page table. Disadvantage: memory access is slowed by a factor of 2. Remedy: Translation Look-aside Buffers (TLB), an associative, high-speed memory of (key/tag, value) entries, typically 16 ~ 1024 entries, costing less than 10% of a memory access time.

Paging: Hardware Support
Translation Look-aside Buffers (TLB). Disadvantages: expensive hardware, and the contents must be flushed when switching page tables. Advantage: fast, constant search time over the (key, value) entries.

Paging: Hardware Support
The CPU presents page number p. On a TLB hit, the frame number f comes directly from the TLB; on a TLB miss, the page table supplies f, and the TLB is updated (replacement of TLB entries might be needed). Address-Space Identifiers (ASIDs) in the TLB allow process matching and protection without flushing on every page-table switch.
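The hit/miss flow above can be sketched with a dictionary standing in for the associative memory; the structure and names here are illustrative assumptions (a real TLB has bounded capacity and an eviction policy).

```python
# Minimal sketch of a TLB in front of a page table.
# The TLB is modeled as a dict (page -> frame), filled on misses.

page_table = {0: 5, 1: 6, 2: 1, 3: 2}
tlb = {}
stats = {"hit": 0, "miss": 0}

def lookup(page: int) -> int:
    if page in tlb:               # TLB hit: constant-time associative search
        stats["hit"] += 1
        return tlb[page]
    stats["miss"] += 1            # TLB miss: walk the page table, update TLB
    frame = page_table[page]
    tlb[page] = frame             # a real TLB would evict an entry if full
    return frame

for p in [1, 1, 2, 1]:
    lookup(p)
print(stats)  # {'hit': 2, 'miss': 2}
```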

Paging: Effective Memory Access Time
Hit ratio: the percentage of times that a page number is found in the TLB. The hit ratio of a TLB largely depends on its size and the replacement strategy for TLB entries!
Effective memory access time = hit-ratio * (TLB lookup + a mapped memory access) + (1 - hit-ratio) * (TLB lookup + a page-table lookup + a mapped memory access)

Paging: Effective Memory Access Time
An example with 20ns per TLB lookup and 100ns per memory access:
Effective access time = 0.8 * 120ns + 0.2 * 220ns = 140ns, when the hit ratio = 80%.
Effective access time = 0.98 * 120ns + 0.02 * 220ns = 122ns, when the hit ratio = 98%.
The Intel 486 has a 32-register TLB and claims a 98 percent hit ratio.
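The formula above is easy to check in code; this sketch uses the slide's 20ns/100ns parameters as defaults.

```python
# Sketch of the slide's effective-access-time formula.

def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    hit = tlb_ns + mem_ns               # hit: TLB lookup + one memory access
    miss = tlb_ns + mem_ns + mem_ns     # miss: lookup + page table + memory
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(f"{effective_access_time(0.80):.0f}")  # 140 (ns)
print(f"{effective_access_time(0.98):.0f}")  # 122 (ns)
```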

Paging: Protection & Sharing
Protection: each page-table entry carries a valid-invalid bit (is the page in the process's logical address space and in memory?), a modified (dirty) bit, and r/w/e protection bits (e.g., 100 = read, 010 = write, 110 = read/write). Use a Page-Table Length Register (PTLR) to indicate the size of the page table; unused page-table entries might be ignored during maintenance.

Paging: Protection & Sharing
Example: a process occupying addresses 0 through 12,287 in a 16,384-byte (2^14) logical address space with 2KB pages (marks at 0, 2K, ..., 10K, 10,468, 12,287). The logical address splits into a 3-bit page number and an 11-bit offset. Pages P0-P5 are valid, mapped to frames 2, 3, 4, 7, 8, 9; entries 6 and 7 are invalid. (How many entries should the PTLR cover?)

Paging: Protection & Sharing
Sharing example: processes P1 and P2 both run an editor (pages *ed1, *ed2, *ed3) with private data. Page table 1 maps P1's pages to frames 3, 4, 6, 1 (the last holding data 1); page table 2 maps P2's pages to frames 3, 4, 6, 7 (the last holding data 2); the editor pages are shared. Procedures that are executed often (e.g., an editor) can be divided into procedure + data, saving a lot of memory. Reentrant procedures can be shared! The non-modifiable nature of shared code must be enforced, and address referencing inside shared pages could be an issue.

Multilevel Paging: Motivation
The logical address space of a process in many modern computer systems is very large, e.g., 2^32 to 2^64 bytes. With a 32-bit address, 4KB pages, and 4B per entry, the page table has 2^20 entries and occupies 4MB! Even the page table must be divided into pieces to fit in memory!

Multilevel Paging: Two-Level Paging
The logical address is split into p1, p2, and an offset. The PTBR points to the outer page table, indexed by p1; each of its entries points to a page of the page table, indexed by p2, which yields the frame in physical memory. This is a forward-mapped page table.

Multilevel Paging: N-Level Paging
Motivation: two-level paging is not appropriate for a huge logical address space! The logical address is split into p1, p2, ..., pn and an offset, and the page table is divided into n levels (n pieces). Cost per reference: 1 + 1 + ... + 1 + 1 = n + 1 memory accesses.
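The two-level walk above can be sketched concretely for the common 10/10/12-bit split of a 32-bit address; the table contents below are illustrative assumptions.

```python
# Sketch of a two-level (10/10/12-bit) address split and table walk.
# Tables are plain dicts: outer[p1] is a page of the page table.

OFFSET_BITS, P2_BITS = 12, 10

def split(addr: int):
    d = addr & ((1 << OFFSET_BITS) - 1)              # low 12 bits
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)  # next 10 bits
    p1 = addr >> (OFFSET_BITS + P2_BITS)             # top 10 bits
    return p1, p2, d

outer = {1: {2: 7}}   # illustrative: outer entry 1 -> inner table; entry 2 -> frame 7

def translate(addr: int) -> int:
    p1, p2, d = split(addr)
    frame = outer[p1][p2]            # two table lookups, then the data access
    return (frame << OFFSET_BITS) | d

addr = (1 << 22) | (2 << 12) | 0x34  # p1=1, p2=2, offset 0x34
print(hex(translate(addr)))          # frame 7 -> 0x7034
```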

Multilevel Paging: N-Level Paging
Example: 98% hit ratio, 4-level paging, 20ns TLB access time, 100ns memory access time. Effective access time = 0.98 * 120ns + 0.02 * 520ns = 128ns. Real systems: the SUN SPARC (32-bit addressing) uses 3-level paging; the Motorola 68030 (32-bit addressing) uses 4-level paging; the VAX (32-bit addressing) uses 2-level paging.

Hashed Page Tables
Objective: to handle large address spaces. The virtual address is run through a hash function into a linked list of elements (virtual page #, frame #, a pointer). Clustered page tables: each entry contains the mappings for several physical-page frames, e.g., 16.
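The n-level figure above generalizes the earlier formula: a TLB miss now costs n page-table accesses plus the final memory access. A small sketch, with the slide's parameters as defaults:

```python
# Sketch of effective access time with n-level paging:
# a TLB miss walks n page-table levels, then makes the data access.

def eat_nlevel(hit_ratio, levels, tlb_ns=20, mem_ns=100):
    hit = tlb_ns + mem_ns
    miss = tlb_ns + (levels + 1) * mem_ns   # n table accesses + 1 data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(f"{eat_nlevel(0.98, 4):.0f}")  # 128 (ns), matching the slide
```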

Inverted Page Table
Motivation: a per-process page table tends to be big and does not correspond to the number of pages residing in physical memory. In an inverted page table, each entry corresponds to a physical frame. Virtual address: <process ID, page number, offset>. The CPU's logical address <pid, p, d> is searched in the inverted page table; the index of the matching <pid, p> entry is the frame number f, so the physical address is <f, d>.

Inverted Page Table
Each entry contains the virtual address of the page stored in that frame. Entries are sorted by physical address, and there is one table per system. When no match is found, the page table of the corresponding process must be referenced. Example systems: HP Spectrum, IBM RT, PowerPC, SUN UltraSPARC.
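The lookup above can be sketched with a list indexed by frame number; this is an illustrative model (linear search stands in for the hash or associative lookup discussed next), and all names and table contents are assumptions.

```python
# Minimal sketch of inverted page-table lookup: one system-wide table
# indexed by frame number, each entry holding (process id, page number).

ipt = [("p1", 0), ("p2", 3), ("p1", 2), ("p2", 0)]  # frame -> (pid, page)
PAGE_SIZE = 4096

def translate(pid: str, logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)
    for frame, entry in enumerate(ipt):   # linear search; real systems hash
        if entry == (pid, page):
            return frame * PAGE_SIZE + offset
    raise KeyError("no match: consult the process's own page table")

print(translate("p1", 2 * PAGE_SIZE + 100))  # page 2 of p1 is frame 2 -> 8292
```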

Inverted Page Table
Advantage: decreases the amount of memory needed to store page tables. Disadvantage: the inverted page table is sorted by physical address, whereas a page reference uses a logical address. Remedies: use a hash table to eliminate the lengthy table lookup (1 hash access + 1 IPT access), and an associative memory to hold recently located entries. Shared memory is difficult to implement with an inverted page table.

Segmentation
Segmentation is a memory-management scheme that supports the user view of memory: a logical address space is a collection of segments with variable lengths, e.g., the main program, a subroutine, the stack, Sqrt, and the symbol table.

Segmentation
Why segmentation? Paging separates the user's view of memory from the actual physical memory but does not reflect the logical units of a process! Pages and frames are fixed-size, but segments have variable sizes. For simplicity of representation, <segment name, offset> becomes <segment number, offset>.

Segmentation: Hardware Support
Address mapping: the CPU issues <s, d>. The segment-table entry for s holds a limit and a base. If d < limit, the physical address is base + d; otherwise, trap.
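The segment-table mapping above is a few lines of code; the table values here are illustrative assumptions.

```python
# Sketch of segment-table address mapping: each entry holds
# (limit, base); an offset at or beyond the limit traps.

segment_table = [(1000, 1400), (400, 6300), (1100, 4300)]  # s -> (limit, base)

def translate(s: int, d: int) -> int:
    limit, base = segment_table[s]
    if d >= limit:                 # offset beyond the segment length: trap
        raise MemoryError(f"trap: offset {d} >= limit {limit} of segment {s}")
    return base + d

print(translate(2, 53))   # segment 2: base 4300 + offset 53 -> 4353
```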

Segmentation: Hardware Support
Implementation in registers: limited size! Implementation in memory: a segment-table base register (STBR) and a segment-table length register (STLR). Advantages and disadvantages parallel those of paging: use an associative memory (TLB) to improve the effective memory access time, but the TLB must be flushed whenever a new segment table is used!

Segmentation: Protection & Sharing
Advantage: segments are semantically defined portions of the program, and all entries in a segment are likely to be homogeneous (e.g., an array, code, the stack, data, etc.), so segments are logical units for protection! Sharing of code and data improves memory usage; sharing occurs at the segment level.

Segmentation: Protection & Sharing
Potential problems:
- External fragmentation: segments must occupy contiguous memory.
- Address referencing inside shared segments can be a big issue (<seg#, offset>): indirect addressing?! Should all shared-code segments have the same segment number? How do we find the right segment number as the number of users sharing the segments increases? Better to avoid references to segment numbers.

Segmentation: Fragmentation
Motivation: segments are of variable lengths! Memory allocation is a dynamic storage-allocation problem: best fit? first fit? worst fit? External fragmentation will occur!! It depends on factors such as the average segment size: as segments shrink toward a single byte, external fragmentation disappears, but overheads (base + limit registers per segment) increase substantially!

Segmentation: Fragmentation
Remark: its external-fragmentation problem is milder than that of the dynamic-partition method, because segments are likely to be smaller than the entire process. Internal fragmentation?? None, since segments are sized exactly.

Segmentation with Paging
Motivation: segmentation has external fragmentation; paging has internal fragmentation; segments are semantically defined portions of a program. So: page the segments!

Pentium Segmentation
8K private segments + 8K public segments. Page size = 4KB or 4MB (a page-size flag in the page directory); maximum segment size = 4GB. Tables: a Local Descriptor Table (LDT) and a Global Descriptor Table (GDT), with 6 microprogram segment registers for caching. The logical address is a selector (s: 13 bits, g: 1 bit, p: 2 bits) plus a 32-bit segment offset; the resulting 32-bit linear address is split into p1 (10 bits), p2 (10 bits), and a 12-bit offset.

Pentium Segmentation
The 16-bit selector (s + g + p) indexes the descriptor table, whose entry gives the segment length and segment base; if the offset is not below the length, trap; otherwise base + offset forms the linear address. The Page Directory Base Register locates the page directory, indexed by p1; the selected entry locates a page table, indexed by p2, yielding frame f. Page tables are limited by the segment lengths of their segments; consider the page-size flag and the invalid bit of each page-directory entry.
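The two-stage translation above can be sketched end to end: segmentation produces a 32-bit linear address, then a 10/10/12 two-level page walk produces the physical address. All table contents below are illustrative assumptions, not actual Pentium descriptor formats.

```python
# Sketch of Pentium-style two-stage translation: segmentation
# (limit check, then base + offset) yields a linear address, which a
# 10/10/12-bit two-level page walk maps to a physical address.

descriptor_table = {3: (0x10000000, 0x00100000)}  # selector -> (base, limit)
page_directory = {0x40: {0x1: 0x99}}              # p1 -> page table (p2 -> frame)

def to_linear(selector: int, offset: int) -> int:
    base, limit = descriptor_table[selector]
    if offset >= limit:                  # segment-length check: trap
        raise MemoryError("trap: offset beyond segment limit")
    return (base + offset) & 0xFFFFFFFF

def walk(linear: int) -> int:
    p1 = linear >> 22                    # top 10 bits: page-directory index
    p2 = (linear >> 12) & 0x3FF          # next 10 bits: page-table index
    d = linear & 0xFFF                   # low 12 bits: offset in the frame
    frame = page_directory[p1][p2]
    return (frame << 12) | d

linear = to_linear(3, 0x1234)            # 0x10000000 + 0x1234 = 0x10001234
print(hex(walk(linear)))                 # frame 0x99 -> 0x99234
```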

Linux on Pentium Systems
Limitations and goals: Linux runs on a variety of machines, so it uses segmentation minimally. The GDT holds one individual segment each for the kernel code, the kernel data, the user code, the user data, the task state segment, and the default LDT. Protection: user and kernel modes. Linux uses a 3-level paging address (global directory, middle directory, page table, offset); on the Pentium this maps onto the linear address as p1 (10 bits), p2 (10 bits), and a 12-bit offset, with the middle directory effectively folded away.

Paging and Segmentation
To overcome the disadvantages of paging or segmentation alone:
- Paged segments: divide segments further into pages, so a segment need not occupy contiguous memory.
- Segmented paging: segment the page table, allowing variable-size page tables.
Address-translation overheads increase! And an entire process still needs to be in memory at once -> virtual memory!!

Paging and Segmentation
Considerations in memory management:
- Hardware support, e.g., STBR, TLB, etc.
- Performance
- Fragmentation
- Multiprogramming levels
- Relocation constraints?
- Swapping
- Sharing?!
- Protection?!