Memory Management: Thrashing, Segmentation and Paging

Memory Management: Thrashing, Segmentation and Paging
CS 416: Operating Systems Design, Spring 2011
Department of Computer Science, Rutgers University
Sakai: 01:198:416 Sp11 (https://sakai.rutgers.edu)

Summary of Page Eviction Algorithms
- OPT (MIN)
- Random
- FIFO (use a list to maintain the pages in allocation order); suffers from Belady's anomaly
- LRU using a 32-bit timestamp (not efficient)
- LRU approximations:
  - Counter implementation
  - Clock implementation (second chance), sketched below
  - Counter + clock (Nth chance)
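
A minimal sketch of that clock (second-chance) policy in C. The frame array, reference bits, and clock hand here are hypothetical illustrations, not code from the lecture:

```c
#include <stddef.h>

/* Hypothetical per-frame state for the clock algorithm. */
struct frame {
    int page;        /* page currently resident in this frame    */
    int referenced;  /* reference bit, set by hardware on access */
};

/*
 * Second chance: sweep the circular array of frames. A frame whose
 * reference bit is set has the bit cleared and is skipped this pass;
 * the first frame found with the bit already clear is the victim.
 */
size_t clock_evict(struct frame *frames, size_t nframes, size_t *hand)
{
    for (;;) {
        struct frame *f = &frames[*hand];
        size_t victim = *hand;

        *hand = (*hand + 1) % nframes;   /* advance the clock hand  */

        if (f->referenced)
            f->referenced = 0;           /* give it a second chance */
        else
            return victim;               /* evict this frame        */
    }
}
```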

Physical Memory Map
- In Linux, 44 bytes of state are maintained per physical page.
- With 4 GB of RAM and 4 KB pages, there are 2^20 pages.
- 44 bytes * 2^20 pages = 44 MB of per-page state.
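
A quick check of that arithmetic in C (the sizes are the slide's example values, not measured from any particular kernel):

```c
#include <stdio.h>

int main(void)
{
    unsigned long long ram     = 4ULL << 30;   /* 4 GB of physical memory */
    unsigned long long pagesz  = 4ULL << 10;   /* 4 KB pages              */
    unsigned long long perpage = 44;           /* bytes of state per page */

    unsigned long long npages   = ram / pagesz;      /* 2^20 pages */
    unsigned long long overhead = npages * perpage;  /* bytes      */

    /* prints: 1048576 pages, 44 MB of per-page state */
    printf("%llu pages, %llu MB of per-page state\n", npages, overhead >> 20);
    return 0;
}
```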

Swap Files
- What happens to a page that we choose to evict? It depends on what kind of page it is and what state it is in.
- The OS maintains one or more swap files or partitions on disk.
- These use a special data format for storing pages that have been swapped out.

Page Eviction
How we evict a page depends on its type (the decision logic is sketched below):
- Code page: just drop it from memory; it can be recovered from the executable file on disk.
- Unmodified (clean) data page:
  - If the page has previously been swapped to disk, just drop it from memory, assuming that page's backing store on disk has not been overwritten.
  - If the page has never been swapped to disk, allocate new swap space and write the page to it. (This is just an optimization, since swapping the page back in is faster from swap space.)
  - Exception: an unmodified zero page never needs to be written out to swap at all.
- Modified (dirty) data page:
  - If the page has previously been swapped to disk, write the page out to its swap space.
  - If the page has never been swapped to disk, allocate new swap space and write the page to it.
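
A sketch of that decision logic in C. The page flags and swap helpers are hypothetical placeholders for whatever bookkeeping a real OS keeps:

```c
#include <stdbool.h>

/* Hypothetical per-page state, for illustration only. */
struct page {
    bool is_code;        /* backed by the executable on disk       */
    bool all_zero;       /* never written: still the zero page     */
    bool dirty;          /* modified since it was last written out */
    bool has_swap_slot;  /* already has backing store in swap      */
};

/* Hypothetical helpers. */
void free_frame(struct page *p);
void alloc_swap_slot(struct page *p);
void write_to_swap(struct page *p);

void evict(struct page *p)
{
    if (p->is_code || p->all_zero) {
        free_frame(p);                 /* recoverable without any disk I/O */
        return;
    }
    if (!p->dirty && p->has_swap_slot) {
        free_frame(p);                 /* a clean copy is already on swap  */
        return;
    }
    if (!p->has_swap_slot)
        alloc_swap_slot(p);            /* first time out: reserve a slot   */
    write_to_swap(p);                  /* dirty, or never swapped before   */
    free_frame(p);
}
```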

Physical Frame Allocation
How do we allocate physical memory across multiple processes?
- When we evict a page, which process should we evict it from?
- How do we ensure fairness? How do we prevent one process from hogging the entire memory of the system?
Fixed-space algorithms (sketched below):
- Per-process limit on the physical memory usage of each process.
- When a process reaches its limit, it evicts pages from itself.
Variable-space algorithms:
- The physical size of a process can grow and shrink over time.
- Processes are allowed to evict pages from other processes.
- One process's paging can then impact the performance of the entire system: a process that does a lot of paging induces more disk I/O for everyone.
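
A minimal sketch of the fixed-space variant in C, with a hypothetical per-process resident-page counter and limit:

```c
#include <stdbool.h>
#include <stddef.h>

struct process {
    size_t resident;   /* frames this process currently holds       */
    size_t limit;      /* fixed-space cap on its physical memory use */
};

/* Hypothetical helpers, for illustration. */
bool   free_frame_available(void);
size_t take_free_frame(void);
size_t evict_from(struct process *p);   /* run the replacement policy on p's own pages */

/* Fixed space: a process at its cap pays for its own page fault. */
size_t get_frame(struct process *p)
{
    if (p->resident < p->limit && free_frame_available()) {
        p->resident++;
        return take_free_frame();
    }
    return evict_from(p);   /* steal from this process only, never from others */
}
```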

Thrashing
- As the system becomes more loaded, it spends more and more of its time paging; eventually, no useful work gets done. The system is overcommitted.
- If the system has too little memory, the page replacement algorithm doesn't matter.
Possible solutions:
- Change scheduling priorities to slow down processes that are thrashing.
- Identify processes that are hogging the system and kill them?

Reasons for Thrashing
- The process doesn't reuse memory, so caching doesn't work (past != future).
- The process does reuse memory, but it does not fit in physical memory.
- Individually, all processes fit and reuse memory, but there are too many of them for the system. This last case can actually be solved!

Dealing with Thrashing
Approach 1: Working set
- How much memory does the process need in order to make reasonable progress (its working set)?
- Only run processes whose memory requirements can be satisfied.
Approach 2: Page fault frequency (PFF)
- PFF = page faults / instructions executed.
- If PFF rises above a threshold, the process needs more memory. Not enough memory on the system? Swap it out.
- If PFF sinks below a threshold, memory can be taken away from the process.

Working Set
A process's working set is the set of pages that it currently needs.
- Definition: WS(P, t, w) = the set of pages that process P accessed in the time interval [t-w, t].
- w is usually counted in terms of the number of page references: a page is in the working set if it was referenced in the last w page references (a direct computation is sketched below).
- The working set changes over the lifetime of the process: periods of high locality exhibit a smaller working set, periods of low locality a larger one.
Basic idea: give each process enough memory for its working set.
- If the working set is larger than the physical memory allocated to the process, the process will tend to swap.
- If the working set is smaller than the memory allocated to the process, memory is being wasted.
- This amount of memory grows and shrinks over time.
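
A sketch of the WS(P, t, w) definition applied to a recorded reference string, assuming page numbers are small integers below an arbitrary bound:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_PAGES 1024   /* assumed bound on distinct page numbers */

/*
 * WS(t, w): the set of distinct pages referenced in the interval
 * [t-w, t] of the reference string refs[].  Marks members in in_ws[]
 * and returns the working-set size.
 */
size_t working_set(const int *refs, size_t t, size_t w, bool in_ws[MAX_PAGES])
{
    size_t size = 0;

    for (size_t i = 0; i < MAX_PAGES; i++)
        in_ws[i] = false;

    size_t start = (t >= w) ? t - w : 0;
    for (size_t i = start; i <= t; i++) {
        if (!in_ws[refs[i]]) {
            in_ws[refs[i]] = true;   /* first reference in the window */
            size++;
        }
    }
    return size;
}
```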

Estimating the Working Set
How do we determine the working set of a process?
Simple approach: approximate it with an interval timer plus a per-page reference bit (sketched below).
- Example: a window of t = 10,000 instructions, with a timer interrupt every 5,000 instructions.
- Keep 2 history bits in memory for each page.
- On every timer interrupt, shift each page's history bits to the right, copy the reference bit into the high-order bit, and clear all reference bits.
- If any of a page's history bits is 1, the page is in the working set.
Why is this not completely accurate? We don't know exactly when, within the last 5,000 instructions, the page was accessed.
Improvement: use 10 history bits and interrupt every 1,000 instructions.
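
A sketch of that timer-interrupt bookkeeping in C, using the improved 10-bit variant; the page count and array layout are hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NPAGES        1024   /* assumed number of pages, for illustration     */
#define HISTORY_BITS  10     /* the improved variant from the slide (2 or 10) */

static uint16_t history[NPAGES];   /* per-page history bits                    */
static bool     refbit[NPAGES];    /* reference bit, set by hardware on access */

/* Run on every interval-timer interrupt. */
void timer_tick(void)
{
    for (size_t p = 0; p < NPAGES; p++) {
        history[p] >>= 1;                            /* age the history         */
        if (refbit[p])
            history[p] |= 1u << (HISTORY_BITS - 1);  /* record recent use       */
        refbit[p] = false;                           /* clear for next interval */
    }
}

/* A page is considered part of the working set if any history bit is set. */
bool in_working_set(size_t p)
{
    return history[p] != 0;
}
```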

Working Set (continued)
Now that we know the working set, how do we allocate memory?
- If the working sets of all processes fit in physical memory, we are done.
- Otherwise, reduce the memory allocation of the larger processes. Idea: big processes will swap anyway, so let the small jobs run. This is very similar to shortest-job-first scheduling: give smaller processes a better chance of fitting in memory.
How do we decide the working-set limit T?
- If T is too large, very few processes will fit in memory.
- If T is too small, the system will spend more time swapping.

Page-Fault Frequency Scheme
- Page fault rate = (number of page faults) / (number of executed instructions).
- Establish an acceptable page-fault rate.
- If the actual rate is too low, the process loses a frame.
- If the actual rate is too high, the process gains a frame (or is swapped out).
(This feedback loop is sketched below.)
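
A sketch of that feedback loop in C; the thresholds, counters, and frame adjustments are hypothetical illustrations:

```c
/* Hypothetical per-process counters for the current measurement interval. */
struct proc_stats {
    unsigned long faults;         /* page faults in the interval           */
    unsigned long instructions;   /* instructions executed in the interval */
    unsigned long frames;         /* physical frames currently allocated   */
};

#define PFF_HIGH 0.0010   /* above this rate: the process needs more memory */
#define PFF_LOW  0.0001   /* below this rate: a frame can be taken away     */

void adjust_allocation(struct proc_stats *p)
{
    if (p->instructions == 0)
        return;                   /* nothing measured yet */

    double pff = (double)p->faults / (double)p->instructions;

    if (pff > PFF_HIGH)
        p->frames++;              /* or swap the process out if no frame is free */
    else if (pff < PFF_LOW && p->frames > 1)
        p->frames--;

    p->faults = 0;                /* start a new measurement interval */
    p->instructions = 0;
}
```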

Variable Partitions
Remember? Allow variable-sized partitions.
- Now requires both a base and a limit register (the check is sketched below).
- Problem with segmentation: external fragmentation, i.e. holes left in physical memory when segments are destroyed.
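
A sketch of the base-and-limit check that this scheme (and, later, each segment) performs on every access; the structure and field names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical relocation state for one partition. */
struct partition {
    uint32_t base;    /* start of the partition in physical memory */
    uint32_t limit;   /* size of the partition in bytes            */
};

/* Translate a logical address; returns false on a protection fault. */
bool translate(const struct partition *p, uint32_t logical, uint32_t *physical)
{
    if (logical >= p->limit)
        return false;              /* address outside the partition: trap */
    *physical = p->base + logical;
    return true;
}
```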

Segmentation
- A memory-management scheme that supports the user's view of memory.
- A program is a collection of segments.
- A segment is a logical unit such as: main program, functions, local variables, global variables, common block, stack, heap, symbol table, arrays.

Logical View of Segmentation
[Figure: segments 1-4 of a program in user (logical) space mapped onto non-contiguous regions of physical memory.]

Why use segments?
- Segments cleanly separate different areas of memory, e.g., code, data, heap, stack, shared memory regions, etc.
- Different segment registers are used to refer to each portion of the address space.
- This allows the hardware to enforce protection on a segment as a whole; e.g., a segment descriptor can mark the entire code segment as read-only.
- Page tables can accomplish the same result, but that requires the OS to carefully maintain page-table entries for entire segments.

Combined Segmentation and Paging
- A segment is a contiguous span of virtual addresses, rather than physical addresses as on the previous slide.
- Each segment is described by a segment descriptor, which holds the total segment size, access rights, and base virtual address.
- Each segment can have a corresponding page table: the segment is broken into pages internally, and either one- or two-level paging can be used within each segment.
- The virtual address now looks like a segment number plus an offset within the segment. The segment number may be part of the address, or stored in a separate register; the value of the current segment register determines which segment to access.
(This translation is sketched below.)
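
A sketch of the combined translation in C, assuming a hypothetical one-level page table per segment, 4 KB pages, and frame number 0 meaning "not present":

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12                        /* 4 KB pages, for illustration */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

/* Hypothetical segment descriptor: size, plus a per-segment page table. */
struct segment {
    uint32_t  size;        /* total segment size in bytes                */
    uint32_t *page_table;  /* frame number per page; 0 means not present */
};

/* virtual address = (segment number, offset within the segment) */
bool translate(const struct segment *segs, uint32_t segno,
               uint32_t offset, uint32_t *physical)
{
    const struct segment *s = &segs[segno];

    if (offset >= s->size)
        return false;                           /* segment violation       */

    uint32_t vpn   = offset >> PAGE_SHIFT;      /* page within the segment */
    uint32_t frame = s->page_table[vpn];
    if (frame == 0)
        return false;                           /* page fault              */

    *physical = (frame << PAGE_SHIFT) | (offset & PAGE_MASK);
    return true;
}
```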

Virtual address space with segments
[Figure: a single virtual address space laid out as separate segments.]

Intel x86 Segments
Multiple segment registers:
- CS: code segment; DS: data segment; SS: stack segment. Also ES, FS, and GS for other segments.
- Each instruction uses one of these segment registers. For example, an instruction fetch implicitly uses the segment pointed to by CS, and push/pop instructions implicitly use the segment pointed to by SS.
Segment descriptor information:
- Virtual base address and size of the segment.
- Segment access rights (read, write, execute).
All segments share the same linear address space:
- This means there is one set of page tables for all segments in a process.
- Segments can overlap in the linear address space, too.

Intel x86 Address Translation
Logical/virtual address → (segmentation) → linear address → (paging) → physical address.

Do we need both Segmentation and Paging?
- Short answer: you don't; it just adds overhead.
- Most operating systems use "flat mode": set base = 0 and bounds = 0xffffffff in all segment registers, then forget about them.
- Linux on x86: one segment for user code, another segment for user data. Both segments cover the same virtual address range (0... 3GB). Another pair of segments covers kernel virtual addresses (3GB... 4GB).

Summary
- Virtual memory introduces another level in the memory hierarchy in order to abstract away the amount of memory actually available on a particular system.
- This is incredibly important for ease of programming: imagine having to explicitly check the size of physical memory and manage it in each and every one of your programs.
- It is also useful for preventing fragmentation in multiprogramming environments.
- It can be implemented using paging (and sometimes segmentation, or both).
- Page faults are expensive, so we can't have too many of them; it is important to implement a good page replacement policy.
- Watch out for thrashing!