Operating Systems. Virtual Memory




Virtual Memory Topics

1. Memory Hierarchy
2. Why Virtual Memory
3. Virtual Memory Issues
4. Virtual Memory Solutions
5. Locality of Reference
6. Virtual Memory with Segmentation
7. Paged Memory Architecture
8. Algorithm for Virtual Paged Memory
9. Paging Example
10. Replacement Algorithms
11. Working Set Characteristics

Memory Hierarchy

Registers
Cache
Main Memory
Magnetic Disk (Cache)
Backups
Optical Juke Box
Remote Access

Figure: Hierarchical Memory

Why Virtual Memory?

Users frequently face a speed vs. cost decision. Users want:
1. To extend fast but expensive memory with slower but cheaper memory.
2. To have fast access.

Virtual Memory Issues

1. Fetch Strategy - Retrieval from secondary storage
   (a) Demand - on an as-needed basis
   (b) Anticipatory - aka Prefetching or Prepaging
2. Placement Strategy - Where does the data go?

Virtual Memory Issues (continued)

3. Replacement Strategy - When a page or segment must be loaded, which one gets swapped out? We seek to avoid overly frequent replacement (called thrashing).

Virtual Memory Solutions

Virtual memory solutions employ:
1. Segmentation
2. Paging
3. Combined Paging and Segmentation - the current trend

Locality of Reference

Locality of reference is the observation that processes tend to reference storage in nonuniform, highly localized patterns. Locality can be:
1. Temporal - looping, subroutines, stacks, counter variables.
2. Spatial - array traversals, sequential code, data encapsulation.

Locality of Reference (continued)

Things that hurt locality:
1. Frequently taken conditional branches
2. Linked data structures
3. Random access patterns in storage

Virtual Memory with Segmentation Only

Segmentation provides protection and relocation, so:
1. The operating system can swap out segments that are not in use.
2. Data objects can be assigned their own data segments if they do not fit in an existing data segment.
3. Sharing can be facilitated using the protection mechanism.

Virtual Memory with Segmentation Only (continued)

4. There is no internal fragmentation.
5. Main memory suffers external fragmentation.

Pure segmented solutions are not currently in fashion. These techniques combine well with paging (as shown later).

Virtual Paged Memory Architecture

Paging is frequently used to increase the logical memory space. Page frames (or frames for short) are the unit of placement/replacement. A page which has been updated since it was loaded is dirty; otherwise it is clean. A page frame may be locked in memory (not replaceable), e.g., the O/S kernel.

Virtual Paged Memory Architecture (continued)

Some architectural challenges include:
1. Protection information kept for each page.
2. Dirty/clean/lock status kept for each page.
3. Instructions may span pages.
4. An instruction's operands may span pages.
5. Iterative instructions (e.g., Intel 80x86) may have data spanning many pages.

The Algorithm for Virtual Paged Memory

For each reference do the following:
1. If the page is resident, use it.
2. If there is an available page frame, allocate it and load the required nonresident page.

The Algorithm for Virtual Paged Memory (continued)

3. If there are no available frames, then:
   (a) Select a page to be removed (the "victim").
   (b) If the victim is dirty, write it to secondary storage.
   (c) Load the nonresident page into the victim's frame.

(This procedure is sketched in C after the demand-paging notation below.)

The Algorithm for Virtual Paged Memory (continued)

Stallings [] uses the term page fault to mean replacement operations. Many others (including your instructor) use page fault to refer to loading any page (not just replacement).

Demand Page Replacement

Let M_t be the set of resident pages at time t. For a program that runs T steps, the memory state is:

    M_0, M_1, M_2, ..., M_T

Memory is usually empty when starting a process, so M_0 = ∅.

Demand Page Replacement (continued)

Assume that real memory has m page frames, so |M_t| ≤ m. Let X_t be the set of newly loaded pages and Y_t be the set of newly replaced pages at time t. Let y ∈ M_{t-1} be some page in M_{t-1} (if nonempty).

Demand Page Replacement (continued)

Each reference r_t updates memory:

    M_t = M_{t-1} ∪ X_t − Y_t
        = M_{t-1}                        if r_t ∈ M_{t-1}
        = M_{t-1} ∪ {r_t}                if r_t ∉ M_{t-1} and |M_{t-1}| < m
        = M_{t-1} ∪ {r_t} − {y}          if r_t ∉ M_{t-1} and |M_{t-1}| = m

Prefetching has had limited success in practice.

Page Fault Rate and Locality

The reference string, denoted w, is the sequence of pages referenced by a process. w is indexed by time:

    w = r_1, r_2, r_3, ..., r_|w|
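The per-reference algorithm of the previous slides, sketched in C. This is a minimal illustration, not kernel code: the frame table, the disk-I/O stand-ins, and the placeholder select_victim are assumptions made for the sketch (a real system keeps this state in hardware-defined page-table entries and picks victims with FIFO, LRU, CLOCK, etc.).

    #include <stdbool.h>
    #include <stdio.h>

    #define NFRAMES 3           /* small, to force replacement in the demo */
    #define NO_PAGE (-1)

    typedef struct {
        int  page;              /* virtual page held here, or NO_PAGE if free */
        bool dirty;             /* modified since it was loaded?              */
    } frame_t;

    static frame_t frames[NFRAMES];
    static int faults = 0;

    /* Stand-ins for backing-store I/O and the replacement policy (a real
       kernel does disk I/O here and runs FIFO/LRU/CLOCK to pick the victim). */
    static void read_page_from_disk(int page, int f)  { (void)page; (void)f; }
    static void write_page_to_disk(int page, int f)   { (void)page; (void)f; }
    static int  select_victim(void)                   { return 0; }  /* placeholder */

    /* One memory reference to 'page'; 'is_write' marks the page dirty. */
    static void reference_page(int page, bool is_write)
    {
        /* 1. If the page is resident, use it. */
        for (int f = 0; f < NFRAMES; f++) {
            if (frames[f].page == page) {
                if (is_write) frames[f].dirty = true;
                return;
            }
        }

        faults++;                                /* page fault */

        /* 2. Prefer a free frame. */
        int f = -1;
        for (int i = 0; i < NFRAMES; i++)
            if (frames[i].page == NO_PAGE) { f = i; break; }

        /* 3. Otherwise pick a victim and clean it if dirty. */
        if (f < 0) {
            f = select_victim();
            if (frames[f].dirty)
                write_page_to_disk(frames[f].page, f);
        }

        read_page_from_disk(page, f);
        frames[f].page  = page;
        frames[f].dirty = is_write;
    }

    int main(void)
    {
        for (int f = 0; f < NFRAMES; f++) frames[f].page = NO_PAGE;

        int w[] = { 1, 2, 3, 4, 1, 2, 5 };       /* a short reference string */
        for (int t = 0; t < 7; t++) reference_page(w[t], false);

        printf("%d page faults\n", faults);
        return 0;
    }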

Cost Measures For Paging

For each level of the hierarchy, a page either:
1. is resident, with probability 1 − p, and can be referenced with cost 1, or
2. is not resident, with probability p, and must be loaded with a cost F.

Cost Measures For Paging (continued)

The Effective Access Time (EAT) for a process is the average time to access a single memory location:

    EAT = 1 + pF

Cost Measures For Paging (continued)

The Duty Factor of a process measures the efficiency of processor use by the process, given its page fault pattern:

    DF = |w| / (|w| (1 + pF)) = 1 / (1 + pF) = 1 / EAT
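To make the formulas concrete, a small sketch that evaluates EAT and the duty factor. The numbers p = 0.001 and F = 100000 are illustrative assumptions, not values from the lecture.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative assumptions: fault probability and fault service cost,
           both measured in units of one memory-access time. */
        double p = 0.001;      /* probability a reference faults  */
        double F = 100000.0;   /* cost to load a nonresident page */

        double eat = 1.0 + p * F;   /* EAT = 1 + pF        */
        double df  = 1.0 / eat;     /* DF  = 1 / (1 + pF)  */

        printf("EAT = %.1f accesses, duty factor = %.4f\n", eat, df);
        /* Even a 0.1% fault rate leaves the process doing useful work only
           about 1% of the time, which is why locality matters so much.     */
        return 0;
    }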

A Paging Example

Consider the following example [?]:

    STEPSIZE = ;
    for (i = ; i <= n; i = i + STEPSIZE) {
        A[i] = B[i] + C[i];
    }

A Paging Example (continued)

With the pseudo assembly code:

    000  MOV STEPSIZE,     # STEPSIZE =
    00   MOV R, STEPSIZE   # i =
    00   MOV R, n          # R = n
    00   CMP R, R          # test i > n
    00   BGT 009           # exit loop
    00   MOV R, B(R)       # R = B[i]
    00   ADD R, C(R)       # R = B[i] + C[i]
    00   ADD R, STEPSIZE   # increment R
    00   JMP 00            # back to the test
    009  ...               # after the loop

The Example (continued)

    Storage Address    Value
    000 - FFF          Storage for A
    000 - FFF          Storage for B
    000 - FFF          Storage for C
    9000               Storage for n
    900                Storage for STEPSIZE

    Table: Reference Locations
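A self-contained C version of the loop above, with STEPSIZE = 1 and n = 100 assumed; these values are illustrative choices matching the i = 1..100 page trace mentioned later, not recovered constants from the slide.

    #include <stdio.h>

    #define N        100      /* assumed loop bound                  */
    #define STEPSIZE 1        /* assumed step size                   */

    /* In the lecture's memory map, A, B and C live in separate storage
       regions, so each iteration touches pages from several regions
       plus the code page.                                            */
    static int A[N + 1], B[N + 1], C[N + 1];

    int main(void)
    {
        for (int i = 1; i <= N; i = i + STEPSIZE) {
            A[i] = B[i] + C[i];
        }
        printf("A[%d] = %d\n", N, A[N]);
        return 0;
    }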

The Example (continued)

In this example: w = 99(9) n

Locality in Paged Memory

If we have at least as many frames (at the given KB page size) as the program has pages, then we know that the program does only its initial page loads and no swaps. Otherwise, how many page faults are there?

Locality in Paged Memory

Figure: Page Trace of the Example for n = 100 - access patterns in A[i] = B[i] + C[i], i = 1..100 (x-axis: Time of Reference; y-axis: Page Frame).

FIFO Replacement

Suppose that we try FIFO replacement (the oldest page gets replaced). Consider the reference string []:

    w = 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Derive the number of replacements done when:
- 3 frames are used
- 4 frames are used

FIFO Replacement with 3 Pages

    Time     0   1   2   3   4   5   6   7   8   9  10  11
    r_t      1   2   3   4   1   2   5   1   2   3   4   5
    Frame 1  1   1   1   4   4   4   5   5   5   5   5   5
    Frame 2  -   2   2   2   1   1   1   1   1   3   3   3
    Frame 3  -   -   3   3   3   2   2   2   2   2   4   4
    Fault?   F   F   F   F   F   F   F   -   -   F   F   -
    Faults   1   2   3   4   5   6   7   7   7   8   9   9

FIFO Replacement with 4 Pages

    Time     0   1   2   3   4   5   6   7   8   9  10  11
    r_t      1   2   3   4   1   2   5   1   2   3   4   5
    Frame 1  1   1   1   1   1   1   5   5   5   5   4   4
    Frame 2  -   2   2   2   2   2   2   1   1   1   1   5
    Frame 3  -   -   3   3   3   3   3   3   2   2   2   2
    Frame 4  -   -   -   4   4   4   4   4   4   3   3   3
    Fault?   F   F   F   F   -   -   F   F   F   F   F   F
    Faults   1   2   3   4   4   4   5   6   7   8   9  10
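A small FIFO simulator that reproduces the two fault counts in the tables above (9 faults with 3 frames, 10 with 4). This is a sketch: the circular-buffer bookkeeping is one straightforward way to implement FIFO, not code from the lecture.

    #include <stdio.h>

    /* Count page faults for FIFO replacement on reference string w[0..n-1]
       with 'nframes' frames. Every load of a nonresident page is a fault.  */
    static int fifo_faults(const int *w, int n, int nframes)
    {
        int frames[16];              /* resident pages; nframes <= 16 assumed */
        int used = 0, oldest = 0, faults = 0;

        for (int t = 0; t < n; t++) {
            int resident = 0;
            for (int f = 0; f < used; f++)
                if (frames[f] == w[t]) { resident = 1; break; }
            if (resident) continue;

            faults++;
            if (used < nframes) {
                frames[used++] = w[t];            /* free frame available    */
            } else {
                frames[oldest] = w[t];            /* replace the oldest page */
                oldest = (oldest + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void)
    {
        int w[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
        int n = sizeof w / sizeof w[0];

        /* Belady's anomaly: 3 frames -> 9 faults, 4 frames -> 10 faults. */
        printf("3 frames: %d faults\n", fifo_faults(w, n, 3));
        printf("4 frames: %d faults\n", fifo_faults(w, n, 4));
        return 0;
    }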

Belady's (aka FIFO) Anomaly

Recall that when we used a FIFO replacement scheme with 3 frames we got 9 faults, and with 4 frames we got 10 faults. Usually, one would expect fewer faults for a larger number of frames. That did not occur; what happened? Belady discovered this (counter-intuitive) problem.

The Optimal (OPT) Replacement Algorithm

The optimal (OPT) strategy is to select as the victim the page that will not be referenced for the longest time into the future. Note that in the following table, at t = 9, page 2 could have been the victim too.

The Optimal (OPT) Replacement Algorithm

    Time     0   1   2   3   4   5   6   7   8   9  10  11
    r_t      1   2   3   4   1   2   5   1   2   3   4   5
    Frame 1  1   1   1   1   1   1   1   1   1   3   3   3
    Frame 2  -   2   2   2   2   2   2   2   2   2   4   4
    Frame 3  -   -   3   4   4   4   5   5   5   5   5   5
    Fault?   F   F   F   F   -   -   F   -   -   F   F   -
    Faults   1   2   3   4   4   4   5   5   5   6   7   7
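A sketch of OPT victim selection. OPT needs the future reference string, so it is useful as an off-line benchmark rather than as a real policy; pick_opt_victim below is a hypothetical helper that could be plugged into a simulator such as the FIFO one above.

    #include <stdio.h>

    /* Choose the resident page whose next use lies furthest in the future
       (or that is never used again). 'frames[0..nframes-1]' holds the
       resident pages; 'w[t..n-1]' is the remaining reference string.      */
    static int pick_opt_victim(const int *frames, int nframes,
                               const int *w, int t, int n)
    {
        int victim = 0, victim_next = -1;

        for (int f = 0; f < nframes; f++) {
            int next = n;                      /* n means "never used again" */
            for (int k = t; k < n; k++)
                if (w[k] == frames[f]) { next = k; break; }
            if (next > victim_next) {          /* furthest next use wins */
                victim = f;
                victim_next = next;
            }
        }
        return victim;                         /* frame index to replace */
    }

    int main(void)
    {
        int w[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
        int frames[] = { 1, 2, 3 };       /* resident just before t = 3 */

        /* At t = 3 the reference to page 4 faults; the remaining string
           starts at index 4, so the victim is the frame holding page 3. */
        int v = pick_opt_victim(frames, 3, w, 4, 12);
        printf("victim: frame %d (page %d)\n", v, frames[v]);
        return 0;
    }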

Stack Page Replacement Algorithms

Let M(m, w) represent the set of pages in real memory after processing reference string w in m frames. The inclusion property is satisfied for a given page replacement algorithm if:

    M(m, w) ⊆ M(m+1, w)

Stack replacement algorithms [?] satisfy the inclusion property.

Stack Page Replacement Algorithms (continued)

E.g., FIFO is NOT a stack page replacement algorithm (inclusion is NOT satisfied), while OPT is.

Least Recently Used (LRU)

LRU is a popular stack page replacement strategy. Pages are ordered by time of last access, with the page used furthest in the past being removed.
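Before the worked table that follows, a minimal LRU bookkeeping sketch. Using per-frame timestamps rather than an ordered list is an illustrative implementation choice, not the lecture's; the caller is assumed to refresh last_used[f] on every reference.

    #include <limits.h>
    #include <stdio.h>

    /* LRU victim selection using last-use timestamps: last_used[f] records
       the time of the most recent reference to the page in frame f, so the
       smallest value identifies the least recently used page.             */
    static int pick_lru_victim(const long *last_used, int nframes)
    {
        int victim = 0;
        long oldest = LONG_MAX;

        for (int f = 0; f < nframes; f++) {
            if (last_used[f] < oldest) {
                oldest = last_used[f];
                victim = f;
            }
        }
        return victim;      /* frame holding the least recently used page */
    }

    int main(void)
    {
        /* Frames whose pages were last used at times 6, 7 and 8
           (the situation at t = 9 in the table below).           */
        long last_used[] = { 6, 7, 8 };
        printf("evict frame %d\n", pick_lru_victim(last_used, 3));
        return 0;
    }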

LRU (continued)

    Time     0   1   2   3   4   5   6   7   8   9  10  11
    r_t      1   2   3   4   1   2   5   1   2   3   4   5
    Frame 1  1   1   1   4   4   4   5   5   5   3   3   3
    Frame 2  -   2   2   2   1   1   1   1   1   1   4   4
    Frame 3  -   -   3   3   3   2   2   2   2   2   2   5
    Fault?   F   F   F   F   F   F   F   -   -   F   F   F
    Faults   1   2   3   4   5   6   7   7   7   8   9  10

Clock Replacement Algorithms

Clock replacement algorithms are stack algorithms. The simplest style marks pages with use=1 when referenced and use=0 upon a circular scan.

Figure: a circular buffer of page frames that are candidates for replacement. Each frame records a use bit, and a next-frame pointer marks the first frame the next scan will examine.

Simple Clock Replacement

The first page encountered with use=0 is selected. The final position of the previous scan is used to start the next scan.

Figure: the same circular buffer after a scan; pages passed over have had their use bits cleared to 0, and the pointer rests just past the frame that was replaced.

Simple Clock (CLOCK)

CLOCK is a popular stack page replacement strategy. In the table below, pages whose use bit is set are marked with an asterisk.

CLOCK (continued)

    Time     0    1    2    3    4    5    6    7    8    9   10   11
    r_t      1    2    3    4    1    2    5    1    2    3    4    5
    Frame 1  1*   1*   1*   4*   4*   4*   5*   5*   5*   5    5    5*
    Frame 2  -    2*   2*   2    1*   1*   1    1*   1*   3*   3*   3*
    Frame 3  -    -    3*   3    3    2*   2    2    2*   2    4*   4*
    Fault?   F    F    F    F    F    F    F    -    -    F    F    -
    Faults   1    2    3    4    5    6    7    7    7    8    9    9

Gold's Clock Algorithm

Another popular type of clock algorithm exploits the fact that dirty pages require an additional write, and as such make poor victims. This algorithm keeps track of recent modification, with m=1 if updated and m=0 otherwise; use bits indicate read accesses and are denoted u=1 for a recent read, u=0 otherwise.

Gold's Clock Algorithm (continued)

The steps are (a sketch in C follows below):
1. Scan for a page with u=0, m=0 without changing any bits. If one is found, stop; otherwise, after all pages have been scanned, continue.
2. Scan for a page with u=0, m=1, setting u=0 as the scan proceeds. If one is found, stop; otherwise continue.
3. Repeat step 1: all pages had u=1 before and now have u=0, so step 1 or step 2 will be satisfied.

In other words, a use-bit scan is done first (as per CLOCK) and then, if that fails, a mod-bit scan is done.
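A sketch of the two-pass (u, m) scan described above. The struct and field names are illustrative; the scan logic follows the three steps.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int  page;
        bool u;          /* use bit: set on reference, cleared by the scan */
        bool m;          /* modify bit: set when the page is written       */
    } clock_frame_t;

    /* Select a victim frame with the two-pass clock scan. 'hand' is where
       the previous scan stopped; the caller stores (result + 1) mod n as
       the starting hand for the next scan.                                */
    static int gold_clock_victim(clock_frame_t *f, int n, int hand)
    {
        for (;;) {
            /* Step 1: look for (u=0, m=0) without changing any bits. */
            for (int i = 0, p = hand; i < n; i++, p = (p + 1) % n)
                if (!f[p].u && !f[p].m)
                    return p;

            /* Step 2: look for (u=0, m=1), clearing use bits as we go. */
            for (int i = 0, p = hand; i < n; i++, p = (p + 1) % n) {
                if (!f[p].u && f[p].m)
                    return p;
                f[p].u = false;
            }
            /* Step 3: every use bit is now 0, so the next pass succeeds. */
        }
    }

    int main(void)
    {
        clock_frame_t f[4] = {
            { 10, true,  false },   /* accessed recently, clean            */
            { 11, false, true  },   /* not accessed recently, but dirty    */
            { 12, false, false },   /* not accessed recently, clean        */
            { 13, true,  true  },   /* accessed recently, dirty            */
        };
        int v = gold_clock_victim(f, 4, 0);
        printf("victim: frame %d (page %d)\n", v, f[v].page);
        return 0;
    }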

Figure: frames classified by their (use, modify) bits - not accessed recently and not modified; not accessed recently but modified; accessed recently but not modified.

Working Set (WS)

Working set algorithms are stack algorithms using a parameter, the window size w. All pages referenced within the last w references remain resident. The number of pages allocated varies over time (the upper bound is w).

Working Set (continued)

Figure: a window of length w over the process execution time, covering the interval (t − w, t]. The pages referenced by the process during this time interval constitute the process's working set W(t, w).
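A sketch that computes the working set size |W(t, w)| directly from the definition. The names and the small fixed-size scratch array are illustrative assumptions, not lecture code.

    #include <stdbool.h>
    #include <stdio.h>

    /* |W(t, w)|: the number of distinct pages among references
       r[t-w+1 .. t], clipped at the start of the reference string. */
    static int working_set_size(const int *r, int t, int w)
    {
        int seen[64];             /* assumes at most 64 distinct pages in the window */
        int nseen = 0;

        int start = (t - w + 1 > 0) ? t - w + 1 : 0;
        for (int k = start; k <= t; k++) {
            bool new_page = true;
            for (int i = 0; i < nseen; i++)
                if (seen[i] == r[k]) { new_page = false; break; }
            if (new_page)
                seen[nseen++] = r[k];
        }
        return nseen;             /* always <= w, the window size */
    }

    int main(void)
    {
        int r[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
        /* W(t=8, w=4) covers r[5..8] = 2, 5, 1, 2 -> 3 distinct pages. */
        printf("|W(8,4)| = %d\n", working_set_size(r, 8, 4));
        return 0;
    }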

Working Set Characteristics

Working sets can be characterized as being stable most of the time, and growing larger as the program makes transitions between localities of reference. Choosing the window size w is hard.

Figure: number of primary storage pages allocated to this process over time - a first working set, a transition between working sets, a second working set, another transition, a third working set, another transition, and a fourth working set.

Working Set Management

Since each process has a dynamically sized working set, and the window w is too loose an upper bound on working set size, the OS may have to select a process to deactivate.

Working Set Management

Selection Criteria:
1. The lowest priority process.
2. The process with the largest fault rate.
3. The process with the largest working set.
4. The process with the smallest working set.
5. The most recently activated process.
6. The process with the largest remaining quantum.

Lifetime Curves for Memory Access

Similar to inter-I/O event times, there are inter-page-fault times. The inter-page-fault time is called the lifetime. Lifetime curves are plotted as a function of the number of pages of memory allocated (m) and typically have a knee (performance region) where L(m)/m is maximal (a good value of m).

Lifetime Curves (continued)

Figure: a lifetime curve L(m) plotted against m, with the knee marked.

Page Fault Frequency (PFF)

Page fault frequency measures the time since the last page fault and tracks whether each page allocated to a process (via a use bit) has been accessed since the last page fault. This information is used as follows: processes with high fault rates are allocated more pages, while jobs with lower fault rates release pages. A sketch of the decision rule follows below.
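A minimal sketch of the PFF decision rule. The threshold T, the bookkeeping, and the toy resident-set counter are illustrative assumptions; the lecture describes only the general policy.

    #include <stdio.h>

    #define T 100                        /* assumed inter-fault threshold (in references) */

    typedef struct {
        long last_fault;                 /* time of the previous page fault      */
        int  resident_pages;             /* size of the resident set (toy model) */
    } pff_state_t;

    /* Run at each page fault: grow the resident set when faults come close
       together, shrink it when they are far apart.                          */
    static void pff_on_fault(pff_state_t *s, long now)
    {
        long interval = now - s->last_fault;   /* inter-fault time */
        s->last_fault = now;

        if (interval >= T) {
            /* Faults are infrequent: pages not used since the last fault would
               be released here; the toy model just shrinks the count.          */
            if (s->resident_pages > 1) s->resident_pages--;
        } else {
            /* Faults are frequent: grow the resident set.                      */
            s->resident_pages++;
        }
        /* A real implementation also clears the use bits of the remaining
           pages here, so the next inter-fault interval is observed afresh.     */
    }

    int main(void)
    {
        pff_state_t s = { 0, 4 };
        long fault_times[] = { 30, 55, 70, 400, 900 };   /* three close, two far */

        for (int i = 0; i < 5; i++) {
            pff_on_fault(&s, fault_times[i]);
            printf("fault at t=%ld -> resident set = %d pages\n",
                   fault_times[i], s.resident_pages);
        }
        return 0;
    }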