
Operating Systems, Lecture 9

Today's overview
- Page replacement algorithms
  - LRU approximation algorithms
- Frame allocation
- Thrashing
- Other topics

LRU approximation algorithms (1)
- LRU page replacement needs rather complex hardware support
  - Most systems do not provide full support
- One possible way is to associate a reference bit with each page-table entry
  - Each time a memory reference occurs, the hardware sets the reference bit of the corresponding page to 1
  - 1 indicates the page has been used; 0 indicates it has not
  - This bit is used to approximate the LRU algorithm

LRU approximation algorithms (2)
- Additional-Reference-Bit Algorithm
  - Add an 8-bit history register (HR) to each page in the page table
  - At regular intervals (~100 ms), the OS shifts the reference bit into the MSB of the HR, right-shifting the remaining 7 bits
  - Accordingly, the HR contains the history of page use for the last eight time periods
[Figure: the reference bit is shifted into the history register]

LRU approximation algorithms (3)
- An HR of 00000000 indicates the page has not been referenced for the last eight periods
- An HR of 11111111 indicates the page has been referenced at least once in each period
- 11000100 > 01110111
  - The larger the HR (as an unsigned integer), the more recently the page was used
  - A page with the smallest HR is the LRU page

LRU approximation algorithms (4)
- Second-Chance Algorithm
  - Basically, the second-chance algorithm is a FIFO replacement algorithm
  - It uses only the reference bit
  - With FIFO replacement, a page is selected and its reference bit is checked:
    - If the bit is 0, replace the page
    - If the bit is 1, give the page a second chance and move on to select the next FIFO page
      - In this case, the reference bit is cleared and the page's arrival time is reset to the current time
      - This page will not be replaced until all other pages are replaced

LRU approximation algorithms (5)
- Second-Chance (Clock) Algorithm
  - Implemented using a circular queue
[Figure: a pointer (the clock hand) sweeps the circular queue; pages with reference bit 1 are given a second chance, and the next victim is replaced with a new page]

LRU approximation algorithms (6)
- Enhanced Second-Chance Algorithm
  - Consider the reference bit and modify bit as an ordered pair
  - Each page falls into one of the following four classes:
    - (0,0): neither recently used nor modified
      - Best candidate for replacement
    - (0,1): not recently used but modified
      - Not quite as good; needs disk I/O before replacement
    - (1,0): recently used but clean
      - Likely to be used again soon
    - (1,1): recently used and modified
      - Likely to be used again soon and needs disk I/O
  - Using the clock algorithm, examine the pair and replace a page in the lowest nonempty class, ideally (0,0)

LRU approximation algorithms (7)
- Counting-Based Algorithms
  - Keep a count of the number of references to each page
  - Least-Frequently-Used (LFU) algorithm
    - Replace the page with the smallest count
  - Most-Frequently-Used (MFU) algorithm
    - Replace the page with the largest count
  - Neither is practical, and neither approximates OPT well
- Page-Buffering Algorithm
  - Keep a pool of free frames; the desired page is first read into a free frame before the victim is written out

Allocation of Frames (1)
- Minimum Number of Frames
  - As the number of frames allocated to each process decreases, the page-fault rate increases and performance degrades
  - Therefore, we must allocate a sufficient number of frames to ensure good performance
  - The minimum number of frames is defined by the computer architecture
    - Example: on a system in which memory-reference instructions have one memory address, we require at least two frames: one for the instruction and another for the memory reference

Allocation of Frames (2)
- Another example
  - If a system allows one-level indirect addressing, we need at least three frames
  - If a system allows multiple levels of indirection, in the worst case a single instruction could require a frame for every level, so we need a limit on the indirection level
- A simple strategy is equal allocation
  - Split m frames equally among n processes, giving each m/n frames
  - Example: split 93 frames among 5 processes; each process gets 18 frames (the 3 leftover frames can serve as a free-frame pool)

Allocation of Frames (3)
- Proportional algorithm
  - Allocate memory to each process according to its size
    - Example: 62 frames among two processes, one of 10 pages and one of 127 pages
      - The former process gets 10/137 * 62 ≈ 4 frames
      - The latter process gets 127/137 * 62 ≈ 57 frames
- Priority allocation
  - Calculate the proportion according to both size and priority
  - A higher-priority process gets a larger allocation

Allocation of Frames (4)
- Global or Local Allocation
  - Global replacement
    - Allows a process to select a replacement frame from the set of all frames
  - Local replacement
    - Only allows a process to select a replacement frame from its own allocated frames

Thrashing (1)
- If a process does not have the minimum number of frames, it faults again, and again, and again
  - This kind of high paging activity is thrashing
  - More precisely, a process is thrashing if it is spending more time paging than executing
- If a process is thrashing:
  - CPU utilization is low
  - The OS tries to increase the degree of multiprogramming by bringing in a new process
  - The new process is allocated few frames and also starts thrashing
  - Repeating this cycle makes the problem worse

Thrashing (2)
- If the degree of multiprogramming is too high, thrashing sets in

Thrashing (3)
- Solutions to thrashing
  - Use a local replacement algorithm
  - Working-Set strategy
  - Watch the page-fault rate
    - If the rate is higher than an upper bound, allocate additional frames to the process
    - If the rate is lower than a lower bound, remove a frame from the process

Thrashing (4)
- Working-Set strategy
  - To prevent thrashing, we simply provide a process with as many frames as it needs
  - The problem is: how do we know how many frames it needs?
- In the working-set strategy, we observe how many frames a process is actually using
  - We define a parameter d, the working-set window: d is the period over which we check which pages the process is actually using
  - The set of pages referenced in the last d page references is the working set

Thrashing (5)
- Locality of reference
  - The idea behind the working-set model
  - As a process executes, it moves from locality to locality
  - A locality is a set of pages that are actively used together

Thrashing (6)
- Working-Set example
  - The size of the WS at t_1 is 5
  - The size of the WS at t_2 is 2
- Given d, we can compute the size of the WS for each process (WSS_i) and the sum of the WSS_i:
  D = Σ WSS_i

Thrashing (7)
- D is the total demand for frames
  - If D is greater than the total number of available frames m (D > m), thrashing will occur!
- Working-Set strategy
  - To prevent thrashing, the OS constantly watches D; whenever D > m, the OS must suspend a process to make D smaller than m
    - The frames allocated to that process become free
    - Later, the suspended process will be resumed
  - In practice, the cost of keeping track of the WS is high

Thrashing (8)
- Approximation to the WS
  - Use a reference bit and a fixed-interval timer
  - Keep the last n reference bits for each page
  - On each timer interrupt, save the reference bit and clear it
  - Check whether at least one of the n bits is 1; if yes, the page is included in the WS
  - Example: d = 10,000 references, timer interval = 5,000 references
    - 2 bits are saved for each page
    - May not be accurate enough

Page Size Considerations
- Page size (PS) is generally determined by the hardware architecture
- There are many factors in choosing a PS
  - PS affects the size of the page table (PT)
    - The smaller the PS, the larger the PT
  - PS affects the degree of fragmentation
    - The smaller the PS, the smaller the internal fragmentation
  - PS affects the performance of I/O for swapping
    - The larger the PS, the better the I/O performance
  - PS affects the degree of locality
    - The smaller the PS, the better the locality and hence the resolution
      - Isolates only the memory that is actually needed
- Typical PS is 4KB to 8KB

Memory Interlock for I/O
- To prevent pages from being swapped out while they transfer data through I/O devices
  - We need a mechanism to lock a specific region of pages: memory interlock
  - A lock bit is used for this purpose
    - If the lock bit of a page is 1, the OS does not swap out the page
- An I/O buffer is locked
- Some or all of the OS kernel is locked
- A page that has just been brought into memory but not yet used is locked until it has been used at least once

Program Structure (1)
- Demand paging is designed to be transparent to the user program
  - In some cases, however, performance can be improved if the user (or compiler) is aware of demand paging
- Example problem: initializing an array of data
  - Assumption: PS is 128 words
- One possible code (Case A) is:

    int i, j;
    int data[128][128];
    for (j = 0; j < 128; j++)
        for (i = 0; i < 128; i++)
            data[i][j] = 0;

Program Structure (2)
- Another possible code (Case B) is:

    int i, j;
    int data[128][128];
    for (i = 0; i < 128; i++)
        for (j = 0; j < 128; j++)
            data[i][j] = 0;

- In the C language, the array is stored in row-major order: data[0][0], data[0][1], ...
- In this case, each row occupies one page
- If the OS allocates fewer than 128 frames:
  - Case A results in ~16,000 page faults (up to 128 × 128 = 16,384)
  - Case B results in only 128 page faults
  - The performance of Case A is much worse than that of Case B

About the Midterm Exam
- 12/6: midterm exam (9:00-10:30)
  - Location: M4
  - Coverage: up to today's lecture (Lectures 1 through 9)
  - Points: 33
  - Textbooks, notes, dictionaries, etc. may be brought in
  - The exercise session afterward is cancelled