Operating Systems, Lecture 9
Today's Overview
- Page replacement algorithms
  - LRU approximation algorithms
- Frame allocation
- Thrashing
- Other topics
LRU approximation algorithms (1)
- LRU page replacement needs rather complex hardware support
  - Most systems do not provide full support
- One possible approach is to associate a reference bit with each page-table entry
  - Each time a memory reference occurs, the hardware sets the reference bit of the corresponding page to 1
  - 1 indicates the page has been used; 0 indicates it has not
  - This bit is used to approximate the LRU algorithm
LRU approximation algorithms (2)
- Additional-Reference-Bit Algorithm
  - Add 8 bits of data, a history register (HR), to each page in the page table
  - At regular intervals (e.g., every 100 ms), the OS right-shifts HR by one bit and puts the reference bit into the MSB of HR
  - Accordingly, HR contains the history of page use for the last eight time periods
[Figure: the reference bit is shifted into the MSB of the history register]
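The shift step above can be sketched in C. This is a minimal illustration, assuming a hypothetical page descriptor with a hardware-set reference bit and an 8-bit history register; the names and layout are illustrative, not from any real OS.

```c
#include <stdint.h>

/* Illustrative page descriptor for the additional-reference-bit
 * (aging) algorithm. */
struct page {
    uint8_t referenced;  /* set to 1 by hardware on each access */
    uint8_t history;     /* history register (HR), MSB = most recent period */
};

/* Called by the OS at each timer interval: shift the reference bit
 * into the MSB of HR and clear the reference bit. */
void age_page(struct page *p)
{
    p->history = (uint8_t)((p->history >> 1) | (p->referenced << 7));
    p->referenced = 0;
}
```

A page referenced in the current period gets 0x80 shifted into its HR; with each quiet period the value halves, so larger HR values (as unsigned integers) mean more recent use.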
LRU approximation algorithms (3)
- HR = 00000000 indicates the page has not been referenced in any of the last eight periods
- HR = 11111111 indicates the page has been referenced at least once in every period
- Interpreting HR as an unsigned integer, a larger value means the page was used more recently
- A page with the smallest HR is treated as the LRU page
LRU approximation algorithms (4)
- Second-Chance Algorithm
  - Basically, the second-chance algorithm is a FIFO replacement algorithm
  - It uses only the reference bit
  - With FIFO replacement, a page is selected and its reference bit is checked:
    - If the bit is 0, replace the page
    - If the bit is 1, give the page a second chance and move on to the next FIFO page
      - In this case, the reference bit is cleared and the arrival time is reset to the current time
      - This page will not be replaced until all other pages have been replaced (or given second chances)
LRU approximation algorithms (5)
- Second-Chance (Clock) Algorithm
  - Implemented using a circular queue and a pointer (the clock hand)
[Figure: pages arranged on a clock face; the hand advances past pages given a second chance until it reaches the next victim, which is replaced with the new page]
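The clock variant can be sketched as follows. This is a minimal sketch assuming a fixed array of frames and a global hand index; the frame layout and names are illustrative.

```c
#include <stddef.h>

#define NFRAMES 4

struct frame { int page; int referenced; };

static struct frame frames[NFRAMES];
static size_t hand = 0;   /* the clock hand (circular queue pointer) */

/* Advance the hand until a frame with reference bit 0 is found,
 * clearing reference bits (giving second chances) along the way;
 * return the index of the victim frame. */
size_t clock_pick_victim(void)
{
    for (;;) {
        if (frames[hand].referenced == 0) {
            size_t victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].referenced = 0;          /* second chance */
        hand = (hand + 1) % NFRAMES;
    }
}
```

In the worst case (every reference bit set) the hand sweeps the whole circle once, clears every bit, and the algorithm degenerates to plain FIFO.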
LRU approximation algorithms (6)
- Enhanced Second-Chance Algorithm
  - Consider the reference bit and the modify bit as an ordered pair
  - Each page falls into one of the following four classes:
    - (0, 0): neither recently used nor modified
      - Best candidate for replacement
    - (0, 1): not recently used but modified
      - Not quite as good; needs disk I/O before replacement
    - (1, 0): recently used but clean
      - Likely to be used again soon
    - (1, 1): recently used and modified
      - Likely to be used again soon and needs disk I/O
  - Use the clock algorithm, examine the pair, and replace a page in the lowest class found, ideally (0, 0)
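The four classes above order naturally if the pair is packed into a two-bit number. A minimal sketch, assuming a hypothetical scan over an array of (reference, modify) pairs:

```c
/* Class 0 = (0,0) is the best replacement candidate,
 * class 3 = (1,1) the worst. */
int page_class(int referenced, int modified)
{
    return (referenced << 1) | modified;
}

/* Return the index of a page in the lowest class among n pages,
 * given parallel arrays of reference and modify bits. */
int pick_lowest_class(const int ref[], const int mod[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (page_class(ref[i], mod[i]) < page_class(ref[best], mod[best]))
            best = i;
    return best;
}
```

A real implementation scans circularly like the clock algorithm rather than searching the whole set, but the class ordering is the same.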
LRU approximation algorithms (7)
- Counting-Based Algorithms
  - Keep a count of the number of references to each page
  - Least-Frequently-Used (LFU) algorithm
    - Replace the page with the smallest count
  - Most-Frequently-Used (MFU) algorithm
    - Replace the page with the largest count
  - Neither is practical, and they do not approximate OPT well
- Page-Buffering Algorithm
  - Keep a pool of free frames; the desired page is first read into a free frame before the victim is written out
Allocation of Frames (1)
- Minimum Number of Frames
  - As the number of frames allocated to each process decreases, the page-fault rate increases and performance degrades
  - Therefore, we must allocate a sufficient number of frames to ensure good performance
  - The minimum number of frames is defined by the computer architecture
- Example: on a system in which memory-reference instructions have one memory address, we require at least two frames: one for the instruction and another for the memory reference
Allocation of Frames (2)
- Another example
  - If a system allows one level of indirect addressing, we need at least three frames
  - If a system allows multiple levels of indirection, in the worst case every page may need a frame
  - A limit on the indirection level is therefore needed
- A simple strategy is equal allocation
  - Split m frames equally among n processes
  - Example: splitting 93 frames among 5 processes gives each process 18 frames (3 frames left over)
Allocation of Frames (3)
- Proportional allocation
  - Allocate memory to each process according to its size
- Example: 62 frames among two processes, one of 10 pages and one of 127 pages
  - The former process gets 10/137 × 62 ≈ 4 frames
  - The latter process gets 127/137 × 62 ≈ 57 frames
- Priority allocation
  - Calculate the proportion according to both size and priority
  - A higher-priority process gets a larger allocation
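The proportional split above is simple integer arithmetic. A minimal sketch, assuming the caller supplies process sizes in pages and the total frame count (function and parameter names are illustrative):

```c
/* Proportional frame allocation: process i with s_i pages out of
 * S total pages receives floor((s_i / S) * m) frames. */
void proportional_alloc(const int sizes[], int n, int m, int out[])
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += sizes[i];
    for (int i = 0; i < n; i++)
        out[i] = (int)((long)sizes[i] * m / total);  /* floor division */
}
```

With sizes {10, 127} and m = 62 this reproduces the slide's example (4 and 57 frames). Note that flooring can leave a few frames unassigned; a real allocator would distribute the remainder.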
Allocation of Frames (4)
- Global vs. Local Allocation
  - Global replacement
    - Allows a process to select a replacement frame from the set of all frames
  - Local replacement
    - Allows a process to select a replacement frame only from its own allocated frames
Thrashing (1)
- If a process does not have the minimum number of frames it needs, it faults again, and again, and again
  - This kind of high paging activity is called thrashing
  - More precisely, a process is thrashing if it spends more time paging than executing
- When a process is thrashing:
  - CPU utilization is low
  - The OS tries to increase the degree of multiprogramming by bringing in a new process
  - The new process is allocated few frames and starts thrashing too
  - Repeating this cycle makes the problem worse
Thrashing (2)
- If the degree of multiprogramming is too high, thrashing sets in
Thrashing (3)
- Solutions to thrashing
  - Use a local replacement algorithm
  - Working-set strategy
  - Watch the page-fault rate
    - If the rate is higher than an upper bound, allocate additional frames to the process
    - If the rate is lower than a lower bound, remove a frame from the process
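The page-fault-rate control loop above can be sketched as a simple feedback rule. The bounds and the one-frame step size here are illustrative assumptions, not values from the slides:

```c
/* Illustrative page-fault-frequency bounds, in faults per reference. */
#define PFF_UPPER 0.10
#define PFF_LOWER 0.01

/* Adjust a process's frame allocation based on its measured
 * page-fault rate: grow when faulting too much, shrink when the
 * rate is comfortably low (never below one frame). */
int adjust_frames(int frames, double fault_rate)
{
    if (fault_rate > PFF_UPPER)
        return frames + 1;
    if (fault_rate < PFF_LOWER && frames > 1)
        return frames - 1;
    return frames;
}
```

If a process needs more frames but none are free, the OS must instead suspend some process and redistribute its frames.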
Thrashing (4)
- Working-Set strategy
  - To prevent thrashing, we simply provide a process as many frames as it needs
  - The problem is: how do we know how many frames it needs?
- In the working-set strategy, we observe how many frames a process is actually using
  - We define a parameter d, the working-set window: a window over which we check which pages the process is actually using
  - The set of pages referenced in the last d page references is the working set
Thrashing (5)
- Locality of reference
  - The idea behind the working-set model
  - As a process executes, it moves from locality to locality
  - A locality is a set of pages that are actively used together
Thrashing (6)
- Working-set example
  - The size of the WS at time t1 is 5
  - The size of the WS at time t2 is 2
- Given d, we can compute the working-set size for each process (WSS_i) and the sum of the WSS_i:
  D = Σ_i WSS_i
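Computing a working-set size from a reference string is a small exercise in counting distinct values. A minimal sketch, assuming the reference string is an array of page numbers and t indexes the reference just after the window (names are illustrative):

```c
/* Working-set size at time t: the number of distinct pages among
 * the last d references refs[t-d .. t-1] (clamped at the start of
 * the string). Quadratic scan; fine for an illustration. */
int wss(const int refs[], int t, int d)
{
    int start = (t - d < 0) ? 0 : t - d;
    int count = 0;
    for (int i = start; i < t; i++) {
        int seen = 0;
        for (int j = start; j < i; j++)
            if (refs[j] == refs[i]) { seen = 1; break; }
        if (!seen)
            count++;
    }
    return count;
}
```

Summing wss over all processes gives D, the total frame demand to compare against m.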
Thrashing (7)
- D is the total demand for frames
  - If D is greater than the total number of available frames m (D > m), thrashing will occur
- Working-Set strategy
  - To prevent thrashing, the OS continuously watches D; whenever D > m, the OS must suspend a process to make D smaller than m
    - The frames allocated to that process become free
    - The suspended process is resumed later
  - In practice, the cost of keeping exact track of the WS is high
Thrashing (8)
- Approximating the WS
  - Use the reference bit and a fixed-interval timer
  - Keep the last n reference bits for each page
  - On each timer interrupt, save the reference bit and clear it
  - Check whether at least one of the n saved bits is 1; if yes, the page is included in the WS
  - Example: d = 10,000 references, timer interrupt every 5,000 references
    - 2 bits are saved for each page
    - This may not be accurate enough
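The two-saved-bit example above can be sketched directly. This is an illustration only; the bit-field layout and names are assumptions, not a real OS structure:

```c
/* Illustrative per-page state for the interval-timer WS
 * approximation with n = 2 saved bits. */
struct ws_page {
    unsigned referenced : 1;  /* set by hardware on access */
    unsigned saved : 2;       /* last two saved reference bits */
};

/* On each timer interrupt: shift the saved history and copy the
 * current reference bit in, then clear the reference bit. */
void ws_timer_tick(struct ws_page *p)
{
    p->saved = (unsigned)(((p->saved << 1) | p->referenced) & 0x3);
    p->referenced = 0;
}

/* A page is treated as in the working set if it was referenced in
 * the current interval or in either of the two saved intervals. */
int in_working_set(const struct ws_page *p)
{
    return p->referenced || p->saved;
}
```

After two quiet intervals beyond the saved history, a page silently drops out of the approximated working set, which is exactly the coarseness the slide warns about.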
Page Size Considerations
- Page size (PS) is generally determined by the hardware architecture
- There are many factors in choosing PS
  - PS affects the size of the page table
    - The smaller the PS, the larger the page table
  - PS affects the degree of fragmentation
    - The smaller the PS, the smaller the internal fragmentation
  - PS affects the performance of I/O for swapping
    - The larger the PS, the better the I/O performance
  - PS affects the degree of locality
    - The smaller the PS, the better the locality, hence better resolution
      - Isolates only the memory that is actually needed
- Typical PS is 4 KB to 8 KB
Memory Interlock for I/O
- Pages being used to transfer data through I/O devices must be prevented from being swapped out
  - We need a mechanism to lock a specific region of pages: memory interlock
  - A lock bit is used for this purpose
- If a page's lock bit is 1, the OS does not swap the page out
- I/O buffers are locked
- Some or all of the OS kernel is locked
- A page that has just been brought into memory but not yet used is locked until it has been used at least once
Program Structure (1)
- Demand paging is designed to be transparent to the user program
  - In some cases, performance can be improved if the user (or compiler) is aware of demand paging
- Example problem: initializing an array of data
  - Assumption: PS is 128 words
- One possible code (Case A) is:

    int i, j;
    int data[128][128];
    for (j = 0; j < 128; j++)
        for (i = 0; i < 128; i++)
            data[i][j] = 0;
Program Structure (2)
- Another possible code (Case B) is:

    int i, j;
    int data[128][128];
    for (i = 0; i < 128; i++)
        for (j = 0; j < 128; j++)
            data[i][j] = 0;

- In C, arrays are stored in row-major order: data[0][0], data[0][1], ...
- In this case, each row occupies one page
- If the OS allocates fewer than 128 frames:
  - Case A results in roughly 128 × 128 ≈ 16,000 page faults
  - Case B results in only 128 page faults
- The performance of Case A is much worse than that of Case B
About the Midterm Exam
- 2/6: midterm exam (9:00-10:30)
- Room: M4
- Coverage: everything through today's lecture (Lectures 1 through 9)
- Weight: 33 points
- Textbook, notes, dictionaries, etc. may be brought in
- The exercise session afterward is cancelled