# The Working Set Model for Program Behavior


patterns. He has shown that too much "historical" data can have adverse effects (witness ATLAS). In Section 3 we begin investigation of the working set concept. Even though the ideas are not entirely new [9, 14, 15], there has been no detailed documentation publicly available.

3. Working Set Model

From the programmer's standpoint, the working set of information is the smallest collection of information that must be present in main memory to assure efficient execution of his program. We have already stressed that there will be no advance notice from either the programmer or the compiler regarding what information "ought" to be in main memory. It is up to the operating system to determine, on the basis of page reference patterns, whether pages are in use. Therefore the working set of information associated with a process is, from the system standpoint, the set of most recently referenced pages.

We define the working set of information W(t, τ) of a process at time t to be the collection of information referenced by the process during the process time interval (t − τ, t). Thus, the information a process has referenced during the last τ seconds of its execution constitutes its working set (Figure 2). τ will be called the working set parameter. We regard the elements of W(t, τ) as being pages, though they could just as well be any other named units of information. The working set size ω(t, τ) is

$$\omega(t, \tau) = \text{number of pages in } W(t, \tau). \quad (1)$$

[Figure 2. Definition of W(t, τ): the pages referenced in the process-time interval (t − τ, t) constitute W(t, τ).]

Let the random variable x denote the process-time interval between successive references to the same page; let F_x(α) = Pr[x ≤ α] be its distribution function; let f_x(α) = dF_x(α)/dα be its density function; and let x̄ denote its mean:

$$\bar{x} = \int_0^\infty \alpha\, f_x(\alpha)\, d\alpha. \quad (2)$$

These interreference intervals x are useful for expressing working set properties.

A working set W(t, τ) has four important, general properties. All are properties of typical programs and need not hold in special cases. During the following discussion of these properties, assume that W(t, τ) is continuously in main memory, that its process is never interrupted except for page faults, that a page is removed from main memory the moment it leaves W(t, τ), and that no two working sets overlap (there is no sharing of information).

P1. Size. It should be clear immediately that ω(t, 0) = 0, since no page reference can occur in zero time. It should be equally clear that, as a function of τ, ω(t, τ) is monotonically increasing, since more pages can be referenced in longer intervals. ω(t, τ) is concave downward. To see this, note that

$$W(t, 2\tau) = W(t, \tau) \cup W(t - \tau, \tau), \quad (3)$$

which implies that

$$\omega(t, 2\tau) \le \omega(t, \tau) + \omega(t - \tau, \tau). \quad (4)$$

Assuming statistical regularity, ω(t, τ) behaves on the average like ω(t − τ, τ), so that on the average

$$\omega(t, 2\tau) \le 2\,\omega(t, \tau). \quad (5)$$

The general character of ω(t, τ) is suggested by the smoothed curve of Figure 3.

[Figure 3. Behavior of ω(t, τ).]

P2. Prediction. We expect intuitively that the immediate past page reference behavior of a program constitutes a good prediction of its immediate future page reference behavior: for small time separations α, the set W(t, τ) is a good predictor for the set W(t + α, τ). To see this more clearly, suppose α < τ. Then

$$W(t + \alpha, \tau) = W(t + \alpha, \alpha) \cup W(t, \tau - \alpha). \quad (6)$$

Because references to the same page tend to cluster in time, the probability Pr[W(t + α, α) ∩ W(t, τ) = ∅] tends to be small. Therefore some pages of W(t, τ) will still be in use after time t (i.e. pages in W(t + α, α)); since also

$$W(t, \tau - \alpha) \subseteq W(t, \tau) \cap W(t + \alpha, \tau), \quad (7)$$

W(t, τ) is a good predictor for W(t + α, τ). On the other hand, for large time separations α (say, α ≫ τ) control will have passed through a great many program modules during the interval (t, t + α), and W(t, τ) is not a good predictor for W(t + α, τ).

326 Communications of the ACM, Volume 11 / Number 5 / May, 1968

P3. Reentry Rate. As τ is reduced, ω(t, τ) decreases, so the probability that useful pages are not in W(t, τ) increases; correspondingly the rate at which pages are recalled to W(t, τ) increases. We define two functions: a process-time reentry rate λ(τ), defined so that the mean process time between the instants at which a given page reenters W(t, τ) is 1/λ(τ), and a real-time reentry rate φ(τ), defined so that the mean real time between the instants at which a given page reenters W(t, τ) is 1/φ(τ).

Let {t_n}, n ≥ 0, be a sequence of instants in process time at which successive references to a given page occur. The nth interreference interval is x_n = t_n − t_{n−1}; but we are assuming the interreference intervals {x_n}, n ≥ 1, are independent, identically distributed random variables, so that for all n ≥ 1, f_{x_n}(α) = f_x(α). A reentry point is a reference instant which finds the page not in W(t, τ): at such an instant the page reenters W(t, τ). The reference instant t_n is a reentry point if x_n > τ, independently of other reference instants. Suppose t_0 is a reentry point; we are interested in π_n, the probability that t_n is the first reentry point after t_0. The probabilities {π_n}, n ≥ 1, are distributed geometrically:

$$\pi_n = \Pr[t_n \text{ first reentry after } t_0] = \zeta^{\,n-1}(1 - \zeta), \quad (8)$$

where ζ = Pr[x ≤ τ] = F_x(τ). That is, t_n is the first reentry after t_0 if none of the instants {t_1, …, t_{n−1}} is a reentry instant, and t_n is a reentry. The expected number n̄ of reference instants until the first reentry is

$$\bar{n} = \sum_{n=1}^{\infty} n\,\pi_n = \frac{1}{1 - \zeta}. \quad (9)$$

Each reference interval is of expected length x̄ [eq. (2)], so the mean time m(τ) between reentries is m(τ) = n̄ x̄.
Therefore

$$m(\tau) = \frac{\bar{x}}{1 - F_x(\tau)}.$$

We define the reentry rate λ(τ) to be

$$\lambda(\tau) = \frac{1}{m(\tau)} = \frac{1 - F_x(\tau)}{\bar{x}}, \quad (10)$$

where λ(τ) is the average process-time rate at which one page is reentering W(t, τ).

Assuming that storage management mechanisms retain in main memory only the pages of W(t, τ), every page reentering W(t, τ) must be recalled from auxiliary memory and contributes to page traffic; we here estimate this contribution. In an interval Δ of process time, the expected number of times a single page reenters W(t, τ) is Δλ(τ); each reentry causes the process to enter a "page-wait" state for one traverse time T, a total of Δλ(τ)T seconds spent in page wait. Therefore the total real time spent to recall a page Δλ(τ) times is (Δ + Δλ(τ)T). The return traffic rate φ(τ) is

$$\varphi(\tau) = \frac{\Delta\lambda(\tau)}{\Delta + \Delta\lambda(\tau)T},$$

that is,

$$\varphi(\tau) = \frac{\lambda(\tau)}{1 + \lambda(\tau)T}, \quad (11)$$

where φ(τ) estimates the average real-time rate at which one page is reentering W(t, τ). That is, the mean real time between reentries is 1/φ(τ).

Later in the paper we define "memory balance," a condition in which the collection of working sets residing in main memory at any time just consumes some predetermined portion β of the available M pages of main memory. That is, on the average,

$$\sum_{\text{working sets in main memory}} \omega(t, \tau) = \beta M. \quad (12)$$

In this case, the average number of pages in memory belonging to working sets is βM; we define the total return traffic rate Φ(τ) to be the total reentry rate in real time to main memory, when the working sets contained therein are not interrupted except for page waits:

$$\Phi(\tau) = \beta M \varphi(\tau) = \frac{\beta M \lambda(\tau)}{1 + \lambda(\tau)T}, \quad (13)$$

where Φ(τ) estimates the average number of pages per unit real time returning to main memory from auxiliary memory. Since "memory balance" is an equilibrium condition, there must also be a flow of pages Φ(τ) from main to auxiliary memory; therefore 2Φ(τ) measures the capacity required of the channel bridging the two memory levels. It must be emphasized that the reentry rate functions λ(τ), φ(τ), and Φ(τ) are estimates.
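The definitions above lend themselves to direct computation. The following is an illustrative sketch, not from the paper: it computes a working set [eq. (1)] from a discrete reference string (taking each reference to cost one unit of process time) and estimates λ(τ) and φ(τ) [eqs. (10)-(11)] from a sample of interreference intervals; the trace and all parameter values are invented.

```python
# Illustrative sketch (not from the paper): a working set over a discrete
# page-reference trace, and empirical estimates of the reentry-rate
# functions of eqs. (10)-(11). Trace and parameters are invented.

def working_set(trace, t, tau):
    """W(t, tau): distinct pages among the last tau references before t."""
    return set(trace[max(0, t - tau):t])

def reentry_rates(xs, tau, T):
    """lambda(tau) and phi(tau) from sampled interreference intervals xs."""
    xbar = sum(xs) / len(xs)                          # mean interval, eq. (2)
    F_tau = sum(1 for x in xs if x <= tau) / len(xs)  # empirical F_x(tau)
    lam = (1 - F_tau) / xbar                          # eq. (10)
    phi = lam / (1 + lam * T)                         # eq. (11), traverse time T
    return lam, phi

trace = [1, 2, 1, 3, 2, 2, 4, 1]
t = len(trace)
assert len(working_set(trace, t, 0)) == 0             # omega(t, 0) = 0
sizes = [len(working_set(trace, t, tau)) for tau in range(t + 1)]
assert all(a <= b for a, b in zip(sizes, sizes[1:]))  # omega nondecreasing in tau

xs = [1, 2, 2, 3, 5, 8, 13, 40]                       # hypothetical intervals
lam1, phi1 = reentry_rates(xs, tau=4, T=10)
lam2, phi2 = reentry_rates(xs, tau=10, T=10)
assert lam2 <= lam1 and phi2 <= phi1                  # enlarging tau lowers traffic
```

The final assertions mirror property P4: since λ(τ) = (1 − F_x(τ))/x̄ and F_x is nondecreasing, enlarging τ can only reduce the estimated return traffic.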
The important point is: starting from the probability density function f_x(α) for the page interreference intervals x, it is possible to estimate the page traffic which results from the use of working sets for memory allocation.

P4. τ-sensitivity. It is useful to define a sensitivity function σ(τ) that measures how sensitive the reentry rate λ(τ) is to changes in τ. We define the τ-sensitivity of a working set W(t, τ) to be

$$\sigma(\tau) = -\frac{d\lambda(\tau)}{d\tau} = \frac{f_x(\tau)}{\bar{x}}. \quad (14)$$

That is, if τ is decreased by dτ, λ(τ) increases by σ(τ) dτ. It is obvious that σ(τ) ≥ 0; reducing τ can never result in a decrease in the reentry rate λ(τ).

CHOICE OF τ. The value ultimately selected for τ will reflect the working set properties and efficiency requirements, and will be influenced by system parameters such as core memory size and memory traverse time. Should τ be too small, pages may be removed from main memory while still useful, resulting in a high traffic of returning pages. The return traffic functions λ(τ), φ(τ), and Φ(τ), and the τ-sensitivity σ(τ), play roles in determining when τ is "too small." Should τ be too large, pages may remain in main memory long after they were used, resulting in wasted main memory. The desired number of working sets simultaneously to occupy a core memory of given size plays a role in determining when τ is "too large." Thus the value selected for τ will have to represent a compromise between too much page traffic and too much wasted memory space.

The following consideration leads us to recommend for τ a value comparable to the memory traverse time T. Define the residency of a page to be the fraction of time it is potentially available in core memory. Assuming that memory allocation procedures balk at removing from main memory any page in a working set, once a page has entered W(t, τ) it will remain in main memory for at least τ seconds. Letting x be the interreference interval to a given page, we have:

(1) If x ≤ τ, the page will reside in main memory 100 percent of the time.

(2) If τ < x ≤ (τ + T), the page will reside in main memory τ/(τ + 2T) of the time: it resides in main memory for an interval of τ seconds, after which it is dispatched to auxiliary memory; while in transit it is rereferenced, so it must begin a return trip as soon as it reaches auxiliary memory, a total of two traverse times for the round trip. Therefore the page reappears in main memory (τ + 2T) seconds after the previous reference.
(3) If x > (τ + T), the page will reside in main memory τ/(x + T) of the time: it resides in main memory for an interval of τ seconds, after which it is dispatched to auxiliary memory; sometime after having reached auxiliary memory it is rereferenced, requiring T seconds for the return trip. Therefore the page reappears in main memory (x + T) seconds after the previous reference.

Figure 4 shows the residency as a function of x.

[Figure 4. Residency as a function of x: 100 percent for x ≤ τ, dropping to τ/(τ + 2T) at x = τ.]

In the interest of efficiency, it is desirable that the drop from 100 percent to τ/(τ + 2T) at x = τ not be too severe; thus, for example, should we wish to limit the drop to 50 percent, we should have to choose τ ≥ 2T.

DETECTING W(t, τ). According to our definition, W(t, τ) is the set of pages a process has referenced within the last τ seconds of its execution. This suggests that memory management can be controlled with hardware mechanisms, by associating with each page of main memory a timer. Each time a page is referenced, its timer is set to τ and begins to run down; if the timer succeeds in running down, a flag is set to mark the page for removal whenever the space is needed. In the Appendix we describe a hardware memory management mechanism that could be housed within the memory boxes. It has two interesting features: (1) it operates asynchronously and independently of the supervisor, whose only responsibility in memory management is handling page faults; quite literally, memory manages itself. (2) Analog devices such as capacitive timers could be used to measure intervals.

Unfortunately it is not practical to add hardware to existing systems. We seek a method of handling memory management within the software. The procedure we propose here samples the page table entries of pages in core memory at process-time intervals of σ seconds (σ is called the "sampling interval"), where σ = τ/K, K an integer constant chosen to make the sampling intervals as "fine grain" as desired.
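The three residency cases above can be written out as a small function; this is a sketch under the stated assumptions, with illustrative values of τ and T chosen so that τ = 2T:

```python
# Sketch of residency cases (1)-(3) above: the fraction of time a page is
# potentially available in main memory, given its interreference interval x,
# working set parameter tau, and traverse time T.

def residency(x, tau, T):
    if x <= tau:
        return 1.0                  # case (1): the page never leaves main memory
    if x <= tau + T:
        return tau / (tau + 2 * T)  # case (2): rereferenced in transit; 2T round trip
    return tau / (x + T)            # case (3): returns T seconds after rereference

tau, T = 20, 10                     # illustrative values with tau = 2T
assert residency(5, tau, T) == 1.0
assert residency(25, tau, T) == 0.5          # the drop at x = tau is 50 percent
assert residency(100, tau, T) == tau / 110
```

With τ = 2T, the drop at x = τ is exactly to τ/(τ + 2T) = 1/2, which is the boundary case of the 50 percent recommendation.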
On the basis of page references during each of the last K sampling intervals, the working set W(t, Kσ) can be determined. As indicated by Figure 5, each page table entry contains an "in-core" bit M, where M = 1 if and only if the page is present in main memory. It also contains a string of use bits u₀, u₁, …, u_K. Each time a page reference occurs, 1 → u₀. At the end of each sampling interval σ, the use bits are shifted one position (u_i → u_{i+1}), u_K is discarded, and u₀ is cleared.

[Figure 5. Page table entries for detecting W(t, Kσ): a typical entry holds the in-core bit M, the use bits u₀ … u_K (shifted at the end of each sampling interval), and a pointer to the page.]
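A minimal simulation of this use-bit scheme follows; the class and method names are ours, introduced only for illustration, and are not from the paper.

```python
# Sketch of the software detection scheme: each page table entry carries
# use bits u0..uK; a reference sets u0, and the bits shift one position
# at the end of every sampling interval sigma. A page belongs to
# W(t, K*sigma) iff some use bit is set.

K = 3

class PageEntry:
    def __init__(self):
        self.use = [0] * (K + 1)        # u0, u1, ..., uK

    def reference(self):
        self.use[0] = 1                 # 1 -> u0 on each reference

    def end_of_interval(self):
        self.use = [0] + self.use[:-1]  # shift; uK is discarded, u0 cleared

    def in_working_set(self):
        return any(self.use)

p = PageEntry()
p.reference()
assert p.in_working_set()
for _ in range(K + 1):                  # unreferenced for K+1 sampling intervals
    p.end_of_interval()
assert not p.in_working_set()           # page has left W(t, K*sigma); removable
```

After K + 1 shifts without a reference, every use bit is zero, which is exactly the condition under which the checker may mark the page for removal.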

has received service since the last time it (the checker) was run, removing pages according to the algorithm discussed at eqs. (15) and (16). It should be clear that if the length of the running list is l and there are N processors, sampling of page tables occurs about every lσ/N seconds, not every σ seconds.

Associated with process i is a counter w_i giving the current size of its working set. Each time a page fault occurs, a new page enters W_i(t, τ), and so w_i must be increased by one. Each time a page is removed from W_i(t, τ) by the checker, w_i must be decreased by one. Having completed its management duties, the checker replenishes vacancies in the running list by selecting jobs from the ready list according to the prevailing priority rule. This is discussed in more detail below.

5. Sharing

Sharing finds its place naturally. When pages are shared, working sets will overlap. If Arden's [7] suggestion concerning program structure⁴ is followed, sharing of data can be accomplished without modification of the regime of Figure 7. If a page is in at least one working set, the "use bits" in its page table entry will be ON and the page will not be removed. To prevent anomalies, the checker must not be permitted to examine the same page table more than once during one of its samples. Allocation policies should tend to run two processes together in time whenever they are sharing information (symptomized by overlap of their working sets), in order to avoid unnecessary relocating of the same information. How processes should be charged for memory usage when their working sets overlap is still an open question, and is under investigation.

[Figure 7. Implementation of scheduling: a process runs on a processor for a burst until a page fault, quantum runout, or block; quanta q_i and time-used quantities t_i are maintained per process.]

6. Resource Allocation: A Balancing Problem

We have already pointed out that a computation places demands jointly on the processor and memory resources of a computer system.
A computation's processor demand manifests itself as a process; its memory demand manifests itself as a working set. In this section we show how notions of "demand" can be made precise and how resource allocation can be formulated as a problem of balancing processor and memory demands against available equipment.

DEMAND. Our purpose here is to define "memory demand" and "processor demand," then combine these into the single notion "system demand." We define the memory demand m_i of computation i to be

$$m_i = \min\left(\frac{w_i}{M},\, 1\right), \qquad 0 \le m_i \le 1, \quad (17)$$

where M is the number of pages of main memory, and w_i = ω_i(t, τ) is the working set count, such as maintained by the scheduler of Figure 7. If a working set contains more than M pages (it is bigger than main memory), we regard its demand to be m_i = 1. Presumably M is large enough so that the probability (over the ensemble of all processes) Pr[m = 1] is very small.

"Processor demand" is more difficult to define. Just as memory demand is in some sense a prediction of memory requirements in the immediate future, so processor demand should be a prediction of processor requirements for the near future. There are many ways in which processor demand could be defined. The method we have chosen, described below, defines a computation's processor demand to be the fraction of a standard interval a process is expected to use before it blocks.

Let q be the random variable of processor time used by a process between interactions. (A process "interacts" when it communicates with something outside its name space, e.g. with a user, or with another process.) In general character, f_q(z), the probability density function for q, is hyperexponential (for a complete discussion, see Fife [16]):

$$f_q(z) = c\,a e^{-az} + (1 - c)\,b e^{-bz}, \qquad 0 < a < b, \quad 0 < c < 1, \quad (18)$$

where f_q(z) is diagrammed in Figure 8; most of the probability is concentrated toward small q (i.e. frequently interacting processes), but f_q(z) has a long exponential tail.

[Figure 8. Probability density function for q.]

Given that it has been γ seconds (process time) since the last interaction, the conditional density function for the time z beyond γ until the next interaction is

$$f_q(z \mid \gamma) = \frac{f_q(z + \gamma)}{\int_\gamma^\infty f_q(y)\, dy} = \frac{c\,a e^{-a(z+\gamma)} + (1 - c)\,b e^{-b(z+\gamma)}}{c\,e^{-a\gamma} + (1 - c)\,e^{-b\gamma}}, \qquad z \ge 0, \quad (19)$$

which is just that portion of f_q(z) for q ≥ γ with its area normalized to unity. The conditional expectation of q, given γ, is

$$Q(\gamma) = \int_0^\infty z\, f_q(z \mid \gamma)\, dz = \frac{(c/a)\,e^{-a\gamma} + [(1 - c)/b]\,e^{-b\gamma}}{c\,e^{-a\gamma} + (1 - c)\,e^{-b\gamma}}. \quad (20)$$

The conditional expectation function Q(γ) is shown in Figure 9. It starts at Q(0) = c/a + (1 − c)/b and rises toward a constant maximum of Q(∞) = 1/a. Note that, for large enough γ, the conditional expectation becomes independent of γ. The conditional expectation is a useful prediction function: if γ seconds of processor time have been consumed by a process since its last interaction, we may expect Q(γ) seconds of process time to elapse before its next interaction. It should be clear that the conditional expectation function Q(γ) can be determined and updated automatically by the operating system.

⁴ If a segment is shared, there will be an entry for it in the segment tables of each participating process; however, each entry points to the same page table. Each physical segment has exactly one page table describing it, but a name for the segment may appear in many segment tables.
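The behavior of Q(γ) described above can be checked numerically; the following sketch uses invented parameter values (a, b, c are not from the paper) and verifies the endpoints Q(0) = c/a + (1 − c)/b and Q(∞) = 1/a.

```python
# Sketch of the conditional expectation Q(gamma) of eq. (20) for the
# hyperexponential density of eq. (18). Parameters a, b, c are invented
# for illustration, subject to 0 < a < b and 0 < c < 1.
import math

a, b, c = 0.5, 4.0, 0.6

def Q(gamma):
    num = (c / a) * math.exp(-a * gamma) + ((1 - c) / b) * math.exp(-b * gamma)
    den = c * math.exp(-a * gamma) + (1 - c) * math.exp(-b * gamma)
    return num / den

assert abs(Q(0) - (c / a + (1 - c) / b)) < 1e-12   # Q(0) = c/a + (1-c)/b
assert Q(0) < Q(2) < Q(5) <= 1 / a                 # rises toward Q(inf) = 1/a
assert abs(Q(50) - 1 / a) < 1e-6                   # flat for large gamma
```

The flattening for large γ reflects the long exponential tail: once a process has run well past the fast mode of f_q, only the slow mode (rate a) remains, so the prediction stops depending on γ.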
In order to make a definition of processor demand "reasonable," it is useful to establish symmetry between space and time. Just as we are unwilling to allocate more than M pages of memory, so we may be unwilling to allocate processor time for more than a standard interval Δ into the future. Δ can be chosen to reflect the maximum tolerable response time to a user: for if processor time is allocated to some set of processes such that their expected times till interaction total Δ, no process in that set expects to wait more than Δ time units before its own interaction. We define the processor demand p_i of computation i to be

$$p_i = \frac{Q(t_i)}{N\Delta}, \qquad \frac{Q(0)}{N\Delta} \le p_i \le \frac{Q(\infty)}{N\Delta}, \quad (21)$$

where N is the number of processors and t_i is the time-used quantity for process i, such as maintained by the scheduler of Figure 7.

The (system) demand D_i of computation i is a pair

$$D_i = (p_i, m_i), \quad (22)$$

where p_i is its processor demand [eq. (21)] and m_i its memory demand [eq. (17)]. That the processor demand is p_i tells us to expect computation i to use p_i of the processors for the next Δ units of execution time, before its next interaction.⁵ That the memory demand is m_i tells us to expect computation i to use (m_i M) pages of memory during the immediate future.

BALANCE. Let constants α and β be given. The computer system is said to be balanced if simultaneously

$$\sum_{\text{processes in running list}} p = \alpha, \qquad 0 < \alpha \le 1; \quad (23)$$

$$\sum_{\text{processes in running list}} m = \beta, \qquad 0 < \beta \le 1, \quad (24)$$

where p is a processor demand, m a memory demand, and α, β are constants chosen to cause any desired fraction of resource to constitute balance. If the system is balanced, the total demand presented by running processes just consumes the available fractions of processor and memory resources. Equation (23) defines "processor balance" and eq. (24) defines "memory balance." We can write eqs. (23) and (24) in the more compact form

$$S = \sum_{\text{processes in running list}} D = \sum_{\text{processes in running list}} (p, m), \quad (25)$$

so that balance exists whenever eq. (25) holds; that is, whenever S = (α, β).

[Figure 9. Conditional expectation function for q: Q(γ) rises from Q(0) toward Q(∞).]

⁵ A reasonable choice for the quantum q_i (Figure 7) granted to computation i might be q_i = kQ(t_i), for some suitable constant k ≥ 1.
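The demand and balance definitions above can be sketched directly; all numbers here (M, N, Δ, and the running list) are invented for illustration.

```python
# Sketch: per-computation demands D_i = (p_i, m_i) and the system demand
# S of eq. (25). M, N, Delta and the running list below are invented.

M, N, Delta = 1000, 2, 4.0          # memory pages, processors, standard interval

def memory_demand(w):               # eq. (17): m_i = min(w_i / M, 1)
    return min(w / M, 1.0)

def processor_demand(Q_t):          # eq. (21): p_i = Q(t_i) / (N * Delta)
    return Q_t / (N * Delta)

running = [(300, 2.0), (450, 3.5), (200, 1.3)]   # (w_i, Q(t_i)) per process

S = (sum(processor_demand(q) for _, q in running),
     sum(memory_demand(w) for w, _ in running))

# The system is balanced when S = (alpha, beta) for the chosen fractions.
assert abs(S[0] - 0.85) < 1e-9      # (2.0 + 3.5 + 1.3) / (2 * 4.0)
assert abs(S[1] - 0.95) < 1e-9      # (300 + 450 + 200) / 1000
assert memory_demand(5000) == 1.0   # a working set bigger than memory: m = 1
```

With α = 0.85 and β = 0.95, this running list is balanced; adding or removing a process would move S away from (α, β), which is the balancing problem the scheduler must solve.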


### Announcement. HW#3 will be posted today. Exercise # 4 will be posted on Piazza today.

Announcement HW#3 will be posted today. Exercise # 4 will be posted on Piazza today. Operating Systems: Internals and Design Principles Chapter 7 Memory Management Seventh Edition William Stallings Modified

### LOGICAL AND PHYSICAL ORGANIZATION MEMORY MANAGEMENT TECHNIQUES (CONT D)

MEMORY MANAGEMENT Requirements: Relocation (to different memory areas) Protection (run time, usually implemented together with relocation) Sharing (and also protection) Logical organization Physical organization

### END TERM Examination, May 2014 (Model Test Paper) B.Tech (IT) IV Semester

END TERM Examination, May 2014 (Model Test Paper) B.Tech (IT) IV Semester Paper Code: ETCS - 212 Subject: Operating Systems Time: 3 hrs Maximum Marks: 75 Note: Attempt five questions including Q. No. 1

### The Relation between Two Present Value Formulae

James Ciecka, Gary Skoog, and Gerald Martin. 009. The Relation between Two Present Value Formulae. Journal of Legal Economics 15(): pp. 61-74. The Relation between Two Present Value Formulae James E. Ciecka,

### Process and Thread Scheduling. Raju Pandey Department of Computer Sciences University of California, Davis Winter 2005

Process and Thread Scheduling Raju Pandey Department of Computer Sciences University of California, Davis Winter 2005 Scheduling Context switching an interrupt occurs (device completion, timer interrupt)

### Module 6. Embedded System Software. Version 2 EE IIT, Kharagpur 1

Module 6 Embedded System Software Version 2 EE IIT, Kharagpur 1 Lesson 30 Real-Time Task Scheduling Part 2 Version 2 EE IIT, Kharagpur 2 Specific Instructional Objectives At the end of this lesson, the

### Module 4: Memory Management

Module 4: Memory Management The von Neumann principle for the design and operation of computers requires that a program has to be primary memory resident to execute. Also, a user requires to revisit his

### Chapter 6: CPU Scheduling

Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Algorithm Evaluation Oct-03 1 Basic Concepts Maximum CPU utilization

### CHAPTER 6 Memory. CMPS375 Class Notes (Chap06) Page 1 / 17 by Kuo-pao Yang

CHAPTER 6 Memory 6.1 Memory 313 6.2 Types of Memory 313 6.3 The Memory Hierarchy 315 6.3.1 Locality of Reference 318 6.4 Cache Memory 319 6.4.1 Cache Mapping Schemes 321 6.4.2 Replacement Policies 333

### Chapter 2 Memory Management: Early Systems. Understanding Operating Systems, Fourth Edition

Chapter 2 Memory Management: Early Systems Understanding Operating Systems, Fourth Edition Memory Management: Early Systems Memory is second only to processes in importance (and in intensity) with which

### END TERM Examination(Model Test Paper) Fourth Semester[B.Tech]

END TERM Examination(Model Test Paper) Fourth Semester[B.Tech] Paper Code: ETCS - 212 Subject: Operating System Time: 3 hrs Maximum Marks: 75 Note: Q.No.1 is compulsory. Attempt any four questions of remaining

### APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM

152 APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM A1.1 INTRODUCTION PPATPAN is implemented in a test bed with five Linux system arranged in a multihop topology. The system is implemented

### Chapter 9: Virtual-Memory Management

Chapter 9: Virtual-Memory Management Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocation Kernel Memory Other

### Chapter 5: CPU Scheduling!

Chapter 5: CPU Scheduling Operating System Concepts 8 th Edition, Silberschatz, Galvin and Gagne 2009 Chapter 5: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling

### ECEN 5682 Theory and Practice of Error Control Codes

ECEN 5682 Theory and Practice of Error Control Codes Convolutional Codes University of Colorado Spring 2007 Linear (n, k) block codes take k data symbols at a time and encode them into n code symbols.

### DATA ITEM DESCRIPTION

DATA ITEM DESCRIPTION Form Approved OMB NO.0704-0188 Public reporting burden for collection of this information is estimated to average 110 hours per response, including the time for reviewing instructions,

### ICS 143 - Principles of Operating Systems

ICS 143 - Principles of Operating Systems Lecture 5 - CPU Scheduling Prof. Nalini Venkatasubramanian nalini@ics.uci.edu Note that some slides are adapted from course text slides 2008 Silberschatz. Some

### Topic 14: Virtual Memory, Demand Paging, and Working Sets

CS 414 : Operating Systems UNIVERSITY OF VIRGINIA Department of Computer Science Fall 2005 Topic 14: Virtual Memory, Demand Paging, and Working Sets Readings for this topic: Ch.9 Separation of the programmer

CHAPTER 5 CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. In this chapter, we introduce

### Resource Allocation Schemes for Gang Scheduling

Resource Allocation Schemes for Gang Scheduling B. B. Zhou School of Computing and Mathematics Deakin University Geelong, VIC 327, Australia D. Walsh R. P. Brent Department of Computer Science Australian

### Chapter 9: Virtual Memory

Chapter 9: Virtual Memory Chapter 9: Virtual Memory Background Demand Paging Process Creation Page Replacement Allocation of Frames Thrashing Demand Segmentation Operating System Examples 9.2 Background

### Reconfigurable Architecture Requirements for Co-Designed Virtual Machines

Reconfigurable Architecture Requirements for Co-Designed Virtual Machines Kenneth B. Kent University of New Brunswick Faculty of Computer Science Fredericton, New Brunswick, Canada ken@unb.ca Micaela Serra

### CPU Scheduling. Date. 2/2/2004 Operating Systems 1

CPU Scheduling Date 2/2/2004 Operating Systems 1 Basic concepts Maximize CPU utilization with multi programming. CPU I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait.

### Scheduling for QoS Management

Scheduling for QoS Management Domenico Massimo Parrucci Condello isti information science Facoltà and di Scienze technology e Tecnologie institute 1/number 1 Outline What is Queue Management and Scheduling?

### Dynamic Load Balance Algorithm (DLBA) for IEEE 802.11 Wireless LAN

Tamkang Journal of Science and Engineering, vol. 2, No. 1 pp. 45-52 (1999) 45 Dynamic Load Balance Algorithm () for IEEE 802.11 Wireless LAN Shiann-Tsong Sheu and Chih-Chiang Wu Department of Electrical

### 2. is the number of processes that are completed per time unit. A) CPU utilization B) Response time C) Turnaround time D) Throughput

Import Settings: Base Settings: Brownstone Default Highest Answer Letter: D Multiple Keywords in Same Paragraph: No Chapter: Chapter 5 Multiple Choice 1. Which of the following is true of cooperative scheduling?

### CSE 380 Computer Operating Systems. Instructor: Insup Lee. University of Pennsylvania, Fall 2002 Lecture Note: Memory Management.

CSE 380 Computer Operating Systems Instructor: Insup Lee University of Pennsylvania, Fall 2002 Lecture Note: Memory Management 1 Memory Management q The memory management portion of the Operating System

### PIPELINING CHAPTER OBJECTIVES

CHAPTER 8 PIPELINING CHAPTER OBJECTIVES In this chapter you will learn about: Pipelining as a means for executing machine instructions concurrently Various hazards that cause performance degradation in

### Computer Network. Interconnected collection of autonomous computers that are able to exchange information

Introduction Computer Network. Interconnected collection of autonomous computers that are able to exchange information No master/slave relationship between the computers in the network Data Communications.

### Supplement to Call Centers with Delay Information: Models and Insights

Supplement to Call Centers with Delay Information: Models and Insights Oualid Jouini 1 Zeynep Akşin 2 Yves Dallery 1 1 Laboratoire Genie Industriel, Ecole Centrale Paris, Grande Voie des Vignes, 92290

### Announcement. Midterm Exam is on Monday June 8th. Office hours on Friday 2-3 pm. Exam Material will be posted on Piazza.

Announcement Midterm Exam is on Monday June 8th. Office hours on Friday 2-3 pm. Exam Material will be posted on Piazza. Operating Systems: Internals and Design Principles Chapter 9 Uniprocessor Scheduling

### HARD REAL-TIME SCHEDULING: THE DEADLINE-MONOTONIC APPROACH 1. Department of Computer Science, University of York, York, YO1 5DD, England.

HARD REAL-TIME SCHEDULING: THE DEADLINE-MONOTONIC APPROACH 1 N C Audsley A Burns M F Richardson A J Wellings Department of Computer Science, University of York, York, YO1 5DD, England ABSTRACT The scheduling

### 8 Memory Management. 8.1 Requirements. 8 Memory Management Protection Relocation INTRODUCTION TO MEMORY MANAGEMENT OPERATING SYSTEMS

OPERATING SYSTEMS INTRODUCTION TO MEMORY MANAGEMENT 1 8 Memory Management In a multiprogramming system, in order to share the processor, a number of processes must be kept in. Memory management is achieved

### A Programme Implementation of Several Inventory Control Algorithms

BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume, No Sofia 20 A Programme Implementation of Several Inventory Control Algorithms Vladimir Monov, Tasho Tashev Institute of Information

### Secondary Storage. Any modern computer system will incorporate (at least) two levels of storage: magnetic disk/optical devices/tape systems

1 Any modern computer system will incorporate (at least) two levels of storage: primary storage: typical capacity cost per MB \$3. typical access time burst transfer rate?? secondary storage: typical capacity

### Quality of Service versus Fairness. Inelastic Applications. QoS Analogy: Surface Mail. How to Provide QoS?

18-345: Introduction to Telecommunication Networks Lectures 20: Quality of Service Peter Steenkiste Spring 2015 www.cs.cmu.edu/~prs/nets-ece Overview What is QoS? Queuing discipline and scheduling Traffic

### THE STATISTICAL TREATMENT OF EXPERIMENTAL DATA 1

THE STATISTICAL TREATMET OF EXPERIMETAL DATA Introduction The subject of statistical data analysis is regarded as crucial by most scientists, since error-free measurement is impossible in virtually all

### Memory Management. memory hierarchy

Memory Management Ideally programmers want memory that is large fast non volatile Memory hierarchy small amount of fast, expensive memory cache some medium-speed, medium price main memory gigabytes of

### Module 5. Broadcast Communication Networks. Version 2 CSE IIT, Kharagpur

Module 5 Broadcast Communication Networks Lesson 9 Cellular Telephone Networks Specific Instructional Objectives At the end of this lesson, the student will be able to: Explain the operation of Cellular

### Solution printed. Do not start the test until instructed to do so! CS 3204 Operating Systems Midterm Fall Instructions:

VIRG INIA POLYTECHNIC INSTITUTE AND STATE U T PROSI M UNI VERSI TY Instructions: Print your name in the space provided below. This examination is closed book and closed notes, aside from the permitted

### Inwall 4 Input / 4 Output Module

Inwall 4 Input / 4 Output Module IO44C02KNX Product Handbook Product: Inwall 4 Input / 4 Output Module Order Code: IO44C02KNX 1/27 INDEX 1. General Introduction... 3 2. Technical data... 3 2.1 Wiring Diagram...

### (Refer Slide Time 6:09)

Digital Circuits and Systems Prof. S. Srinivasan Department of Electrical Engineering Indian Institute of Technology, Madras Lecture 3 Combinational Logic Basics We defined digital signal as a signal which

### MATHEMATICS OF FINANCE AND INVESTMENT

MATHEMATICS OF FINANCE AND INVESTMENT G. I. FALIN Department of Probability Theory Faculty of Mechanics & Mathematics Moscow State Lomonosov University Moscow 119992 g.falin@mail.ru 2 G.I.Falin. Mathematics

### Attaining EDF Task Scheduling with O(1) Time Complexity

Attaining EDF Task Scheduling with O(1) Time Complexity Verber Domen University of Maribor, Faculty of Electrical Engineering and Computer Sciences, Maribor, Slovenia (e-mail: domen.verber@uni-mb.si) Abstract:

### Silberschatz and Galvin Chapter 9

Silberschatz and Galvin Chapter 9 Virtual Memory CPSC Richard Furuta /6/99 Virtual memory Virtual memory permits the execution of processes that are not entirely in physical memory Programmer benefits

### CHAPTER 3 LOAD BALANCING MECHANISM USING MOBILE AGENTS

48 CHAPTER 3 LOAD BALANCING MECHANISM USING MOBILE AGENTS 3.1 INTRODUCTION Load balancing is a mechanism used to assign the load effectively among the servers in a distributed environment. These computers