Operating Systems, 6th ed. Test Bank, Chapter 7



True / False Questions: Chapter 7 Memory Management

1. T / F In a multiprogramming system, main memory is divided into multiple sections: one for the operating system (resident monitor, kernel) and one for the set of processes currently being executed.

2. T / F The task of subdividing memory between the O/S and processes is performed automatically by the O/S and is called relocation.

3. T / F The practice in which a program and data are organized in such a way that various modules can be assigned the same region of memory is called overlaying.

4. T / F The concept of virtual memory is based on one or both of two basic techniques: segmentation and paging.

5. T / F A major problem with the largely obsolete Fixed Partitioning memory management technique is that of external fragmentation.

6. T / F The problem of internal fragmentation can be lessened in a system employing a fixed-partition memory management scheme by using unequal-size partitions.

7. T / F In the Dynamic Partitioning technique of memory management, the best-fit placement algorithm is usually the best performer of the available algorithms.

8. T / F In the Dynamic Partitioning technique of memory management, compaction refers to shifting the processes into a contiguous block, resulting in all the free memory being aggregated into a single block.

9. T / F In the Dynamic Partitioning technique of memory management, the first-fit placement algorithm scans memory from the location of the last placement and chooses the first available block it finds that satisfies the request.

10. T / F The Buddy System is a reasonable compromise to overcome the disadvantages of both the fixed and variable partition schemes.

11. T / F A physical memory address is a reference to a memory location independent of the current assignment of data to memory.

12. T / F A memory system employing paging may suffer slightly from internal fragmentation and experiences no external fragmentation.

13. T / F In a memory system employing paging, the chunks of a process (called frames) can be assigned to available chunks of memory (called pages).

14. T / F A memory system employing segmentation may suffer slightly from external fragmentation and experience no internal fragmentation.

15. T / F A memory system employing segmentation consists of a number of user program segments that must be of the same length and have a maximum segment length.

Multiple Choice Questions:

1. The task of subdividing memory between the O/S and processes is performed automatically by the O/S and is called:
a. Protection
b. Relocation
c. Memory Management

2. The concept of Memory Management satisfies certain system requirements, including:
a. Protection
b. Relocation
c. Physical organization

3. The practice in which a program and data are organized in such a way that various modules can be assigned the same region of memory is called:
a. Overlaying
b. Sharing
c. Relocation

4. The concept of virtual memory is based on one or both of two basic techniques:
a. Overlaying and relocation
b. Segmentation and paging
c. Segmentation and partitioning

5. A problem with the largely obsolete Fixed Partitioning memory management technique is that of:
a. Allowing only a fixed number of processes
b. Inefficient use of memory
c. Internal fragmentation

6. The problem of internal fragmentation can be lessened in systems employing a fixed-partition memory management scheme by using:
a. Random-size partitions
b. Equal-size partitions
c. Unequal-size partitions

7. In the Dynamic Partitioning technique of memory management, the phenomenon that results in unused blocks of memory outside of existing partitions is called:
a. Internal fragmentation
b. External fragmentation

c. Compaction

8. In the Dynamic Partitioning technique of memory management, the placement algorithm that chooses the block that is closest in size to the request is called:
a. Best-fit
b. First-fit
c. Next-fit

9. In the Dynamic Partitioning technique of memory management, the placement algorithm that scans memory from the location of the last placement and chooses the next available block that is large enough to satisfy the request is called:
a. Best-fit
b. First-fit
c. Next-fit

10. A reference to a memory location independent of the current assignment of data to memory is called a(n):
a. Relative address
b. Logical address
c. Absolute address

11. An actual location in main memory is called a(n):
a. Relative address
b. Logical address
c. Absolute address

12. The page table for each process maintains:
a. The frame location for each page of the process
b. The page location for each frame of the process
c. The physical memory location of the process

13. In a system employing a paging scheme for memory management, wasted space is due to:
a. External fragmentation
b. Internal fragmentation
c. Pages and frames of different specified sizes

14. In a system employing a segmentation scheme for memory management, wasted space is due to:
a. External fragmentation
b. Internal fragmentation
c. Segments of different sizes

15. In a system employing a segmentation scheme for memory management, a process is divided into:
a. One segment per thread
b. A number of segments which must be of equal size
c. A number of segments which need not be of equal size

Fill-In-The-Blank Questions:

1. The task of subdividing memory between the O/S and processes is performed automatically by the O/S and is called memory management.

2. The Memory Management task of moving the process image between different areas of memory as required to support swapping is referred to as relocation.

3. The practice in which a program and data are organized in such a way that various modules can be assigned the same region of memory is called overlaying.

4. In almost all modern multiprogramming systems, memory is managed using a sophisticated technique known as virtual memory.

5. The phenomenon in which there is wasted space internal to a partition, due to the fact that the block of data loaded is smaller than the partition, is referred to as internal fragmentation.

6. The problem of internal fragmentation can be lessened in a system employing a fixed-partition memory management scheme by using unequal-size partitions.

7. In the Dynamic Partitioning technique of memory management, the process of shifting processes so they occupy a single contiguous block in memory is called compaction.

8. In the Dynamic Partitioning technique of memory management, the placement algorithm that chooses the block that is closest in size to the request is called best-fit.

9. In the Dynamic Partitioning technique of memory management, the phenomenon that results in unused blocks of memory outside of existing partitions is called external fragmentation.

10. Programs that employ relative addresses in memory are loaded using dynamic run-time loading.

11. A compromise between the fixed and dynamic partitioning schemes for memory management that employs aspects of both is called the Buddy System.

12. In a system that employs a paging memory management scheme, the page table shows the frame location for each page of the process.
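As an illustrative aside (not part of the test bank's answer key), the three dynamic-partitioning placement algorithms that recur in the questions above can be sketched in a few lines. The free list representation and all values here are hypothetical; each function returns the index of the chosen hole, or None if no hole is large enough.

```python
# Sketch of the dynamic-partitioning placement algorithms.
# The free list is a list of (start, size) holes; values are made up.

def first_fit(holes, request):
    """Scan from the beginning; take the first hole that is large enough."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Take the hole whose size is closest to (but at least) the request."""
    candidates = [(size, i) for i, (start, size) in enumerate(holes)
                  if size >= request]
    return min(candidates)[1] if candidates else None

def next_fit(holes, request, last):
    """Resume scanning at the location of the last placement, wrapping around."""
    n = len(holes)
    for k in range(n):
        i = (last + k) % n
        if holes[i][1] >= request:
            return i
    return None

holes = [(0, 100), (300, 40), (500, 250), (900, 60)]
print(first_fit(holes, 50))    # index 0: the 100-unit hole
print(best_fit(holes, 50))     # index 3: the 60-unit hole (closest fit)
print(next_fit(holes, 50, 2))  # index 2: the 250-unit hole at/after position 2
```

The example shows why best-fit is usually *not* the best performer (a point tested above): it tends to leave behind slivers too small to reuse, forcing more frequent compaction.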

13. In a system that employs a paging memory management scheme, the chunks of a process can be assigned to available chunks of memory, which are called frames.

14. A system that employs a segmentation memory management scheme makes use of a segment table that provides the starting address of the corresponding segment in main memory.

15. In a system that employs a segmentation memory management scheme, the program and its associated data are divided into a number of segments that need not be of the same length.

True / False Questions: Chapter 8 Virtual Memory

1. T / F In a system employing a memory management strategy that doesn't require an entire process to be in main memory at one time, the portion of a process that is actually in main memory at any given time is defined to be the resident set of the process.

2. T / F The condition known as thrashing occurs when the majority of the processes in main memory require repetitive blocking on a single shared I/O device in the system.

3. T / F The modify (M) bit is a control bit in a page table entry that indicates whether the contents of the corresponding page have been altered since the page was last loaded into memory.

4. T / F A page fault occurs when the desired page table entry is not found in the Translation Lookaside Buffer (TLB).

5. T / F One of the advantages to the programmer of virtual memory using segmentation is that it simplifies the handling of growing data structures by allowing the segment to grow or shrink as necessary.

6. T / F In a combined paging/segmentation system, a user's address space is broken up into a number of fixed-size pages, which in turn are broken up into a number of segments.

7. T / F To achieve sharing in a segmentation system, it is possible for a segment to be referenced in the segment tables of more than one process.

8. T / F Linux is an example of an operating system that does not provide virtual memory.

9. T / F The fetch policy determines when a page should be brought into main memory.

10. T / F The Least Recently Used (LRU) replacement policy replaces the page in memory that has been referenced most recently.

11. T / F A global replacement policy considers all unlocked pages in main memory as candidates for replacement, regardless of which process owns a particular page.

12. T / F In a precleaning policy, modified pages are written to secondary memory once they have been selected for replacement.

13. T / F SVR4 and Solaris systems use two separate schemes for memory management: one for user processes and disk I/O, and another for kernel memory allocation.

14. T / F Linux makes use of a two-level page table structure, consisting of a page directory and a page table.

15. T / F Every W2K user process sees a separate 32-bit address space, allowing 4 GB of memory per process.

Multiple Choice Questions:

16. The type of memory that allows for very effective multiprogramming and relieves the user of memory size constraints is referred to as:
a. Real memory
b. Virtual memory
c. Main memory

17. The situation where the processor spends most of its time swapping process pieces rather than executing instructions is called:
a. Paging
b. The Principle of Locality
c. Thrashing

18. The situation that occurs when the desired page table entry is not found in the Translation Lookaside Buffer (TLB) is called a:
a. TLB miss
b. TLB hit
c. Page fault

19. The real address of a word in memory is translated from the following portions of a virtual address:

a. Page number and frame number
b. Page number and offset
c. Frame number and offset

20. Segmentation has a number of advantages to the programmer over a non-segmented address space, including:
a. Simplifying the handling of growing data structures
b. Sharing among processes
c. Protection

21. In a combined paging/segmentation system, a user's address space is broken up into a number of:
a. Segments or pages, at the discretion of the programmer
b. Fixed-size pages, which are in turn broken down into variable-sized segments
c. Variable-sized segments, which are in turn broken down into fixed-size pages

22. Sharing is achieved in a segmentation system by:
a. Referencing a segment in the segment tables of more than one process
b. Each process segment table having a reference to the dispatcher main memory area
c. Having a common data area that all processes can share

23. A fundamental choice in the design of the memory-management portion of an O/S is:
a. Whether or not to use virtual memory techniques
b. Whether to use paging, segmentation, or a combination of the two
c. The algorithms employed for various aspects of memory management

24. The fetch policy that exploits the characteristics of most secondary memory devices, such as disks, which have seek time and rotational latency, is called:
a. Demand paging
b. Prepaging
c. Swapping

25. The replacement policy that is impossible to implement because it would require the O/S to have perfect knowledge of future events is called the:
a. Optimal policy
b. Least recently used (LRU) policy
c. Clock policy

26. The replacement policy that chooses only among the resident pages of the process that generated the page fault in selecting a page to replace is referred to as a:
a. Global replacement policy
b. Local replacement policy
c. Variable replacement policy

27. The concept associated with determining the number of processes that will be resident in main memory is referred to as:
a. A cleaning policy
b. The page fault frequency
c. Load control

28. In SVR4 and Solaris systems, the memory management scheme that manages user processes and disk I/O is called the:
a. Paging system
b. Virtual memory manager
c. Kernel memory allocator

29. The multi-level memory management scheme implemented in Linux was designed to minimize large page tables and directories in which of the following lines of processors:
a. 16-bit X86 architecture
b. 32-bit Pentium/X86 architecture
c. 64-bit Alpha architecture

30. The Windows 2000 virtual memory manager can use page sizes ranging from:
a. 4 KB to 64 KB

Fill-In-The-Blank Questions:

31. In a system employing a memory management strategy that doesn't require an entire process to be in main memory at one time, the portion of a process that is actually in main memory at any given time is defined to be the resident set of the process.

32. The situation where the processor spends most of its time swapping process pieces rather than executing instructions is called thrashing.

33. Most virtual memory schemes make use of a special high-speed cache for page table entries, called a translation lookaside buffer (TLB).

34. Each entry in a page table contains control bits and the corresponding frame number if the page is resident in memory.

35. In a segmentation system, each entry in a segment table contains control bits and the starting address and the length of the segment.
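For reference, the page-table lookup that questions 19 and 33–34 describe can be sketched concretely. This is an illustrative sketch only; the page size and page-table contents are made-up values, and a real MMU would consult the TLB before the page table and raise a page fault on a missing entry.

```python
# Sketch of paging address translation: a virtual address splits into a
# page number and an offset; the page table maps page number -> frame
# number; the real address is frame number * page size + offset.

PAGE_SIZE = 1024  # hypothetical 1 KB pages

page_table = {0: 5, 1: 2, 2: 7}  # hypothetical page number -> frame number

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]  # a missing entry here would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(2 * PAGE_SIZE + 100))  # page 2 -> frame 7: 7*1024 + 100 = 7268
```

Note that the offset passes through the translation unchanged, which is why the real address is formed from the frame number and offset (question 19, choice c).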

36. Paging, which is transparent to the programmer, eliminates external fragmentation, provides efficient use of main memory, and has pieces of fixed, equal size.

37. Segmentation, which is visible to the programmer, has the ability to handle growing data structures, modularity, and support for sharing and protection.

38. An example of an O/S that doesn't provide virtual memory is MS-DOS or early UNIX.

39. The fetch policy where a page is brought into main memory only if a reference is made to a location on that page is called demand paging.

40. The replacement policy that treats the page frames allocated to a process as a circular buffer is called FIFO.

41. A local replacement policy chooses only among the resident pages of the process that generated the page fault in selecting a page to replace.

42. A precleaning policy writes modified pages before their page frames are needed so that pages can be written out in batches.

43. In SVR4 and Solaris systems, the memory management scheme that manages memory allocation for the kernel is called the kernel memory allocator.

44. Linux systems use a three-level page table structure in their memory management scheme to accommodate large addresses.

45. In a W2K system, although each user sees a 32-bit address space, allowing 4 GB of memory per process, a portion of this memory is reserved for O/S use, so a user process actually has access to 2 GB of virtual address space.

True / False Questions: Chapter 9 Uniprocessor Scheduling

1. T / F Scheduling affects the performance of a system because it determines which processes will wait and which will progress.

2. T / F The short-term scheduler may limit the degree of multiprogramming to provide satisfactory service to the current set of processes.

3. T / F Medium-term scheduling is part of the system swapping function.

4. T / F The long-term scheduler is invoked whenever an event occurs that may lead to the suspension or preemption of the currently running process.

5. T / F The main objective of short-term scheduling is to allocate processor time in such a way as to optimize one or more aspects of system behavior.

6. T / F One problem with a pure priority scheduling scheme is that lower-priority processes may suffer deadlock.

7. T / F The selection function determines which process, among ready processes, is selected next for execution.

8. T / F First-come-first-served (FCFS) is a simple scheduling policy that tends to favor I/O-bound processes over processor-bound processes.

9. T / F Round Robin is a scheduling technique also known as time slicing, because each process is given a slice of time before being preempted.

10. T / F The Shortest Process Next (SPN) scheduling policy is often used for time-sharing and transaction processing environments because of the lack of preemption.

11. T / F The Shortest Remaining Time (SRT) scheduling policy is a preemptive version of the Shortest Process Next (SPN) scheduling policy.

12. T / F In the Highest Response Ratio Next (HRRN) scheduling policy, longer jobs are favored, because they yield a larger ratio from the smaller denominator in the equation.

13. T / F A disadvantage of simulation is that results for a given run only apply to that particular collection of processes under that particular set of assumptions.

14. T / F In fair-share scheduling, each user is assigned a weighting of some sort that defines that user's share of system resources as a fraction of the total usage of those resources.

15. T / F The traditional UNIX scheduler, such as that used in SVR3 and 4.3 BSD UNIX systems, employs single-level feedback using round robin.

Multiple Choice Questions:

46. The type of scheduling that involves the decision to add a process to those that are at least partially in main memory and therefore available for execution is referred to as:
a. Long-term scheduling
b. Medium-term scheduling
c. I/O scheduling

47. The decision as to which job to admit to the system next can be based on which of the following criteria:
a. Simple FIFO
b. Priority
c. I/O requirements

48. Typically, the swapping-in function for processes is based on the need to manage:
a. Process priorities
b. Virtual memory
c. The degree of multiprogramming

49. In terms of frequency of execution, the short-term scheduler is usually the one that executes:
a. Most frequently
b. Least frequently
c. About the same as the other schedulers

50. Response time in an interactive system is an example of:
a. System-oriented criteria for short-term scheduling policies
b. User-oriented criteria for short-term scheduling policies
c. System-oriented criteria for long-term scheduling policies

51. A typical way to overcome starvation of lower-priority processes in a priority-based scheduling system is to:
a. Change a process priority randomly
b. Change a process priority with its age
c. Round-robin cycling of processes in a priority queue

52. Which of the following scheduling policies allow the O/S to interrupt the currently running process and move it to the Ready state?
a. Preemptive
b. Non-preemptive
c. First-come-first-served

53. In terms of the queuing model, the total time that a process spends in a system (waiting time plus service time) is called:
a. Normalized turnaround time (TAT)
b. Finish time (FT)
c. Turnaround or residence time (TAT)

54. In the Round Robin scheduling technique, the principal design issue is:
a. Determining the fair distribution of time quanta to individual processes
b. Determining the method of cycling through a given set of processes
c. Determining the length of the time quantum

55. One difficulty with the Shortest Process Next (SPN) scheduling technique is:
a. The need to know or estimate required processing times for each process
b. The starvation of longer processes
c. The lack of preemption

56. One difficulty with the Shortest Remaining Time (SRT) scheduling technique is:
a. The need to know or estimate required processing times for each process
b. The starvation of shorter processes
c. The lack of preemption

57. Which of the following scheduling policies require prior knowledge or estimation of process length:
a. Shortest Remaining Time (SRT)
b. Shortest Process Next (SPN)
c. Highest Response Ratio Next (HRRN)

58. It is impossible to make definitive comparisons of various scheduling policies due to dependence on factors such as:
a. The probability distribution of service times of the various processes
b. The efficiency of the scheduling and context switching mechanisms
c. The nature of the I/O demand and performance of the I/O subsystem

59. The strategy that schedules processes based on their group affiliation is generally referred to as:
a. Queuing analysis
b. Simulation modeling
c. Fair share scheduling

60. The traditional UNIX scheduler divides processes into fixed bands of priority levels, with the highest-priority band being the:
a. Swapper band
b. File manipulation band
c. User process band

Fill-In-The-Blank Questions:

61. The task of assigning processes to the processor or processors over time, in a way that meets system objectives, is called scheduling.

62. The decision as to when to create a new process is generally driven by the desired degree of multiprogramming.

63. Medium-term scheduling is part of the system swapping function.

64. The short-term scheduler is invoked whenever an event occurs that may lead to the suspension or preemption of the currently running process.

65. Response time and throughput are examples of performance-related criteria for short-term scheduling.

66. In a system employing priority scheduling, the scheduler always selects the process with the highest priority level for processing.

67. The decision mode, which has two general categories, specifies the instants in time at which the selection function is exercised.

68. In terms of the queuing model, the total time that a process spends in a system (waiting time plus service time) is called the turnaround time (TAT).

69. The Round Robin scheduling technique is also known as time slicing, because each process is given a set amount of processor time before being preempted.

70. Shortest process next (SPN) is a scheduling policy in which the process with the shortest expected processing time is selected next, but there is no preemption.

71. Shortest remaining time (SRT) is a scheduling policy in which the process with the shortest expected processing time is selected next, and if a shorter process becomes ready in the system, the currently running process is preempted.

72. A scheduling mechanism that requires no prior knowledge of process length, yet can nevertheless favor shorter jobs, is known as the feedback scheduling mechanism.

73. Some of the difficulties of analytic modeling are overcome by using discrete-event simulation, which allows a wide range of policies to be modeled.

74. In fair-share scheduling, each user is assigned a weighting of some sort that defines that user's share of system resources as a fraction of the total usage of those resources.

75. The traditional UNIX scheduler, such as that used in SVR3 and 4.3 BSD UNIX systems, divides processes into fixed bands of priority levels.
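As a closing illustration of the Chapter 9 material on turnaround time (TAT) and non-preemptive policies, the following sketch contrasts first-come-first-served (FCFS) with shortest process next (SPN) on a small, hypothetical batch. All burst values are made up, and every process is assumed to arrive at time 0, so TAT equals finish time.

```python
# Hypothetical comparison of FCFS and SPN mean turnaround time.
# TAT = finish time - arrival time; with all arrivals at time 0,
# TAT for each process equals its finish time.

def mean_turnaround(service_times):
    clock, total = 0, 0
    for s in service_times:
        clock += s      # run the next process to completion (no preemption)
        total += clock  # accumulate this process's turnaround time
    return total / len(service_times)

bursts = [6, 3, 1, 7]                  # service times in arrival order
fcfs = mean_turnaround(bursts)         # FCFS: run in arrival order
spn = mean_turnaround(sorted(bursts))  # SPN: run shortest first

print(fcfs)  # (6 + 9 + 10 + 17) / 4 = 10.5
print(spn)   # (1 + 4 + 10 + 17) / 4 = 8.0
```

Running the short jobs first lowers the mean turnaround time, which is why SPN favors short processes; the flip side, tested in question 55 above, is that a steady stream of short jobs can starve a long one.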