University of Waterloo Midterm Examination


Spring, 2006

Student Name: _______________    Student ID Number: _______________    Section: ______

Course Abbreviation and Number: CS350
Course Title: Operating Systems
Sections: 01 (14:30), 02 (11:30)
Instructor: A. Aboulnaga
Date of Exam: June 20, 2006
Exam Returned: June 22, 2006
Appeal Deadline: July 4, 2006
Time Period: 19:00-21:00
Duration of Exam: 2 hours
Number of Exam Pages (including this cover sheet): 16 pages
Exam Type: Closed Book
Additional Materials Allowed: None

NOTE: KEEP YOUR ANSWERS AS CONCISE AS POSSIBLE.

Question 1: (14 marks)    Question 2: (11 marks)    Question 3: (10 marks)
Question 4: (12 marks)    Question 5: (12 marks)    Question 6: (13 marks)
Question 7: (11 marks)    Question 8: (17 marks)

Total: (100 marks)

Question 1. (14 marks)

a. (2 marks) What is context switching?

Context switching is when the kernel switches the CPU from one process to another. This requires the kernel to save the state of the currently running process in its process control block (PCB).

b. (3 marks) List three ways in which execution can switch from user space to kernel space (i.e., three ways in which an OS kernel can get back the CPU from a user process).

1- System calls, 2- Exceptions, 3- Interrupts

c. (2 marks) What is the timer interrupt? And how does the OS use it for scheduling?

Most computers have hardware clocks (or timers) that generate periodic interrupts, known as timer interrupts. The OS uses the timer interrupt to measure the passage of time and determine when the quantum of a running process has expired, at which point the process is preempted. The timer interrupt also ensures that the kernel will get the CPU back from a running process so that it can make scheduling decisions.

d. (3 marks) Why is Least Recently Used (LRU) considered to be impractical for use as a replacement policy in virtual memory systems?

Because it requires an LRU list or a per-page last-access time stamp to be updated by the MMU with every memory access. This is prohibitively expensive.

e. (2 marks) In virtual memory systems, why is replacing a clean page faster than replacing a dirty page?

When the OS replaces a dirty page, it has to write this page to the swap disk. A clean page can be replaced without writing it to the swap disk.

f. (2 marks) List one factor that favours having larger virtual memory pages and one factor that favours having smaller pages.

Factors favouring large pages: smaller page tables, fewer TLB entries needed to cover the same amount of memory, more efficient I/O when swapping pages in. Factors favouring small pages: reduced internal fragmentation, less likely to page in unnecessary data.
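To make part (a) concrete, here is a small user-space sketch (not from the exam, and not kernel code): the glibc ucontext calls save one execution state and restore another, which is the same save/restore idea a kernel context switch performs on a process, plus the address-space switch and the PCB bookkeeping. The names main_ctx, worker_ctx, and worker are my own.

/* Illustration only: "save this state, restore that state" with ucontext. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;
static char worker_stack[64 * 1024];

static void worker(void) {
    printf("worker: running with its own stack and registers\n");
    swapcontext(&worker_ctx, &main_ctx);   /* save worker's state, resume main */
    printf("worker: resumed exactly where it left off\n");
}

int main(void) {
    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp = worker_stack;
    worker_ctx.uc_stack.ss_size = sizeof(worker_stack);
    worker_ctx.uc_link = &main_ctx;        /* return here when worker finishes */
    makecontext(&worker_ctx, worker, 0);

    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &worker_ctx);   /* save main's state, restore worker's */
    printf("main: switching back to let worker finish\n");
    swapcontext(&main_ctx, &worker_ctx);
    printf("main: done\n");
    return 0;
}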

Question 2. (11 marks)

a. (4 marks) Processes (or threads) can be in one of three states: Running, Ready, or Blocked. In which state is the process (or thread) for each of the following four cases?

(i) Waiting for data to be read from a disk. -- Blocked
(ii) Spin-waiting for a lock to be released. -- Running
(iii) Having just called wait() on a condition variable in a monitor. -- Blocked
(iv) Having just completed an I/O and waiting to get scheduled again on the CPU. -- Ready

b. (4 marks) Consider the following list of actions. Put a check mark in the blank beside those actions that should be performed by the kernel, and not by user programs. (0.5 marks per action)

___ reading the value of the program counter (PC).
___ changing the value of the program counter (PC).
_X_ changing the value of the segment table base register.
___ changing the value of the stack pointer (SP).
_X_ increasing the size of an address space.
_X_ creating a memory segment that is shared between multiple processes.
___ writing to a memory segment that is shared between multiple processes.
_X_ disabling interrupts.

c. (3 marks) Which of the following is shared between threads of the same process? Put a check mark in the blank beside the items that are shared. (0.5 marks per item)

___ integer and floating point registers.
___ program counter.
_X_ heap memory.
___ stack memory.
_X_ global variables.
_X_ open files.
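A quick illustration of part (c) (my sketch, not part of the exam): in a pthreads program, global variables and heap allocations are visible to every thread, while each thread's locals live on its own private stack.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int global_flag = 42;       /* global variable: shared by all threads */
int *heap_slots;            /* heap memory: shared by all threads     */

static void *worker(void *arg) {
    int id = (int)(long)arg;   /* argument and locals: on this thread's private stack */
    int local = id * 10;
    heap_slots[id] = local;    /* both threads can write the shared heap */
    printf("thread %d sees global_flag=%d, its private local=%d\n",
           id, global_flag, local);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    heap_slots = calloc(2, sizeof *heap_slots);

    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    /* main can read what either thread wrote, because the heap is shared. */
    printf("main reads heap_slots = {%d, %d}\n", heap_slots[0], heap_slots[1]);
    free(heap_slots);
    return 0;
}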

Question 3. (10 marks) Is each of the following statements True or False? Explain your answer.

a. (2 marks) A multi-threaded program is, in general, non-deterministic, i.e., the program can give different outputs in different runs on the same input.

True. The threads can be scheduled differently by the OS in different runs, depending on factors such as when I/O completes or when interrupts happen. The threads can, in general, access shared data, and the different scheduling of threads will result in different values of the shared data and hence different output. The different scheduling of the threads can also result in different ordering of program output.

b. (2 marks) When monitors are used for synchronization in a multi-threaded program, they (the monitors) eliminate the possibility of threads waiting indefinitely for a synchronization event to occur.

False. Monitors simplify the task of writing concurrent programs, but indefinite waiting can still happen. For example, a thread in a monitor can call wait() on a condition variable, and there may be no thread that will call signal() on this variable.

c. (2 marks) It is not possible for thrashing to occur in a system that uses a recency-based page replacement policy such as LRU.

False. Thrashing happens when the multiprogramming level is too high for the available physical memory, so processes cannot keep their working sets in memory. Thrashing has nothing to do with the replacement policy.
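A small sketch (not from the exam) of the non-determinism claimed in part (a): two threads increment a shared counter without synchronization, so the printed total can differ from run to run on the same input.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

long counter = 0;   /* shared data, intentionally left unprotected */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        counter++;           /* read-modify-write race with the other thread */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Would be 2000000 if the increments were atomic; the actual value
     * depends on how the two threads were interleaved in this run. */
    printf("counter = %ld\n", counter);
    return 0;
}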

d. (2 marks) One way to reduce the likelihood of thrashing is to increase the amount of main memory (RAM) in the system.

True. Increasing the amount of memory allows processes to keep more pages in memory, so the processes are more likely to have their working sets in memory, and the likelihood of thrashing is reduced.

e. (2 marks) If every page in memory is accessed (used) between any two page faults, then the Clock replacement algorithm behaves exactly like FIFO.

True. If every page in memory is used between any two page faults, the use bits of all frames on the clock face will be set by the MMU between replacements. Thus, the clock hand will not find a frame whose use bit is clear until it makes a full cycle around the face of the clock and comes back to the frame on which it started. That frame will have a clear use bit because the clock hand cleared it when it started its sweep, so this is the frame that will be replaced. Thus, if the clock hand starts at frame i, it will make a full sweep of the clock face and come back to replace frame i. Next time, it will replace frame i + 1, and so on. So the frames will be replaced in the order 0, 1, 2, ..., which is FIFO.

Question 4. (12 marks)

a. (4 marks) Briefly describe how the operating system gets loaded into memory and starts executing when the machine is powered up (also known as booting the operating system).

1. The CPU starts executing instructions from a known location in memory when it is powered up.
2. That location is in ROM, and it holds a simple program that loads a bootstrap program from a fixed location on disk and starts executing this bootstrap program.
3. The bootstrap program initializes the system (registers, memory, devices), loads the full operating system into memory, and starts executing it.

b. (3 marks) In class, we discussed three heuristics for memory placement when we are using variable sized memory allocation: first fit, best fit, and worst fit. Briefly describe the worst fit allocation strategy, and explain the motivation behind it.

The worst fit heuristic specifies that the OS should allocate the largest free memory area or hole to satisfy a memory request. This results in the largest possible leftover hole after satisfying the memory request. The motivation behind worst fit is that such a large leftover hole would be more useful for satisfying future memory requests than the smaller leftover holes that we get with first fit or best fit.
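A minimal worst-fit sketch (my illustration, not from the lecture notes): scan a free list, pick the largest hole that is big enough, and shrink it by the requested amount. The hole_t structure and addresses below are assumptions made for the example.

#include <stddef.h>
#include <stdio.h>

/* A free hole in a variable-sized allocation scheme. */
typedef struct hole {
    size_t start;
    size_t size;
    struct hole *next;
} hole_t;

/* Worst fit: choose the largest hole that can satisfy the request, so the
 * leftover piece is as large (and as reusable) as possible.
 * Returns the start address of the allocation, or (size_t)-1 on failure. */
size_t worst_fit_alloc(hole_t *free_list, size_t request) {
    hole_t *worst = NULL;
    for (hole_t *h = free_list; h != NULL; h = h->next)
        if (h->size >= request && (worst == NULL || h->size > worst->size))
            worst = h;
    if (worst == NULL)
        return (size_t)-1;            /* no hole is large enough */

    size_t addr = worst->start;
    worst->start += request;          /* shrink the chosen hole in place;       */
    worst->size  -= request;          /* a size-0 hole could be unlinked later  */
    return addr;
}

int main(void) {
    hole_t h2 = { 500, 300, NULL };   /* 300-byte hole at address 500 */
    hole_t h1 = { 100, 120, &h2 };    /* 120-byte hole at address 100 */

    /* Worst fit places an 80-byte request in the 300-byte hole at 500. */
    printf("80-byte request placed at %zu\n", worst_fit_alloc(&h1, 80));
    return 0;
}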

c. (5 marks) Consider an OS that uses local page replacement (i.e., each process gets its own set of frames independent of any other process). Assume that this OS uses the Clock page replacement policy. Consider a process in this OS that has a working set of M pages, and is allocated exactly M frames.

(i) Do you expect the page fault frequency for this process to be high or low? Explain briefly.

Low. A process mostly accesses pages from its working set. Since the working set is entirely in memory, the page fault frequency will be low.

(ii) Consider the case when this process incurs a page fault. On average, how many steps will the clock hand advance to find a victim for replacement (i.e., how many times will the victim pointer be incremented)? Explain your answer.

The process incurs page faults only infrequently. Thus, when a page fault does happen, it is likely that all M frames will have their use bits set. The victim pointer will therefore have to go through all M frames clearing the use bits before it can find a candidate for replacement. Thus, the clock hand will advance M times on each page fault.
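For reference, a compact sketch of the Clock policy used above (my simplification, not course code): frames form a circle, each with a use bit; the hand clears set bits until it finds a clear one. With all use bits set, as in part (ii), the hand sweeps every frame and comes back to where it started.

#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 4

static int  page_in_frame[NFRAMES];   /* which page each frame holds            */
static bool use_bit[NFRAMES];         /* set by the "MMU" on every access       */
static int  hand = 0;                 /* the clock hand / victim pointer        */

/* Advance the hand, clearing use bits, until a frame with a clear use bit
 * is found. That frame is the victim. */
static int clock_pick_victim(void) {
    while (use_bit[hand]) {
        use_bit[hand] = false;         /* give this frame a second chance */
        hand = (hand + 1) % NFRAMES;
    }
    int victim = hand;
    hand = (hand + 1) % NFRAMES;       /* next search starts after the victim */
    return victim;
}

int main(void) {
    for (int i = 0; i < NFRAMES; i++) { page_in_frame[i] = i; use_bit[i] = true; }

    /* All use bits are set, so the hand is incremented NFRAMES times and the
     * victim is the frame on which the sweep started. */
    int victim = clock_pick_victim();
    printf("victim frame = %d (hand advanced %d times)\n", victim, NFRAMES);
    return 0;
}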

Question 5. (12 marks)

a. (4 marks) Suppose that two long running processes, P1 and P2, are running in a system. Neither program performs any system calls that might cause it to block, and there are no other processes in the system. P1 has 2 threads and P2 has 1 thread.

(i) What percentage of CPU time will P1 get if the threads are kernel threads? Explain your answer.

If the threads are kernel threads, they are independently scheduled and each of the three threads will get a share of the CPU. Thus, the 2 threads of P1 will get 2/3 of the CPU time. That is, P1 will get 66% of the CPU.

(ii) What percentage of CPU time will P1 get if the threads are user threads? Explain your answer.

If the threads are user threads, the threads of each process map to one kernel thread, so each process will get a share of the CPU. The kernel is unaware that P1 has two threads. Thus, P1 will get 50% of the CPU.

b. (3 marks) When a thread is waiting for an event to occur (e.g., waiting for a lock to be released), describe a situation in which busy waiting (also known as spin-waiting) can result in better performance than blocking while waiting. Explain why busy waiting never performs better than blocking while waiting on single-CPU systems.

If the thread will wait for a short amount of time (compared to the time required for a context switch), then it is better to spin-wait. Blocking while waiting requires two context switches, one to block the thread and one to unblock it. If a thread t1 is waiting for an event in a single-CPU system (e.g., waiting for a lock to be released), then the thread t2 that will cause the event to happen (e.g., the thread that will release the lock) must be scheduled for t1's waiting to end. By spin-waiting, t1 is preventing t2 from being scheduled, and is therefore prolonging its own wait as well as wasting system resources. In a multi-CPU system, t2 would be scheduled on a different CPU.

c. (5 marks) Some hardware provides a TestAndSet instruction that is executed atomically, and that can be used to implement mutual exclusion.

(i) Briefly describe what the TestAndSet instruction does.

The TestAndSet instruction sets the value of a variable and returns the old value. It is executed atomically.

(ii) Write a fragment of code or pseudo-code to show how TestAndSet can be used to implement critical sections. Be sure to indicate which variables in your code or pseudo-code are shared between the threads and which are local to each thread.

boolean lock;   // Shared between threads, initially false.

while (TestAndSet(&lock, true)) { }   // spin-wait
/* Critical Section */
lock = false;
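A runnable counterpart to the pseudo-code above (my sketch, using C11's standard atomic_flag rather than a hardware TestAndSet instruction directly; atomic_flag_test_and_set has exactly the set-and-return-old-value behaviour described in part (i)).

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* shared between threads, initially clear */
long counter = 0;                      /* shared data protected by the lock       */

static void acquire(void) {
    /* Atomically set the flag and get its old value; spin while it was set. */
    while (atomic_flag_test_and_set(&lock))
        ;                              /* spin-wait */
}

static void release(void) {
    atomic_flag_clear(&lock);
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        acquire();
        counter++;                     /* critical section */
        release();
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}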

Question 6. (13 marks) Consider the following code for inserting into a hash table that will be concurrently used by multiple threads. Some details of the code are omitted for clarity. The hash table is implemented as an array of N linked lists, and each node in these linked lists contains an integer hash key, key, and its associated object, obj.

void hashinsert(int key, void *obj) {
    int listnum = key % N;   // Determine which list the item belongs to.
    listinsert(hasharray[listnum], key, obj);
}

void listinsert(node *head, int key, void *obj) {
    // Create a new node, nn, that contains key and obj.
    nn->next = head;
    head = nn;
}

a. (4 marks) Show a problem that can occur if two threads try to insert items into the hash table simultaneously.

A race condition can happen that would lead to one of the insertions being lost:

    Thread 1              Thread 2
    nn->next = head;
                          nn->next = head;
                          head = nn;
    head = nn;

Here, the insertion of Thread 2 was lost.

b. (4 marks) Add a single lock to the above code to fix the problem. Feel free to annotate the code given above or to copy it in the space below.

Lock l;   // Lock variable shared between threads.

void hashinsert(int key, void *obj) {
    int listnum = key % N;   // Determine which list the item belongs to.
    l.acquire();
    listinsert(hasharray[listnum], key, obj);
    l.release();
}

c. (2 marks) What is the biggest performance problem with this single-lock solution?

This solution serializes access to hasharray. If multiple threads want to update different lists in hasharray, they would still have to wait for each other.

d. (3 marks) Show how you can use more locks to improve the performance of accessing the hash table by allowing more concurrency.

We can have an array of locks, one for each entry in hasharray:

Lock l[N];   // Array of lock variables shared between threads.

void hashinsert(int key, void *obj) {
    int listnum = key % N;   // Determine which list the item belongs to.
    l[listnum].acquire();
    listinsert(hasharray[listnum], key, obj);
    l[listnum].release();
}
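For completeness, a runnable C sketch of the per-bucket-lock idea from part (d), using pthread mutexes. The bucket count, worker threads, and inlined node creation (which the exam's code leaves out) are my own assumptions for the example.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N 8                               /* number of buckets (chosen for the sketch) */

typedef struct node {
    int key;
    void *obj;
    struct node *next;
} node;

static node *hasharray[N];                /* shared hash table: N linked lists */
static pthread_mutex_t bucket_lock[N];    /* one lock per bucket               */

void hashinsert(int key, void *obj) {
    int listnum = key % N;                /* which list the item belongs to */
    node *nn = malloc(sizeof *nn);
    nn->key = key;
    nn->obj = obj;

    pthread_mutex_lock(&bucket_lock[listnum]);   /* only inserts into the same  */
    nn->next = hasharray[listnum];               /* bucket wait for each other  */
    hasharray[listnum] = nn;
    pthread_mutex_unlock(&bucket_lock[listnum]);
}

static void *worker(void *arg) {
    int base = (int)(long)arg;
    for (int i = 0; i < 1000; i++)
        hashinsert(base + i, NULL);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&bucket_lock[i], NULL);

    pthread_create(&t1, NULL, worker, (void *)0L);
    pthread_create(&t2, NULL, worker, (void *)100000L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    int count = 0;
    for (int i = 0; i < N; i++)
        for (node *p = hasharray[i]; p; p = p->next)
            count++;
    printf("total nodes inserted = %d (expected 2000)\n", count);
    return 0;
}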

Question 7. (11 marks) Consider the following information about resources in a system:

- There are two classes of allocatable resource labelled R1 and R2.
- There are two instances of each resource.
- There are four processes labelled P1 through P4.
- There are some resource instances already allocated to processes, as follows:
  one instance of R1 held by P2, another held by P3;
  one instance of R2 held by P1, another held by P4.
- Some processes have requested additional resources, as follows:
  P1 wants one instance of R1;
  P3 wants one instance of R2.

a. (5 marks) Draw the resource allocation graph for this system. Use the style of diagram from the lecture notes.

[Resource allocation graph: allocation edges R1 -> P2, R1 -> P3, R2 -> P1, R2 -> P4; request edges P1 -> R1, P3 -> R2.]

b. (2 marks) What is the state (runnable, waiting) of each process? For each process that is waiting, indicate what it is waiting for.

P1: waiting (for an instance of R1). P2: runnable. P3: waiting (for an instance of R2). P4: runnable.

c. (4 marks) Is this system deadlocked? If so, state which processes are involved. If not, give an execution sequence that eventually ends, showing resource acquisition and release at each step.

Not deadlocked (even though a cycle exists in the graph). One possible execution sequence: P2, P1, P4, P3. P2 runs to completion and releases its instance of R1; P1 acquires R1, runs to completion, and releases R1 and R2; P4 runs to completion and releases its instance of R2; P3 acquires R2, runs to completion, and releases its resources.
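A small sketch (my own, not lecture code) of the graph-reduction reasoning behind part (c): repeatedly find a process whose outstanding request can be satisfied from the currently available instances, let it run, and reclaim what it holds. If every process can be retired this way, the system is not deadlocked. The retirement order it prints is one valid sequence and may differ from the order given above.

#include <stdio.h>
#include <stdbool.h>

#define NPROC 4
#define NRES  2

/* Instances of R1 and R2 held and requested by P1..P4 (from the question). */
static int held[NPROC][NRES]    = { {0,1}, {1,0}, {1,0}, {0,1} };
static int request[NPROC][NRES] = { {1,0}, {0,0}, {0,1}, {0,0} };
static int available[NRES]      = { 0, 0 };   /* both instances of each resource are held */

int main(void) {
    bool finished[NPROC] = { false };
    int done = 0;

    while (done < NPROC) {
        int p;
        for (p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (request[p][r] > available[r]) can_run = false;
            if (can_run) break;
        }
        if (p == NPROC) {                     /* nobody can proceed: deadlock */
            printf("deadlock among the unfinished processes\n");
            return 1;
        }
        for (int r = 0; r < NRES; r++)        /* P(p+1) gets its request, runs,   */
            available[r] += held[p][r];       /* and releases everything it held  */
        finished[p] = true;
        done++;
        printf("P%d acquires what it requested, runs, and releases its resources\n", p + 1);
    }
    printf("all processes finished: the system is not deadlocked\n");
    return 0;
}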

Question 8. (17 marks)

a. (4 marks) Using the page reference string shown below, fill in the frames and missing information to show how the OPT (optimal) page replacement algorithm would operate. Use a dash to fill in blank locations. Note that when there is more than one page that is a possible victim, always choose the one with the lowest frame number.

Num      1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Refs     A  B  C  D  B  E  C  G  D  A  G  D  B  E  C
Frame 1  A  A  A  A  A  A  A  A  A  A  A  A  B  B  C
Frame 2  -  B  B  B  B  E  E  E  E  E  E  E  E  E  E
Frame 3  -  -  C  C  C  C  C  G  G  G  G  G  G  G  G
Frame 4  -  -  -  D  D  D  D  D  D  D  D  D  D  D  D
Fault?   X  X  X  X  -  X  -  X  -  -  -  -  X  -  X

b. (4 marks) Fill in the frames and missing information below to show how the LRU (least recently used) page replacement algorithm would operate. Use a dash to fill in blank locations. Note that when there is more than one page that is a possible victim, always choose the one with the lowest frame number.

Num      1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Refs     A  B  C  D  B  E  C  G  D  A  G  D  B  E  C
Frame 1  A  A  A  A  A  E  E  E  E  A  A  A  A  E  E
Frame 2  -  B  B  B  B  B  B  B  D  D  D  D  D  D  D
Frame 3  -  -  C  C  C  C  C  C  C  C  C  C  B  B  B
Frame 4  -  -  -  D  D  D  D  G  G  G  G  G  G  G  C
Fault?   X  X  X  X  -  X  -  X  X  X  -  -  X  X  X
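The tables above can be checked mechanically. Here is a small LRU simulator (my sketch, breaking ties by lowest frame number as the question requires) that reproduces the frame contents and the 11 faults of part (b).

#include <stdio.h>
#include <string.h>

#define NFRAMES 4

int main(void) {
    const char *refs = "ABCDBECGDAGDBEC";
    char frame[NFRAMES];                 /* page held by each frame ('-' = empty) */
    int  last_use[NFRAMES];              /* time of most recent reference         */
    int  faults = 0;

    memset(frame, '-', sizeof frame);
    memset(last_use, -1, sizeof last_use);

    for (int t = 0; refs[t] != '\0'; t++) {
        char page = refs[t];
        int hit = -1, victim = 0;

        for (int f = 0; f < NFRAMES; f++)
            if (frame[f] == page) hit = f;

        if (hit >= 0) {
            last_use[hit] = t;           /* hit: just refresh its recency */
        } else {
            faults++;
            /* Victim = least recently used frame; on ties (e.g., empty frames),
             * the strict comparison keeps the lowest frame number. */
            for (int f = 1; f < NFRAMES; f++)
                if (last_use[f] < last_use[victim]) victim = f;
            frame[victim] = page;
            last_use[victim] = t;
        }
        printf("ref %2d (%c): frames = %.4s %s\n",
               t + 1, page, frame, hit >= 0 ? "" : "FAULT");
    }
    printf("total faults = %d\n", faults);   /* 11 for this reference string */
    return 0;
}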

c. (4 marks) Assume that there are 5 pages, A, B, C, D, and E. Fill in the page reference string and complete the rest of the information in the table below so that LRU is the worst page replacement algorithm (i.e., it results in the maximum number of page faults). Use a dash to fill in blank locations. Note that when there is more than one page that is a possible victim, always choose the one with the lowest frame number.

Num      1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Refs     A  B  C  D  E  A  B  C  D  E  A  B  C  D  E
Frame 1  A  A  A  A  E  E  E  E  D  D  D  D  C  C  C
Frame 2  -  B  B  B  B  A  A  A  A  E  E  E  E  D  D
Frame 3  -  -  C  C  C  C  B  B  B  B  A  A  A  A  E
Frame 4  -  -  -  D  D  D  D  C  C  C  C  B  B  B  B
Fault?   X  X  X  X  X  X  X  X  X  X  X  X  X  X  X

d. (2 marks) Describe the pattern of references in the reference string that you came up with in part (c).

A repeated sequential scan of a set of pages.

e. (3 marks) Suggest a replacement policy that would minimize the number of page faults for the pattern of references that you identified in part (d). Your replacement policy cannot require advance knowledge of future page references.

Most Recently Used (MRU) is a good policy in this case. MRU would behave just like OPT here.