3.2 CPU Scheduling

The aim of CPU scheduling is to share out CPU access so that the objectives of the system are met: response time (e.g. response to user commands), throughput (i.e. the number of processes completed over a fixed time), and processor efficiency (ideally, constant use of the processor without ill effects on processes). We concentrate on the uniprocessor case.

3.2.1 Types of Scheduling

There are three types of scheduling, categorised according to how often they are done:
1. Long-term scheduling (LTS)
2. Medium-term scheduling (MTS)
3. Short-term scheduling (STS)

3.2.1.1 Long-term scheduling (LTS)

LTS involves decisions about adding more processes to the pool of processes waiting for CPU access. That is, LTS controls the level of multiprogramming. Once the LTS admits a process it is either added to the STS queue (as a ready process) or to the MTS queue (as a suspended ready process). Considerations:
- How many processes?
  o More processes = less time each (so a limit is needed).
  o If idle time exceeds some threshold then LTS may add more processes.
- Which process next?

3.2.1.2 Medium-term scheduling (MTS)

MTS involves decisions about adding to the number of processes that are partially or fully in main memory. That is, deciding which of the suspended processes (under OS control) are admitted to the ready queues. This is largely a memory management issue.

3.2.1.3 Short-term scheduling (STS)

Also called the scheduler or dispatcher. STS involves the decision about which process gets the CPU next. It is invoked each time an event happens that interrupts the current process or presents an opportunity to pre-empt it, e.g. clock interrupts (time slice up, scheduled event), I/O interrupts, system calls, and miscellaneous signals/interrupts.
Here we concentrate on STS; LTS and MTS are mostly memory management issues.

3.2.2 ST Scheduling Algorithms: background concepts

The objective of short-term scheduling algorithms is to allocate processor time so as to optimise one or more aspects of system behaviour. First, some background concepts.

3.2.2.1 Evaluation Criteria

Scheduling algorithms are evaluated against criteria of two types: user oriented and system oriented. Of these, some are performance related and some are not.

3.2.2.1.1 User oriented

The behaviour of the system from the user's point of view, e.g. response time for an interactive user, or predictability (provision of the same service under different conditions).

3.2.2.1.2 System oriented

Focus on effective and efficient use of the CPU, e.g. throughput (the rate at which processes are completed).

3.2.2.1.3 Performance related

Quantitative and easily measured, e.g. response time, throughput.

3.2.2.1.4 Not performance related

Qualitative and hard to measure, e.g. predictability.

3.2.2.2 Priorities

Many systems assign a priority to processes; schedulers choose higher priority processes over lower ones. So there may be a separate ready queue for each priority: RQ0, RQ1, RQ2, etc. The scheduler will process RQ0 first according to some algorithm (e.g. FCFS), then RQ1, etc. Since this can starve low priority processes, a scheme is often introduced where a process's priority rises the longer it is waiting.

3.2.2.3 Processing Burst Time
When a process is ready to run and gains access to the CPU, it will have a small amount of its overall work to do and will then either be forced out of the CPU by some external interrupt or leave voluntarily because it is waiting for some service (e.g. I/O). That small amount of work is called its processing burst. Typically it is between 1 and 5 milliseconds of work on the CPU.

3.2.2.4 Pre-emptive and Non-pre-emptive Scheduling Algorithms

A non-pre-emptive scheduling algorithm is one that only changes the running process when a convenient interruption happens (either because the process itself asks for I/O service or makes some other system call, or because some other interrupt outside the process's control means it must leave the CPU).

A pre-emptive scheduling algorithm does not wait for something else to interrupt the process on the CPU; rather, it makes sure to interrupt the running process when it decides it is time for some other process to get the CPU.

Pre-emptive scheduling algorithms are more expensive (time + processing) but they provide better service overall by preventing monopolisation of the CPU (especially if they use very efficient context switching, e.g. with substantial hardware support).

3.2.2.5 CPU-bound and I/O-bound Processes

If a process spends a lot of its time using the processor it is CPU bound. A CPU-bound process will tend to have larger service burst times because it has less need for OS services: it has the resources it needs and just needs to execute instructions to get work done. If a process spends a lot of its time doing I/O it is I/O bound. I/O-bound processes tend to have very short service burst times because no sooner do they get the CPU than they initiate another I/O call, which blocks them.

3.2.2.6 Starvation

If a scheduling algorithm could allow a situation to arise where a ready process never gets access to the CPU, it is said to allow starvation.
If a process is starved of access to the CPU it cannot run. This is totally unacceptable.

3.2.2.7 Interactions

                CPU bound                           I/O bound
Pre-emptive     Advantage of disallowing            No danger of monopolisation,
                monopolisation of the CPU;          so no advantage; still the
                disadvantage of increasing the      possible disadvantage of
                number of process switches.         increasing the number of
                                                    process switches.
Non-pre-        Disadvantage of allowing            No danger of monopolisation,
emptive         monopolisation of the CPU;          and the advantage of not
                advantage of not increasing the     increasing the number of
                number of process switches.         process switches.

A system may have mostly I/O-bound processes or mostly CPU-bound processes, or it may have a mixture. Depending on the profile of the system it is more or less advantageous to use one or other of the scheduling algorithm types. A system with a lot of CPU-bound processes is better served by pre-emptive algorithms; a system with mostly I/O-bound processes is better served by a non-pre-emptive algorithm.

3.2.2.8 Algorithm Comparison

To illustrate the following algorithms we use the following kind of benchmark data:

Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        3
2        2        6
3        4        4
4        6        5
5        8        2

E.g. process 4 arrives at time 6 and requires 5 units of execution time for this burst of activity. Wait time is the length of time spent waiting for service. Turnaround time is the total time spent in the system for this burst of service (i.e. wait time + service time). The NTT ratio is the Normalised Turnaround Time ratio, which is turnaround time divided by service time; this gives a good indication of the relative penalty incurred by each process under the algorithm in question, because it takes into account the amount of service sought when measuring turnaround. Note that, in reality, these service burst times are unknown to the scheduling algorithm.

3.2.3 First Come, First Served (FCFS)

A simple queue: processes get the CPU in the order they arrive in the ready queue. Non-pre-emptive (i.e. once a process gets served it runs to the end of its required service time without interruption). Consider the following example:

Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        3        0     3           1.00 (= 3/3)
2        2        6        1     7           1.17 (= 7/6)
3        4        4        5     9           2.25 (= 9/4)
4        6        5        7     12          2.40 (= 12/5)
5        8        2        10    12          6.00 (= 12/2)

FCFS performs well when all processes have similar service times. But when there is a mix of short processes behind long ones, the short processes in the queue may suffer (see process 3 below).

Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        1        0     1           1.00
2        1        100      0     100         1.00
3        2        1        99    100         100.00
4        3        100      99    199         1.99

Even in this extreme case FCFS performs OK for long processes (see processes 2 and 4 above). FCFS is fair in a simple-minded way, is a simple algorithm with low administration overhead (no extra process switches), and has no possibility of starvation. However, FCFS favours CPU-bound processes over I/O-bound ones because I/O-bound processes tend to need shorter bursts of service time; this leads to inefficient use of I/O devices. And in situations where there is a mix of long and short processing burst times, FCFS is unfair to short processes. Sometimes FCFS is combined with a priority system to avoid these problems.

3.2.4 Round Robin (RR)

FCFS is non-pre-emptive; RR is pre-emptive and thus avoids the problems of FCFS. The reason short jobs are penalised with FCFS is that they must wait until long jobs are finished, so we introduce time slices to level the playing field. The next process is the one that has been waiting longest (recently serviced processes go to the back of the queue). Pre-emptive (once a process gets service it runs either until it finishes or until a time limit is reached, whichever is sooner). RR is FCFS with time-slice clock interrupts.

With time slice = 1:

Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        3        1     4           1.33
2        2        6        10    16          2.67
3        4        4        9     13          3.25
4        6        5        9     14          2.80
5        8        2        5     7           3.50

With time slice = 4:

Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        3        0     3           1.00
2        2        6        9     15          2.50
3        4        4        3     7           1.75
4        6        5        9     14          2.80
5        8        2        10    12          6.00
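The FCFS and RR tables above can be checked mechanically. The following is a minimal sketch (the function and variable names are ours, not from the notes) of a queue-based simulator: quantum=None gives FCFS, and a number gives Round Robin with that time slice. It applies the tie rule discussed with the traces later (a new arrival joins the queue ahead of a process re-entering at the same instant); note that different tie rules can change RR results slightly.

```python
def simulate(procs, quantum=None):
    """procs: list of (pid, arrival, service).
    quantum=None -> FCFS; otherwise Round Robin with that time slice.
    Returns {pid: (wait, turnaround, ntt)}."""
    arrival = {p: a for p, a, s in procs}
    service = {p: s for p, a, s in procs}
    remaining = dict(service)
    order = sorted(procs, key=lambda x: x[1])       # arrival order
    queue, done, t, i = [], {}, 0, 0
    while len(done) < len(procs):
        while i < len(order) and order[i][1] <= t:  # admit arrivals up to t
            queue.append(order[i][0]); i += 1
        if not queue:                               # CPU idle: jump to next arrival
            t = order[i][1]
            continue
        pid = queue.pop(0)
        run = remaining[pid] if quantum is None else min(quantum, remaining[pid])
        t += run
        remaining[pid] -= run
        # new arrivals up to the new time join ahead of the pre-empted process
        while i < len(order) and order[i][1] <= t:
            queue.append(order[i][0]); i += 1
        if remaining[pid]:
            queue.append(pid)                       # back of the queue
        else:
            tt = t - arrival[pid]
            done[pid] = (tt - service[pid], tt, tt / service[pid])
    return done

bench = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
fcfs = simulate(bench)             # process 3 -> wait 5, turnaround 9, NTT 2.25
rr1 = simulate(bench, quantum=1)   # process 1 -> wait 1, turnaround 4
```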
3.2.4.1 Design Issue

How big is a slice? (I.e. what is the length of the time quantum?) Very short? Good: short processes move through more quickly. But many clock interrupts mean more process-switch overhead, which is bad. Smaller slices increase response time for typical interactions, and if slices are longer than the longest-running process then effectively you have FCFS. Guideline: the slice should be slightly bigger than the time needed for a typical interaction.

I/O-bound processes get a raw deal under RR: they tend not to use their full slice before leaving the CPU, waiting, and then re-joining the ready queue, whereas CPU-bound processes use their full slice and immediately re-join the ready queue. The result is poor performance for I/O-bound processes and thus poor I/O device use.

3.2.5 Shortest Process Next (SPN)

SPN is another way of avoiding the bias against short jobs in FCFS. RR was pre-emptive; SPN is non-pre-emptive, i.e. it doesn't force processes off the CPU. Instead, the job with the shortest expected processing burst time is selected next, i.e. short jobs jump the queue. The next process is the one that requires the least amount of processing time (this must be guessed; see later). Non-pre-emptive. As new processes arrive, the waiting processes are ranked again according to the processing time required. If a new process requires less processing time than the rest of the waiting processes then it gets the highest ranking of the waiting processes, and will therefore be served next once the running process leaves.

Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        3        0     3           1.00
2        2        6        1     7           1.17
3        4        4        7     11          2.75
4        6        5        9     14          2.80
5        8        2        1     3           1.50
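As a sketch (our own code, assuming, as these worked examples do, that the service times are known exactly), SPN can be implemented with a priority queue keyed on service time:

```python
import heapq

def spn(procs):
    """procs: list of (pid, arrival, service) -> {pid: (wait, turnaround, ntt)}"""
    arrivals = sorted(procs, key=lambda p: p[1])
    pool, done, t, i = [], {}, 0, 0
    while len(done) < len(procs):
        # admit everything that has arrived by time t into the ranked pool
        while i < len(arrivals) and arrivals[i][1] <= t:
            pid, arr, svc = arrivals[i]
            heapq.heappush(pool, (svc, arr, pid))   # shortest service first
            i += 1
        if not pool:                                # idle: jump to next arrival
            t = arrivals[i][1]
            continue
        svc, arr, pid = heapq.heappop(pool)
        t += svc                                    # non-pre-emptive: run to completion
        tt = t - arr
        done[pid] = (tt - svc, tt, tt / svc)
    return done

bench = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
results = spn(bench)   # process 5 -> wait 1, turnaround 3, NTT 1.5
```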
SPN gives better overall response time, but predictability is reduced. It is better for shorter jobs but risks starvation of longer jobs.

3.2.5.1 Design Issue

How do you estimate the future processing need of a process? Keep a running average of the processing bursts for each process and use it as a guess of the next burst:

    S(n+1) = (T(1) + T(2) + ... + T(n)) / n

or, equivalently, in incremental form:

    S(n+1) = (1/n) * T(n) + ((n-1)/n) * S(n)

where:
- S(n+1) is the average of the previous bursts, used as the estimate of the next burst;
- S(1) is the estimated value of the first burst (not calculated);
- S(n) is the estimate of the previous burst;
- T(i) is the actual processor execution time for the i-th burst;
- T(n) is the actual processor execution time for the last burst.

E.g. given 5 previous burst times (T(5) = most recent burst; T(1) = first burst):

T(5)  T(4)  T(3)  T(2)  T(1)
4     2     3     2     4
So n = 5, as there are 5 previous bursts. We can then calculate S(6) using the first formula:

    S(6) = (4 + 2 + 3 + 2 + 4) / 5 = 15/5 = 3 = estimated next burst time at time n+1 (i.e. at time 6).

Or: given that the estimate of the previous burst time (i.e. S(5), when n = 4) was calculated as

    S(5) = (2 + 3 + 2 + 4) / 4 = 11/4 = 2.75 = estimated 5th burst time,

and given that the actual burst time at time 5, T(5), was 4, then using the incremental form with n = 5:

    S(6) = (1/5 * 4) + (4/5 * 2.75) = 4/5 + 11/5 = 15/5 = 3.

This is the same answer as the first method, which means we can calculate a reasonable guess from less data. With the first method we must store all the previous burst times for all of the processes. With the second method it is only necessary to store the last estimate, the last burst time, and the number of bursts so far.

However, the guess still gives equal weight to each burst. It is better to give more weight to recent bursts, as the next one is likely to be more like them. Consider the following data (5 previous burst times):

T(5)  T(4)  T(3)  T(2)  T(1)
100   2     5     1     4
Here the average is (100 + 2 + 5 + 1 + 4) / 5 = 112/5 = 22.4. But 22.4 falls between the very low and very high burst times and so is not a good guess: this process has recently (at T(5)) had a very high burst time and so is more likely to behave the same way in the near future. The average of 22.4 does not reflect this. So, use the exponential average:

    S(n+1) = α*T(n) + (1 - α)*S(n), for some constant α between 0 and 1.

Expanding the recurrence, this is equivalent to:

    S(n+1) = α*T(n) + (1 - α)*α*T(n-1) + ... + (1 - α)^i * α*T(n-i) + ... + (1 - α)^n * S(1)

If α = 0.8:

    S(n+1) = 0.8*T(n) + 0.16*T(n-1) + 0.032*T(n-2) + ...

In other words, the last burst contributes 80% to the guess, the previous burst contributes 16%, its predecessor only contributes a negligible 3.2%, and so on: each successive term contributes a smaller amount to the next guess. So, with α = 0.8:

    S(6) = 0.8*T(5) + 0.2*S(5)

Then, for the example data above, if the previous guess S(5) was, say, 3 and the last burst was actually 100:

    S(6) = 80% of last burst + 20% of previous guess = (0.8 * 100) + (0.2 * 3) = 80 + 0.6 = 80.6

This is a much better estimate. Thus the older the observation, the less it affects the average. Higher values of α give a greater difference between successive terms.

3.2.6 Shortest Remaining Time (SRT)

SRT is the pre-emptive version of SPN. The job with the shortest remaining processing time is selected next, i.e. shortest-remaining-time jobs get the CPU immediately. The next process is the one that has the least amount of (guessed) processing time remaining. Pre-emptive. As new processes arrive, the processes (including the process in the CPU) are ranked again according to the remaining processing time required. If a new process requires less processing time than all other processes (including the running process) then it gets the highest ranking and is served next, i.e. if necessary the running process is pre-empted and the new process takes over.
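Both SPN and SRT depend on the burst estimate from 3.2.5.1. A minimal sketch of the three estimators (simple average, its incremental form, and the exponential average); the function names are ours:

```python
def simple_average(bursts):
    # equal weight to every observed burst: S(n+1) = (T(1) + ... + T(n)) / n
    return sum(bursts) / len(bursts)

def incremental_average(last_burst, last_estimate, n):
    # S(n+1) = (1/n)*T(n) + ((n-1)/n)*S(n); needs no stored burst history
    return last_burst / n + (n - 1) / n * last_estimate

def exponential_average(last_burst, last_estimate, alpha=0.8):
    # S(n+1) = alpha*T(n) + (1 - alpha)*S(n); recent bursts dominate the guess
    return alpha * last_burst + (1 - alpha) * last_estimate

print(simple_average([4, 2, 3, 2, 4]))   # 3.0  (first worked example)
print(incremental_average(4, 2.75, 5))   # 3.0  (same result from less data)
print(exponential_average(100, 3))       # 80.6 (second worked example)
```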
Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        3        0     3           1.00
2        2        6        7     13          2.17
3        4        4        0     4           1.00
4        6        5        9     14          2.80
5        8        2        0     2           1.00

Again an estimate of the remaining processing time is needed, and there is still a risk of starving longer processes. But SRT is better for interactive processes (very good NTT ratios).

3.2.7 Highest Response Ratio Next (HRRN)

Both SPN and SRT have very good performance but both risk starvation of processes. HRRN maintains good performance and avoids starvation altogether. The goal is to minimise the NTT ratio, so this algorithm keeps an eye on the NTT ratios so far: the process whose NTT is highest is given service at the next opportunity, which keeps the NTT ratio values down for all processes. HRRN estimates the NTT so far as:

    Estimated NTT so far = (w + s) / s

where w = time spent waiting so far and s = expected service time. The next process is the one that has the highest anticipated NTT. Non-pre-emptive. When the current process completes or is blocked, the scheduler chooses the process from the pool of candidates that has the highest anticipated NTT.

Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        3        0     3           1.00
2        2        6        1     7           1.17
3        4        4        5     9           2.25
4        6        5        9     14          2.80
5        8        2        5     7           3.50

Imagine a process requiring 2 units of service and another requiring 20. If neither has been waiting then their respective NTT ratios are equal: (0 + 2) / 2 = 1 and (0 + 20) / 20 = 1. However, as time goes on their ratios will rise, and the smaller process's ratio rises faster:

Time units passed  Ratio of short process   Ratio of long process
0                  (0 + 2) / 2  = 1         (0 + 20) / 20 = 1
1                  (1 + 2) / 2  = 1.5       (1 + 20) / 20 = 1.05
2                  (2 + 2) / 2  = 2         (2 + 20) / 20 = 1.1
3                  (3 + 2) / 2  = 2.5       (3 + 20) / 20 = 1.15
4                  (4 + 2) / 2  = 3         (4 + 20) / 20 = 1.2
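The ratio growth in the table above can be reproduced with a one-line function (our own code):

```python
def response_ratio(wait, service):
    # HRRN's figure of merit: (w + s) / s
    return (wait + service) / service

# a short job (s=2) against a long job (s=20), as waiting time w grows
for w in range(5):
    print(w, response_ratio(w, 2), response_ratio(w, 20))
```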
So shorter processes rise to the top of the ranking more quickly, and so win the competition sooner than longer processes. But what about the danger of starvation? There is none. Starvation tends to happen when new, shorter processes that have not waited jump ahead of waiting longer processes. However, with HRRN, if a long process has waited just 1 millisecond then its ratio will be greater than 1 (e.g. (w+s)/s = (1+100)/100 = 1.01), while a short process that has done no waiting will have a ratio of exactly 1 (e.g. (w+s)/s = (0+2)/2 = 1). This means that if a longer process is in competition with a newly arrived short process, the longer one wins. So a long process cannot be starved forever by a stream of new arrivals, and there is no danger of starvation. HRRN favours short jobs but also accounts for the time spent waiting so far, so longer jobs get through once they have waited long enough.

SPN, SRT, and HRRN all require a guess about the future processing needs of a process.

3.2.8 Multi-Level Feedback (MLF)
It is possible to maintain a few ready queues that operate under different rules. Waiting processes can be assigned to the different queues as required, and the queues can be given different priorities; the scheduler chooses the process from the head of the highest priority queue that contains waiting processes.

Here we try to favour shorter processes but at the same time avoid having to rely on guesswork about the processing time required by processes. Instead we depend on the amount of time a process has already spent executing as a measure of its length: instead of favouring short jobs we penalise long jobs (essentially the same thing).

There are several variations. In general there are a number of priority queues. When a process enters the system it joins the top priority queue (RQ0), and when it gets the CPU it is allocated n time units. If it doesn't complete, it is then assigned to the next lower queue (RQ1) where it will get m units, and so on. If a process is very long it may end up in the lowest priority queue. The scheduler deals with all processes in the higher queues before moving to the lower ones. Thus, longer processes drift down the queues and shorter ones are favoured. The different queues can be administered using different queuing policies, although RR is favoured. To counteract possible starvation of long processes, two strategies are employed:
Firstly, the CPU allocation can be increased as you go down the queues, e.g. RQ0 gets time slice (ts) = 1, RQ1 gets ts = 2, RQ2 gets ts = 4, and so on. This strategy gives longer processes a better opportunity of finishing earlier. But starvation is still possible.

A second improvement to avoid the danger of starvation involves allowing a process to ascend the priority queues based on its waiting time. If a process has been waiting a long time in a lower priority queue it can move to a higher priority queue. As time goes by a process will ascend the queues until it gets served; if necessary, even a very long process will eventually be treated on an equal footing with all other newly arriving processes.

Example: try running the following benchmark data on the following version of MLF. RQ0 is run on a round-robin basis with time slice 1 (2^0); RQ1 with time slice 2 (2^1); RQ2 with time slice 4 (2^2). A process that remains in RQn for a period of consecutive time equal to twice the time slice of RQn is moved up to join RQn-1.

Process  Arrival  Service  Wait  Turnaround  NTT
         time     time     time  time        ratio
1        0        13       23    36          2.77 (= 36/13)
2        2        15       21    36          2.40 (= 36/15)
3        4        4        8     12          3.00 (= 12/4)
4        6        3        4     7           2.33 (= 7/3)
5        7        2        5     7           3.50 (= 7/2)
6        8        1        1     2           2.00 (= 2/1)
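To make the demotion mechanism concrete, here is an illustrative sketch of that side only (promotion by waiting time and the competition between processes are omitted, and the function name is ours): the sequence of CPU bursts a single process of a given service demand receives as it descends queues with time slices 2^0, 2^1, 2^2, ...

```python
def mlf_bursts(service, levels=3):
    """Bursts received by one uncontested process descending MLF queues with
    time slices 2**0, 2**1, ..., then staying in the lowest queue."""
    bursts, level = [], 0
    while service > 0:
        slice_ = 2 ** level
        run = min(slice_, service)
        bursts.append(run)
        service -= run
        if level < levels - 1:
            level += 1          # demoted one queue after using its full slice
    return bursts

print(mlf_bursts(13))   # [1, 2, 4, 4, 2]: a demand of 13 served in pieces
print(mlf_bursts(1))    # [1]: a short process finishes in RQ0
```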
Key: R = running (for TS=1 from RQ0, TS=2 from RQ1, or TS=4 from RQ2); Q0, Q1, Q2 = waiting in RQ0, RQ1, RQ2 respectively; an up-arrow next to a process number indicates that the process moved up a queue.

[Trace grid not reproduced: for each time 0-37 it showed each process's state (running, or the queue in which it was waiting) and the contents of RQ0, RQ1, and RQ2, with the two long processes (1 and 2) repeatedly descending to RQ2 and being promoted back up while waiting.]
3.2.9 Appendix

All algorithms can be simulated according to the following steps:

1. Calculate the total service units to account for (= m) and draw up a grid with m+1 columns (the extra column is for the process number).
2. Enter the process numbers/names in the first column to assign one row to each process, and label the remaining columns from 0 to m-1.
3. Mark the arrival time for each process in the grid with *.
4. Every time the processor becomes available, determine the processes in the competition for the CPU at the start of the next available time slot. (Note that the processor becomes available, for non-pre-emptive algorithms, whenever the running process completes its service burst; for pre-emptive algorithms the CPU becomes available either when a time limit is reached (RR, MLF) or when a new process is favoured over the scheduled process (SRT).)
   i. First include newly arrived processes in the pool of candidates;
   ii. Then, if the process that has just stopped running is not finished and needs more time, include it in the pool of candidates. (The order of steps i and ii is important: sometimes the time of arrival in the pool of candidates affects a process's chance of selection.)
5. Decide which process is next by ordering the pool of candidates according to the algorithm, and remove that process from the pool.
6. Record that process as running, either for its entire service need (non-pre-emptive algorithms) or for the next n milliseconds, depending on the algorithm.
7. Record the other processes in the pool as waiting while the running process is running.
8. Repeat steps 4-7 until complete.
9. Make sure to mark wait times for each process from and including the time of arrival to the last millisecond before running (and any other pauses between runs).
10. Count wait times and record them in the table.
11. Calculate turnaround times (turnaround = wait time + service time).
12. Calculate the NTT ratio (AKA response ratio) = turnaround / service time.
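Following this recipe for the pre-emptive SRT case, a unit-by-unit simulator can be sketched as follows (our own code; service times are assumed known, and ties on remaining time are broken by arrival order):

```python
def srt(procs):
    """procs: list of (pid, arrival, service) -> {pid: (wait, turnaround, ntt)}"""
    arrival = {p: a for p, a, s in procs}
    service = {p: s for p, a, s in procs}
    remaining = dict(service)
    t, done = 0, {}
    while len(done) < len(procs):
        # step 4: the pool of candidates, including the running process
        ready = [p for p in remaining
                 if remaining[p] > 0 and arrival[p] <= t]
        if not ready:
            t += 1
            continue
        # step 5: rank by shortest remaining time (arrival order breaks ties)
        p = min(ready, key=lambda q: (remaining[q], arrival[q]))
        remaining[p] -= 1          # step 6: run for one time unit
        t += 1
        if remaining[p] == 0:      # steps 10-12 for the finished process
            tt = t - arrival[p]
            done[p] = (tt - service[p], tt, tt / service[p])
    return done

bench = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
res = srt(bench)   # process 3 -> wait 0, turnaround 4, NTT 1.0
```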
FIRST COME FIRST SERVED

        0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1      *R  R  R
2            *X  R  R  R  R  R  R
3                  *X  X  X  X  X  R  R  R  R
4                        *X  X  X  X  X  X  X  R  R  R  R  R
5                              *X  X  X  X  X  X  X  X  X  X  R  R
Queue   1     *2  2 *3  3  3  3  3  3  4  4  4  4  5  5  5  5  5
                        *4  4  4  4  5  5  5  5
                              *5  5

* = just arrived. The queue in column 0 is the state of the queue at the start of the first time instant; the queue rows show the queue contents (head first) at the start of each time instant.
Note that at time 2 below, process 1 is behind process 2 because process 2 is a new arrival at time 2 and process 1 tries to re-enter the queue at precisely the same time. In such cases the new arrival goes ahead of the process that has already had some service. In contrast, at time 4 process 3 is a new arrival but is placed behind process 2. This is because, while process 2 has had some service, it was already queueing at time 3, before process 3 arrived.

ROUND ROBIN; TIME SLICE = 1

        0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1      *R  R  X  R
2            *R  X  R  X  R  X  X  R  X  X  X  R  X  X  X  R
3                  *X  R  X  X  R  X  X  X  R  X  X  X  R
4                        *X  R  X  X  X  R  X  X  X  R  X  X  R  R
5                              *X  X  R  X  X  X  R
Queue  1*  1 2*  1  2  3  2  4  3  2  5  4  3  2  5  4  3  2  4  4
              1  2 3*  2 4*  3  2  5  4  3  2  5  4  3  2  4
                        3  2 5*  4  3  2  5  4  3  2  4
                              4  3  2  5  4  3  2
ROUND ROBIN; TIME SLICE = 4

        0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1      *R  R  R
2            *X  R  R  R  R  X  X  X  X  X  X  X  X  R  R
3                  *X  X  X  R  R  R  R
4                        *X  X  X  X  X  R  R  R  R  X  X  X  X  R
5                           *X  X  X  X  X  X  X  X  X  X  R  R
Queue  1*    2*  2 3*  3  3  3  4  4  4  4  2  2  2  2  5  5  4  4
                              4*  4  2  2  2  2  5  5  5  5  4  4
                                  2  5  5  5  5  4
                                 5*
In this non-pre-emptive algorithm the waiting pool is re-ranked every time a new process arrives, so that the shortest process is always ranked highest. The running process is NOT included in this re-ranking: the running process is not interrupted but is left to complete its burst.

SHORTEST PROCESS NEXT

        0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1      *R  R  R
2            *X  R  R  R  R  R  R
3                  *X  X  X  X  X  X  X  R  R  R  R
4                        *X  X  X  X  X  X  X  X  X  R  R  R  R  R
5                              *X  R  R
Pool   1*    2*  2 3*  3  3  3 5*  5  3  3  4  4  4  4
                        4*  4  3  3  4  4
                                4  4
In this pre-emptive algorithm the waiting pool is re-ranked every time a new process arrives, so that the process with the shortest remaining burst time is always ranked highest. The running process IS included in this re-ranking: the running process is interrupted if it is no longer the highest ranking process.

SHORTEST REMAINING TIME

        0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1      *R  R  R
2            *X  R  X  X  X  X  X  X  R  R  R  R  R
3                  *R  R  R  R
4                        *X  X  X  X  X  X  X  X  X  R  R  R  R  R
5                              *R  R
Pool   1*  1  1  2 3*  3  3  3 5*  5  2  2  2  2  2  4  4  4  4  4
             2*    2  2  2  2  2  2  4  4  4  4  4
                        4*  4  4  4

Here running processes are shown in the ranking while running.
In this non-pre-emptive algorithm the ranking of the waiting processes happens only when the currently running process completes and the processor becomes available. See the calculations below.

HIGHEST RESPONSE RATIO NEXT

        0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1      *R  R  R
2            *X  R  R  R  R  R  R
3                  *X  X  X  X  X  R  R  R  R
4                        *X  X  X  X  X  X  X  X  X  R  R  R  R  R
5                              *X  X  X  X  X  R  R
Pool    1     2                    3           5     4
                                   4           4
                                   5

The choice of next process occurs when the running process completes (or becomes blocked). There is no competition for 1 or 2; then 3, 4 and 5 compete and 3 wins; then 4 and 5 compete and 5 wins; finally there is no competition for 4.
The CPU becomes available at time 9 with processes 3, 4, and 5 all eager:

Process 3: (w+s)/s = (5+4)/4 = 9/4 = 2.25 = response ratio (aka NTT). The 5 in (5+4) is the time spent by process 3 waiting up to the beginning of time point 9: process 3 arrived at the beginning of time point 4 and so waited during time points 4, 5, 6, 7, 8 = 5 wait periods.
Process 4: (w+s)/s = (3+5)/5 = 8/5 = 1.6 = response ratio (aka NTT).
Process 5: (w+s)/s = (1+2)/2 = 3/2 = 1.5 = response ratio (aka NTT).

So the ranking by HRRN is 3, 4, 5 and process 3 gets the CPU.

The CPU becomes available at time 13 with processes 4 and 5 competing:

Process 4: (w+s)/s = (7+5)/5 = 12/5 = 2.4 = response ratio (aka NTT).
Process 5: (w+s)/s = (5+2)/2 = 7/2 = 3.5 = response ratio (aka NTT).

So the ranking by HRRN is 5, 4 and process 5 gets the CPU.

Note that the relative ranking of processes 4 and 5 swaps between the competition at time 9 and the one at time 13. This is because, although both have done the same amount of extra waiting (4 time units), that extra 4 units of wait is a larger proportion of the service time for process 5 than it is for process 4: process 5 adds 4/2 = 2 (i.e. extra wait / service time) to its NTT, whereas process 4 only adds 4/5 = 0.8.
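The two competitions above can be checked with a small helper (our own code): pick the candidate with the highest response ratio (w + s) / s at the moment the CPU becomes free.

```python
def hrrn_pick(candidates, now):
    """candidates: list of (pid, arrival, service); returns the winning pid."""
    def ratio(c):
        pid, arrival, service = c
        wait = now - arrival
        return (wait + service) / service
    return max(candidates, key=ratio)[0]

# CPU free at time 9: processes 3, 4, 5 compete (ratios 2.25, 1.6, 1.5)
print(hrrn_pick([(3, 4, 4), (4, 6, 5), (5, 8, 2)], now=9))    # 3
# CPU free at time 13: processes 4 and 5 compete (ratios 2.4, 3.5)
print(hrrn_pick([(4, 6, 5), (5, 8, 2)], now=13))              # 5
```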