
3.2 CPU Scheduling

The aim of CPU scheduling is to share out CPU access so that the objectives of the system are met: response time (e.g. response to user commands), throughput (i.e. number of processes completed over a fixed time), and processor efficiency (ideally, constant use of the processor without ill effects on processes). We concentrate on the uniprocessor case.

Types of Scheduling

There are three types of scheduling, categorised according to how often they are done:

1. Long-term scheduling (LTS)
2. Medium-term scheduling (MTS)
3. Short-term scheduling (STS)

Long-term scheduling (LTS)

LTS involves decisions about adding more processes to the pool of processes waiting for CPU access; that is, LTS controls the level of multiprogramming. Once the LTS admits a process it is added either to the STS queue (as a ready process) or to the MTS queue (as a suspended ready process). Considerations:

How many processes?
o More processes = less time each (so a limit is needed).
o If idle time exceeds some threshold then the LTS may add more processes.
Which process next?

Medium-term scheduling (MTS)

MTS involves decisions about adding to the number of processes that are partially or fully in main memory; that is, deciding which of the suspended processes (under OS control) are admitted to the ready queues. This is largely a memory management issue.

Short-term scheduling (STS)

Also called the scheduler or dispatcher, STS involves the decision about which process gets the CPU next. It is invoked each time an event happens that interrupts the current process or presents an opportunity to pre-empt it, e.g. clock interrupts (time slice up, scheduled event), I/O interrupts, system calls, and miscellaneous signals/interrupts.

Here we concentrate on STS; LTS and MTS are mostly memory management issues.

ST Scheduling Algorithms: background concepts

The objective of short-term scheduling algorithms is to allocate processor time so as to optimise one or more aspects of system behaviour. First, some background concepts.

Evaluation Criteria

Scheduling algorithms are evaluated against criteria of two types, user oriented and system oriented; of these, some are performance related and some are not.

User oriented: the behaviour of the system from the user's point of view, e.g. response time for an interactive user, or predictability (provision of the same service under different conditions).

System oriented: focus on effective and efficient use of the CPU, e.g. throughput (the rate at which processes are completed).

Performance related: quantitative and easily measured, e.g. response time, throughput.

Not performance related: qualitative and hard to measure, e.g. predictability.

Priorities

Many systems assign a priority to processes; schedulers choose higher priority processes over lower ones. So there may be a separate ready queue for each priority: RQ0, RQ1, RQ2, etc. The scheduler will process RQ0 first according to some algorithm (e.g. FCFS), then RQ1, and so on. Sometimes this can starve low priority processes, so often a scheme is introduced where a process's priority rises the longer it waits.

Processing burst time
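As a rough sketch (not part of the original notes), the priority-queue scheme with rising priorities might look like the following; the Process class, the number of queues, and the age_threshold value are illustrative assumptions:

```python
from collections import deque

# Sketch of per-priority ready queues RQ0..RQ2 with a rising-priority
# (aging) rule to counteract starvation of low-priority processes.

class Process:
    def __init__(self, name):
        self.name = name
        self.waited = 0          # time spent waiting so far

def pick_next(queues, age_threshold=5):
    """Serve RQ0 first, then RQ1, etc.; long waiters move up a queue."""
    # Aging: promote any process that has waited past the threshold.
    for level in range(1, len(queues)):
        for p in list(queues[level]):
            if p.waited >= age_threshold:
                queues[level].remove(p)
                queues[level - 1].append(p)
                p.waited = 0     # reset after promotion
    # Selection: FCFS within the highest-priority non-empty queue.
    for q in queues:
        if q:
            return q.popleft()
    return None
```

Each call to pick_next models one dispatch decision; a real scheduler would also update waited for every process left in the queues.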

When a process is ready to run and gains access to the CPU it will have a small amount of its overall work to do, and will then either be forced out of the CPU by some external interrupt or leave voluntarily because it is waiting for some service (e.g. I/O). That small amount of work is called its processing burst. Typically it is between 1 and 5 milliseconds of work on the CPU.

Pre-emptive and non-pre-emptive scheduling algorithms

A non-pre-emptive scheduling algorithm only changes the running process when a convenient interruption happens: either the process itself asks for I/O service or makes some other system call, or some other interrupt happens outside the process's control that means it must leave the CPU. A pre-emptive scheduling algorithm does not wait for something else to interrupt the process on the CPU; rather, it makes sure to interrupt the running process when it decides it is time for some other process to get the CPU. Pre-emptive scheduling algorithms are more expensive (time + processing) but they provide better service overall by preventing monopolisation of the CPU (especially if they use very efficient context switching, e.g. with plenty of hardware support).

CPU-bound and I/O-bound processes

If a process spends a lot of its time using the processor it is CPU bound. A CPU-bound process will tend to have larger service burst times because it has less need for OS services: it has the resources it needs and just needs to execute instructions to get work done. If a process spends a lot of its time doing I/O it is I/O bound. I/O-bound processes tend to have very short service burst times because no sooner do they get the CPU than they initiate another I/O call, which blocks them.

Starvation

If a scheduling algorithm could possibly allow a situation to arise where a ready process never gets access to the CPU, it is said to allow starvation.
If a process is starved of access to the CPU it cannot run; this is totally unacceptable.

Interactions

Pre-emptive + CPU bound: advantage of disallowing monopolisation of the CPU; disadvantage of increasing the number of process switches.
Pre-emptive + I/O bound: no danger of monopolisation, so no advantage; still has the possible disadvantage of increasing the number of process switches.
Non-pre-emptive + CPU bound: disadvantage of allowing monopolisation of the CPU; advantage of not increasing the number of process switches.
Non-pre-emptive + I/O bound: no danger of monopolisation, and the advantage of not increasing the number of process switches.

A system may have mostly I/O-bound processes or mostly CPU-bound processes, or a mixture. Depending on the profile of the system it is more or less advantageous to use one or other of the scheduling algorithm types: a system with a lot of CPU-bound processes is better served by pre-emptive algorithms, while a system with mostly I/O-bound processes is better served by a non-pre-emptive algorithm.

Algorithm comparison

To illustrate the following algorithms we use benchmark data of this kind:

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio
4        6             5             -          -                -

e.g. process 4 arrives at time 6 and requires 5 units of execution time for this burst of activity. Wait time is the length of time spent waiting for service. Turnaround time is the total time spent in the system for this burst of service (i.e. wait time + service time). NTT ratio is the Normalised Turnaround Time ratio, which is turnaround time divided by service time; this gives a good indication of the relative penalty incurred by each process under the algorithm in question, because it takes into account the amount of service sought when measuring turnaround. Note that, in reality, these service burst times are unknown to the scheduling algorithm.

First Come, First Served (FCFS)

A simple queue: processes get the CPU in the order they arrive in the ready queue. Non-pre-emptive (i.e. once a process gets served it runs to the end of its required service time without interruption). Consider the following example:

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio
1        0             3             0          3                3/3 = 1.00
2        2             6             1          7                7/6 = 1.17
3        4             4             5          9                9/4 = 2.25
4        6             5             7          12               12/5 = 2.40
5        8             2             10         12               12/2 = 6.00

FCFS performs well when all processes have similar service times, but when there is a mix of short processes behind long ones the short processes in the queue may suffer (see process 3 below).

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio

Even in this extreme case FCFS performs OK for long processes (see processes 2 & 4 above). FCFS is fair in a simple-minded way, and it is a simple algorithm with low administration overhead (no extra process switches) and no possibility of starvation. However, FCFS favours CPU-bound processes over I/O-bound ones, because I/O-bound processes tend to need shorter bursts of service time; this leads to inefficient use of I/O devices. In situations where there is a mix of long and short processing burst times, FCFS is unfair to short processes. Sometimes FCFS is combined with a priority system to avoid its problems.

Round Robin (RR)

FCFS is non-pre-emptive; RR is pre-emptive and thus avoids the problems of FCFS. The reason short jobs are penalised under FCFS is that they must wait till long jobs are finished, so we introduce time slices to level the playing field. The next process is the one that has been waiting longest (recently serviced processes go to the back of the queue). Pre-emptive (once a process gets service it runs either until it finishes or until a time limit is reached, whichever is sooner). RR is FCFS with time-slice clock interrupts.

With time slice = 1:

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio
1        0             3             1          4                4/3 = 1.33
2        2             6             10         16               16/6 = 2.67
3        4             4             9          13               13/4 = 3.25
4        6             5             9          14               14/5 = 2.80
5        8             2             5          7                7/2 = 3.50
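The FCFS figures can be checked with a short simulation. This is a sketch, not part of the original notes; the fcfs helper and the dictionary layout of the benchmark data (arrival, service) are assumptions:

```python
# Sketch: compute wait, turnaround, and NTT ratio under FCFS for the
# benchmark data used in these notes, as {pid: (arrival, service)}.
procs = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}

def fcfs(procs):
    clock = 0
    results = {}
    # Serve strictly in arrival order (the ready queue is a simple FIFO).
    for pid, (arrival, service) in sorted(procs.items(), key=lambda kv: kv[1][0]):
        start = max(clock, arrival)
        wait = start - arrival
        turnaround = wait + service        # turnaround = wait + service
        results[pid] = (wait, turnaround, turnaround / service)
        clock = start + service
    return results

for pid, (w, tt, ntt) in fcfs(procs).items():
    print(pid, w, tt, round(ntt, 2))
```

Running this reproduces the table above, including the heavy NTT penalty (6.00) paid by the short process 5 stuck behind longer jobs.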

With time slice = 4:

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio
1        0             3             0          3                3/3 = 1.00
2        2             6             9          15               15/6 = 2.50
3        4             4             3          7                7/4 = 1.75
4        6             5             9          14               14/5 = 2.80
5        8             2             9          11               11/2 = 5.50
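The round-robin tables can be reproduced the same way. The sketch below (not from the original notes) follows the queueing convention used in the worked grids later on: an arrival that coincides with the end of a slice joins the ready queue ahead of the pre-empted process:

```python
from collections import deque

# Sketch: round-robin over the benchmark data {pid: (arrival, service)}.
procs = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}

def round_robin(procs, quantum):
    remaining = {p: s for p, (a, s) in procs.items()}
    arrivals = sorted(procs, key=lambda p: procs[p][0])
    queue, finish, t, i = deque(), {}, 0, 0
    while remaining:
        # Admit processes that have arrived by time t.
        while i < len(arrivals) and procs[arrivals[i]][0] <= t:
            queue.append(arrivals[i]); i += 1
        if not queue:
            t += 1; continue
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        # Arrivals during (and at the end of) this slice enter first...
        while i < len(arrivals) and procs[arrivals[i]][0] <= t:
            queue.append(arrivals[i]); i += 1
        if remaining[p]:
            queue.append(p)        # ...then the pre-empted process rejoins.
        else:
            del remaining[p]
            finish[p] = t
    return {p: finish[p] - procs[p][0] for p in finish}   # turnaround times
```

With quantum = 1 this yields the turnaround times 4, 16, 13, 14, 7; with quantum = 4 it yields 3, 15, 7, 14, 11, matching the two tables.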

Design issue

How big is a slice? (i.e. what is the length of the time quantum?) Very short? Good: short processes move through more quickly. But many clock interrupts mean more process-switch overhead, which is bad, and smaller slices increase response time for typical interactions. If slices are longer than the longest-running process then effectively you have FCFS. Guideline: the slice should be slightly bigger than the time needed for a typical interaction. I/O-bound processes get a raw deal: they tend not to use their full slice before leaving the CPU, waiting, and then re-joining the ready queue, whereas CPU-bound processes use their full slice and immediately re-join the ready queue. So I/O-bound processes perform poorly, and thus I/O devices are used poorly.

Shortest Process Next (SPN)

SPN is another way of avoiding the bias against short jobs in FCFS. RR was pre-emptive; SPN is non-pre-emptive, i.e. it doesn't force processes off the CPU. Instead the job with the shortest expected processing burst time is selected next, i.e. short jobs jump the queue. The next process is the one that requires the least amount of processing time (this must be guessed; see later). As new processes arrive, the waiting processes are ranked again according to the processing time required. If a new process requires less processing time than the rest of the waiting processes then it gets the highest ranking of the waiting processes and will therefore be served next once the running process leaves.

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio
1        0             3             0          3                3/3 = 1.00
2        2             6             1          7                7/6 = 1.17
3        4             4             7          11               11/4 = 2.75
4        6             5             9          14               14/5 = 2.80
5        8             2             1          3                3/2 = 1.50

Better overall response time, but predictability is reduced. Better for shorter jobs, but risks starvation of longer jobs.

Design issue

How do you estimate the future processing need of a process? Keep a running average of the processing bursts for each process and use it as a guess of the next burst:

S(n+1) = (T1 + T2 + ... + Tn) / n

or, equivalently,

S(n+1) = (1/n) Tn + ((n-1)/n) Sn

where:
S(n+1) is the average of the previous bursts, used as the estimate of the next burst;
S1 is the estimated value of the first burst (not calculated);
Sn is the estimate of the previous burst;
Ti is the actual processor execution time for the i-th burst;
Tn is the actual processor execution time for the last burst.

E.g. suppose there are 5 previous burst times (T5 = most recent burst; T1 = first burst), where the first four bursts total 11 and T5 = 4.

So n = 5, as there are 5 previous bursts. We can then calculate S6 using the first formula:

S6 = (T1 + T2 + T3 + T4 + T5) / 5 = 15/5 = 3 = estimated next burst time (i.e. for burst 6).

Alternatively, given that the estimate of the previous burst (i.e. S5, calculated when n = 4) was

S5 = (T1 + T2 + T3 + T4) / 4 = 11/4 = 2.75 = estimated 5th burst time,

and given that the actual burst time at time 5 was T5 = 4, then using the second formula with n = 5:

S6 = (1/5 * 4) + (4/5 * 2.75) = 4/5 + 11/5 = 15/5 = 3.

This is the same answer as the first method, which means we can calculate the same guess from less data. In the first method we must store all the previous burst times for all of the processes; with the second method it is only necessary to store the last estimate, the last burst time, and the number of bursts so far. However, the guess still gives equal weight to each burst. It is better to give more weight to recent bursts, as the next one is likely to be more like them. Consider the following data: 5 previous burst times (T5 = most recent burst; T1 = first burst).
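The two equivalent calculations above can be sketched as follows (helper names are illustrative; note the worked example only gives the burst totals, T1+...+T4 = 11 and T5 = 4, not the individual burst values):

```python
# Sketch of the two equivalent running-average burst estimates.

def average_estimate(bursts):
    """S(n+1) = (T1 + ... + Tn) / n -- needs every past burst stored."""
    return sum(bursts) / len(bursts)

def incremental_estimate(last_burst, prev_estimate, n):
    """S(n+1) = (1/n)*Tn + ((n-1)/n)*Sn -- needs only the last burst,
    the previous estimate, and the number of bursts so far."""
    return (1 / n) * last_burst + ((n - 1) / n) * prev_estimate

s5 = 11 / 4                          # estimate after four bursts: 2.75
s6 = incremental_estimate(4, s5, 5)  # (1/5)*4 + (4/5)*2.75, i.e. about 3
```

The incremental form trades storage for a tiny bit of arithmetic, which is the point made in the text.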

Here the average is (T1 + ... + T5) / 5 = 112/5 = 22.4. But 22.4 falls between the very low and very high burst times and so is not a good guess: this process has recently (at T5) had a very high burst time and so is more likely to behave the same way in the near future. The average of 22.4 does not reflect this. So, use an exponential average:

S(n+1) = α Tn + (1 - α) Sn, for α some constant between 0 and 1.

This is equivalent to:

S(n+1) = α Tn + (1 - α) α T(n-1) + ... + (1 - α)^i α T(n-i) + ... + (1 - α)^n S1

If α = 0.8:

S(n+1) = 0.8 Tn + 0.16 T(n-1) + 0.032 T(n-2) + ...

In other words the last burst contributes 80% to the guess, the previous burst contributes 16%, its predecessor only contributes a negligible 3.2%, and so on: each successive term contributes a smaller amount to the next guess. So, with S(n+1) = α Tn + (1 - α) Sn and α = 0.8:

S6 = 0.8 * T5 + 0.2 * S5

Then, for the example data above, if the previous guess was say S5 = 3 and the last burst was actually 100, then:

S6 = 80% of last burst + 20% of previous guess = (0.8 * 100) + (0.2 * 3) = 80 + 0.6 = 80.6

a much better estimate. Thus the older the observation, the less it affects the average. Higher values of α give a greater difference between successive terms.

Shortest Remaining Time (SRT)

SRT is the pre-emptive version of SPN. The job with the shortest remaining processing time is selected next, i.e. shortest-remaining-time jobs get the CPU immediately. The next process is the one that has the least amount of (guessed) processing time remaining. Pre-emptive. As new processes arrive, the processes (including the process in the CPU) are ranked again according to the remaining processing time required. If a new process requires less processing time than all other processes (including the running process) then it gets the highest ranking and is thus served next; i.e. if necessary the running process is pre-empted and the new process takes over.
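The exponential average calculation can be sketched as follows (the helper name is an assumption; the default α = 0.8 follows the text):

```python
# Sketch of the exponential average S(n+1) = a*Tn + (1 - a)*Sn, applied
# to the example above (previous guess S5 = 3, actual last burst T5 = 100).

def exp_average(last_burst, prev_estimate, a=0.8):
    return a * last_burst + (1 - a) * prev_estimate

s6 = exp_average(100, 3)   # 0.8*100 + 0.2*3, i.e. about 80.6
```

Unlike the plain running average (22.4 on the same data), the exponential average jumps to track the recent very long burst.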

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio
1        0             3             0          3                3/3 = 1.00
2        2             6             7          13               13/6 = 2.17
3        4             4             0          4                4/4 = 1.00
4        6             5             9          14               14/5 = 2.80
5        8             2             0          2                2/2 = 1.00

Again an estimate of the remaining processing time is needed, and there is still a risk of starving longer processes. But it is better for interactive processes (very good NTT ratios).

Highest Response Ratio Next (HRRN)

Both SPN and SRT have very good performance, but both risk starvation of processes. HRRN maintains good performance and avoids starvation altogether. The goal is to minimise the NTT ratio, so this algorithm keeps an eye on the NTT ratios so far; if a process's NTT is the highest, it is given service at the next opportunity, which keeps the NTT ratio values down for all processes. HRRN estimates the NTT so far as:

Estimated NTT so far = (w + s) / s

where w = time spent waiting so far and s = expected service time. The next process is the one that has the highest anticipated NTT. Non-pre-emptive: when the current process completes or is blocked, the scheduler chooses the process from the pool of candidates that has the highest anticipated NTT.

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio
1        0             3             0          3                3/3 = 1.00
2        2             6             1          7                7/6 = 1.17
3        4             4             5          9                9/4 = 2.25
4        6             5             9          14               14/5 = 2.80
5        8             2             5          7                7/2 = 3.50

Imagine a process requiring 2 units of service and another requiring 20. If neither has been waiting then their respective NTT ratios are equal: (0 + 2)/2 = 1 and (0 + 20)/20 = 1. However, as time goes on their ratios will rise (and notice that the smaller process's ratio rises faster):

Time units passed  Ratio of short process  Ratio of long process
0                  (0 + 2) / 2 = 1         (0 + 20) / 20 = 1
1                  (1 + 2) / 2 = 1.5       (1 + 20) / 20 = 1.05
2                  (2 + 2) / 2 = 2         (2 + 20) / 20 = 1.1
3                  (3 + 2) / 2 = 2.5       (3 + 20) / 20 = 1.15

4                  (4 + 2) / 2 = 3         (4 + 20) / 20 = 1.2
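The growth of the two ratios can be tabulated in a couple of lines (a sketch; the helper name is an assumption):

```python
# Sketch: growth of the HRRN response ratio (w + s)/s for a short job
# (s = 2) and a long job (s = 20), as in the table above.

def response_ratio(wait, service):
    return (wait + service) / service

for w in range(5):
    print(w, response_ratio(w, 2), response_ratio(w, 20))
```

The short job's ratio climbs by 0.5 per unit waited, the long job's by only 0.05, which is why short jobs float to the top of the ranking faster.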

So shorter processes will rise to the top of the ranking more quickly, and so will win the competition sooner than longer processes. But what about the danger of starvation? There is none. Starvation tends to happen when new, shorter processes that have not waited jump ahead of waiting longer processes. However, with HRRN, if a long process has waited just 1 millisecond then its ratio will be > 1 (e.g. (w+s)/s = (1+100)/100 = 1.01), while a short process that has done no waiting will have a ratio of exactly 1 (e.g. (w+s)/s = (0+2)/2 = 1). This means that if a longer process is in competition with a newly arrived short process, the longer process will win; a long process cannot be starved forever by a stream of new arrivals, so there is no danger of starvation. HRRN favours short jobs but also accounts for the time spent waiting so far, so longer jobs get through once they have waited long enough. Note that SPN, SRT, and HRRN all require a guess about the future processing needs of a process.

Multi-Level Feedback (MLF)

It is possible to maintain a few ready queues that operate under different rules. Waiting processes can be assigned to the different queues as required and the queues can be given different priorities; the scheduler chooses the process from the head of the highest priority queue that contains waiting processes. Here we try to favour shorter processes but at the same time avoid having to rely on guesswork about the processing time required by processes. Instead we depend on the amount of time a process has already spent executing as a measure of its length: instead of favouring short jobs we penalise long jobs (essentially similar things). There are several variations. In general there are a number of priority queues. When a process enters the system it joins the top priority queue (RQ0), and when it gets the CPU it is allocated n time units. If it doesn't complete, it is then assigned to the next lower queue (RQ1), where it will get m units, and so on. If a process is very long it may end up in the lowest priority queue. The scheduler deals with all processes in the higher queues before moving to the lower ones. Thus, longer processes drift down the queues and shorter ones are favoured. The different queues can be administered using different queuing policies, although RR is favoured. To counteract possible starvation of long processes, two strategies are employed:

Firstly, the CPU allocation can be increased as you go down the queues, e.g. RQ0 gets time slice (ts) = 1, RQ1 gets ts = 2, RQ2 gets ts = 4, and so on. This strategy gives longer processes a better opportunity of finishing earlier, but starvation is still possible. A second improvement, which avoids the danger of starvation, involves allowing a process to ascend the priority queues based on its waiting time: if a process has been waiting a long time in a lower priority queue it can move to a higher priority queue. As time goes by a process will ascend the queues until it gets served; if necessary, even a very long process will eventually be treated on an equal footing in a queue with all other newly arriving processes.

Example: try running the following benchmark data on the following version of MLF. RQ0 is run on a round-robin basis with time slice 1 (2^0), RQ1 with time slice 2 (2^1), and RQ2 with time slice 4 (2^2). A process that remains in RQn for a period of consecutive time equal to twice the time slice of RQn is moved up to join RQn-1.

Process  Arrival time  Service time  Wait time  Turnaround time  NTT ratio
1        -             13            -          -                -/13 (2.15)
2        -             16            -          -                -/16 (1.93)
3        -             4             -          -                -/4 (3.5)
4        -             3             -          -                -/3 (2.33)
5        -             2             -          -                -/2 (3.5)
6        -             1             -          -                -/1 (2)

Key: R = running (for ts = 1 from RQ0, ts = 2 from RQ1, or ts = 4 from RQ2); Q0 = waiting in RQ0, Q1 = waiting in RQ1, etc. A marker against process 2 indicates the point at which it moved up a queue.

[Trace grid: one row per process showing its R/Q0/Q1/Q2 state at each time unit, followed by rows showing the contents of RQ0, RQ1, and RQ2 over time.]
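The basic MLF mechanics (demotion on slice expiry, slice of 2^i at level i) can be sketched as follows. This is not the full scheme above: the promotion-after-waiting rule is omitted for brevity, and the process data in the test are illustrative:

```python
from collections import deque

# Simplified multi-level feedback sketch: RQi runs round robin with a
# time slice of 2**i; a process that does not finish its slice drops
# to the next lower queue. Promotion by waiting time is NOT modelled.

def mlf(procs, levels=3):
    """procs: {pid: (arrival, service)}; returns finish times."""
    queues = [deque() for _ in range(levels)]
    remaining = {p: s for p, (a, s) in procs.items()}
    arrivals = sorted(procs, key=lambda p: procs[p][0])
    finish, t, i = {}, 0, 0
    while remaining:
        while i < len(arrivals) and procs[arrivals[i]][0] <= t:
            queues[0].append(arrivals[i]); i += 1   # new processes join RQ0
        level = next((l for l, q in enumerate(queues) if q), None)
        if level is None:
            t += 1; continue
        p = queues[level].popleft()
        run = min(2 ** level, remaining[p])         # RQi gets a slice of 2**i
        t += run
        remaining[p] -= run
        while i < len(arrivals) and procs[arrivals[i]][0] <= t:
            queues[0].append(arrivals[i]); i += 1
        if remaining[p]:
            # Unfinished processes drop to the next lower queue.
            queues[min(level + 1, levels - 1)].append(p)
        else:
            del remaining[p]; finish[p] = t
    return finish
```

For example, a single process needing 7 units runs for 1, then 2, then 4 units as it drifts down through RQ0, RQ1, and RQ2.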

3.2.9 Appendix

All algorithms can be simulated according to the following steps:

1. Calculate the total number of service units to account for (= m) and draw up a grid with m+1 columns (the extra column is for the process number).
2. Enter the process numbers/names in the first column, to assign one row to each process, and label the remaining columns from 0 to m-1.
3. Mark the arrival time of each process in the grid with *.
4. Every time the processor becomes available, determine the processes in the competition for the CPU at the start of the next available time slot. (Note that for non-pre-emptive algorithms the processor becomes available whenever the running process completes its service burst; for pre-emptive algorithms the CPU becomes available either when a time limit is reached (RR, MLF) or when a new process is favoured over the scheduled process (SRT).)
   i. First include newly arrived processes in the pool of candidates;
   ii. Then, if the process that has just stopped running is not finished and needs more time, include it in the pool of candidates. (The order of steps i and ii is important: sometimes the time of arrival in the pool of candidates affects a process's chance of selection.)
5. Decide which process is next by ordering the pool of candidates according to the algorithm, and remove that process from the pool.
6. Record that process as running, either for its entire service need (non-pre-emptive algorithms) or for the next n milliseconds, depending on the algorithm.
7. Record the other processes in the pool as waiting while the running process is running.
8. Repeat steps 4-7 until complete.
9. Make sure to mark wait times for each process from and including the time of arrival to the last millisecond before running (and any other pauses between runs).
10. Count wait times and record them in the table.
11. Calculate turnaround times (turnaround = wait time + service time).
12. Calculate the NTT ratio (AKA response ratio) = turnaround / service time.

FIRST COME FIRST SERVED

Time  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1    *R  R  R
2          *X  R  R  R  R  R  R
3                *X  X  X  X  X  R  R  R  R
4                      *X  X  X  X  X  X  X  R  R  R  R  R
5                            *X  X  X  X  X  X  X  X  X  X  R  R

* = just arrived.

Note that at time 2 below, process 1 is behind process 2 because process 2 is a new arrival at time 2 and process 1 tries to re-enter the queue at precisely the same time. In these cases the new arrival goes ahead of the process that has already had some service. In contrast, at time 4 process 3 is a new arrival but is placed behind process 2; this is because, while process 2 has had some service, it was already queueing at time 3, before process 3 arrived.

ROUND ROBIN; TIME SLICE = 1

Time  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1    *R  R  X  R
2          *R  X  R  X  R  X  X  R  X  X  X  R  X  X  X  R
3                *X  R  X  X  R  X  X  X  R  X  X  X  R
4                      *X  R  X  X  X  R  X  X  X  R  X  X  R  R
5                            *X  X  R  X  X  X  R

ROUND ROBIN; TIME SLICE = 4

Time  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1    *R  R  R
2          *X  R  R  R  R  X  X  X  X  X  X  X  X  R  R
3                *X  X  X  R  R  R  R
4                      *X  X  X  X  X  R  R  R  R  X  X  X  X  R
5                            *X  X  X  X  X  X  X  X  X  R  R

In this non-pre-emptive algorithm the waiting pool is re-ranked every time a new process arrives, so that the shortest process is always ranked highest. The running process is NOT included in this re-ranking: the running process is not interrupted but left to complete its burst.

SHORTEST PROCESS NEXT

Time  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1    *R  R  R
2          *X  R  R  R  R  R  R
3                *X  X  X  X  X  X  X  R  R  R  R
4                      *X  X  X  X  X  X  X  X  X  R  R  R  R  R
5                            *X  R  R

In this pre-emptive algorithm the pool is re-ranked every time a new process arrives, so that the process with the shortest remaining burst time is always ranked highest. The running process IS included in this re-ranking: the running process is interrupted if it is no longer the highest-ranking process.

SHORTEST REMAINING TIME

Time  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1    *R  R  R
2          *X  R  X  X  X  X  X  X  R  R  R  R  R
3                *R  R  R  R
4                      *X  X  X  X  X  X  X  X  X  R  R  R  R  R
5                            *R  R
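The SRT grid above can be reproduced with a unit-time simulation. This is a sketch, not from the original notes; ties in remaining time are broken by earlier arrival, matching the worked example:

```python
# Sketch: shortest-remaining-time over the benchmark data. At every time
# unit the ready process with the least remaining service runs, so a new
# shorter arrival pre-empts the running process automatically.
procs = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}

def srt(procs):
    remaining = {p: s for p, (a, s) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:
            t += 1; continue
        # Rank by remaining time, then by arrival time.
        p = min(ready, key=lambda q: (remaining[q], procs[q][0]))
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            del remaining[p]; finish[p] = t
    return {p: finish[p] - procs[p][0] for p in finish}   # turnaround
```

This yields the turnaround times 3, 13, 4, 14, 2, matching the SRT table: processes 3 and 5 run the moment they arrive, at the expense of the longer processes 2 and 4.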

In this non-pre-emptive algorithm the ranking of the waiting processes happens only when the currently running process completes and the processor becomes available. See the calculations below.

HIGHEST RESPONSE RATIO NEXT

Time  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19
1    *R  R  R
2          *X  R  R  R  R  R  R
3                *X  X  X  X  X  R  R  R  R
4                      *X  X  X  X  X  X  X  X  X  R  R  R  R  R
5                            *X  X  X  X  X  R  R

The choice of the next process occurs when the running process completes (or becomes blocked). There is no competition for 1 or 2; then 3, 4 and 5 compete and 3 wins; then 4 and 5 compete and 5 wins; finally there is no competition for 4.

The CPU becomes available at time 9 with processes 3, 4, and 5 all eager:

Process 3: (w+s)/s = (5+4)/4 = 9/4 = 2.25 = response ratio (aka NTT). The 5 in (5+4) is the time spent by process 3 waiting up to the beginning of time point 9: process 3 arrived at the beginning of time point 4 and so waited during time points 4, 5, 6, 7, 8 = 5 wait periods.
Process 4: (w+s)/s = (3+5)/5 = 8/5 = 1.6 = response ratio (aka NTT).
Process 5: (w+s)/s = (1+2)/2 = 3/2 = 1.5 = response ratio (aka NTT).

So the HRRN ranking is 3, 4, 5 and process 3 gets the CPU.

The CPU becomes available at time 13 with processes 4 and 5 competing:

Process 4: (w+s)/s = (7+5)/5 = 12/5 = 2.4 = response ratio (aka NTT).
Process 5: (w+s)/s = (5+2)/2 = 7/2 = 3.5 = response ratio (aka NTT).

So the HRRN ranking is 5, 4 and process 5 gets the CPU.

Note that the relative ranking of processes 4 and 5 swaps between the competition at time 9 and that at time 13. This is because, although both have done the same amount of extra waiting (4 time units), that extra 4 units of wait is a larger proportion of the service time for process 5 than it is for process 4: process 5 adds 4/2 = 2 (i.e. (extra wait)/(service time)) to its NTT, whereas process 4 only adds 4/5 = 0.8.


More information

Operating System: Scheduling

Operating System: Scheduling Process Management Operating System: Scheduling OS maintains a data structure for each process called Process Control Block (PCB) Information associated with each PCB: Process state: e.g. ready, or waiting

More information

CPU Scheduling. Core Definitions

CPU Scheduling. Core Definitions CPU Scheduling General rule keep the CPU busy; an idle CPU is a wasted CPU Major source of CPU idleness: I/O (or waiting for it) Many programs have a characteristic CPU I/O burst cycle alternating phases

More information

ICS 143 - Principles of Operating Systems

ICS 143 - Principles of Operating Systems ICS 143 - Principles of Operating Systems Lecture 5 - CPU Scheduling Prof. Nalini Venkatasubramanian nalini@ics.uci.edu Note that some slides are adapted from course text slides 2008 Silberschatz. Some

More information

CPU Scheduling Outline

CPU Scheduling Outline CPU Scheduling Outline What is scheduling in the OS? What are common scheduling criteria? How to evaluate scheduling algorithms? What are common scheduling algorithms? How is thread scheduling different

More information

A Comparative Study of CPU Scheduling Algorithms

A Comparative Study of CPU Scheduling Algorithms IJGIP Journal homepage: www.ifrsa.org A Comparative Study of CPU Scheduling Algorithms Neetu Goel Research Scholar,TEERTHANKER MAHAVEER UNIVERSITY Dr. R.B. Garg Professor Delhi School of Professional Studies

More information

Process Scheduling. Process Scheduler. Chapter 7. Context Switch. Scheduler. Selection Strategies

Process Scheduling. Process Scheduler. Chapter 7. Context Switch. Scheduler. Selection Strategies Chapter 7 Process Scheduling Process Scheduler Why do we even need to a process scheduler? In simplest form, CPU must be shared by > OS > Application In reality, [multiprogramming] > OS : many separate

More information

Analysis and Comparison of CPU Scheduling Algorithms

Analysis and Comparison of CPU Scheduling Algorithms Analysis and Comparison of CPU Scheduling Algorithms Pushpraj Singh 1, Vinod Singh 2, Anjani Pandey 3 1,2,3 Assistant Professor, VITS Engineering College Satna (MP), India Abstract Scheduling is a fundamental

More information

Overview of Presentation. (Greek to English dictionary) Different systems have different goals. What should CPU scheduling optimize?

Overview of Presentation. (Greek to English dictionary) Different systems have different goals. What should CPU scheduling optimize? Overview of Presentation (Greek to English dictionary) introduction to : elements, purpose, goals, metrics lambda request arrival rate (e.g. 200/second) non-preemptive first-come-first-served, shortest-job-next

More information

Scheduling. Yücel Saygın. These slides are based on your text book and on the slides prepared by Andrew S. Tanenbaum

Scheduling. Yücel Saygın. These slides are based on your text book and on the slides prepared by Andrew S. Tanenbaum Scheduling Yücel Saygın These slides are based on your text book and on the slides prepared by Andrew S. Tanenbaum 1 Scheduling Introduction to Scheduling (1) Bursts of CPU usage alternate with periods

More information

Job Scheduling Model

Job Scheduling Model Scheduling 1 Job Scheduling Model problem scenario: a set of jobs needs to be executed using a single server, on which only one job at a time may run for theith job, we have an arrival timea i and a run

More information

OS OBJECTIVE QUESTIONS

OS OBJECTIVE QUESTIONS OS OBJECTIVE QUESTIONS Which one of the following is Little s formula Where n is the average queue length, W is the time that a process waits 1)n=Lambda*W 2)n=Lambda/W 3)n=Lambda^W 4)n=Lambda*(W-n) Answer:1

More information

Linux Process Scheduling Policy

Linux Process Scheduling Policy Lecture Overview Introduction to Linux process scheduling Policy versus algorithm Linux overall process scheduling objectives Timesharing Dynamic priority Favor I/O-bound process Linux scheduling algorithm

More information

Operating Systems. III. Scheduling. http://soc.eurecom.fr/os/

Operating Systems. III. Scheduling. http://soc.eurecom.fr/os/ Operating Systems Institut Mines-Telecom III. Scheduling Ludovic Apvrille ludovic.apvrille@telecom-paristech.fr Eurecom, office 470 http://soc.eurecom.fr/os/ Outline Basics of Scheduling Definitions Switching

More information

Readings for this topic: Silberschatz/Galvin/Gagne Chapter 5

Readings for this topic: Silberschatz/Galvin/Gagne Chapter 5 77 16 CPU Scheduling Readings for this topic: Silberschatz/Galvin/Gagne Chapter 5 Until now you have heard about processes and memory. From now on you ll hear about resources, the things operated upon

More information

Chapter 5 Process Scheduling

Chapter 5 Process Scheduling Chapter 5 Process Scheduling CPU Scheduling Objective: Basic Scheduling Concepts CPU Scheduling Algorithms Why Multiprogramming? Maximize CPU/Resources Utilization (Based on Some Criteria) CPU Scheduling

More information

Scheduling Algorithms

Scheduling Algorithms Scheduling Algorithms List Pros and Cons for each of the four scheduler types listed below. First In First Out (FIFO) Simplicity FIFO is very easy to implement. Less Overhead FIFO will allow the currently

More information

2. is the number of processes that are completed per time unit. A) CPU utilization B) Response time C) Turnaround time D) Throughput

2. is the number of processes that are completed per time unit. A) CPU utilization B) Response time C) Turnaround time D) Throughput Import Settings: Base Settings: Brownstone Default Highest Answer Letter: D Multiple Keywords in Same Paragraph: No Chapter: Chapter 5 Multiple Choice 1. Which of the following is true of cooperative scheduling?

More information

CPU Scheduling. Multitasking operating systems come in two flavours: cooperative multitasking and preemptive multitasking.

CPU Scheduling. Multitasking operating systems come in two flavours: cooperative multitasking and preemptive multitasking. CPU Scheduling The scheduler is the component of the kernel that selects which process to run next. The scheduler (or process scheduler, as it is sometimes called) can be viewed as the code that divides

More information

Scheduling. Scheduling. Scheduling levels. Decision to switch the running process can take place under the following circumstances:

Scheduling. Scheduling. Scheduling levels. Decision to switch the running process can take place under the following circumstances: Scheduling Scheduling Scheduling levels Long-term scheduling. Selects which jobs shall be allowed to enter the system. Only used in batch systems. Medium-term scheduling. Performs swapin-swapout operations

More information

A Group based Time Quantum Round Robin Algorithm using Min-Max Spread Measure

A Group based Time Quantum Round Robin Algorithm using Min-Max Spread Measure A Group based Quantum Round Robin Algorithm using Min-Max Spread Measure Sanjaya Kumar Panda Department of CSE NIT, Rourkela Debasis Dash Department of CSE NIT, Rourkela Jitendra Kumar Rout Department

More information

CPU Scheduling 101. The CPU scheduler makes a sequence of moves that determines the interleaving of threads.

CPU Scheduling 101. The CPU scheduler makes a sequence of moves that determines the interleaving of threads. CPU Scheduling CPU Scheduling 101 The CPU scheduler makes a sequence of moves that determines the interleaving of threads. Programs use synchronization to prevent bad moves. but otherwise scheduling choices

More information

Lecture Outline Overview of real-time scheduling algorithms Outline relative strengths, weaknesses

Lecture Outline Overview of real-time scheduling algorithms Outline relative strengths, weaknesses Overview of Real-Time Scheduling Embedded Real-Time Software Lecture 3 Lecture Outline Overview of real-time scheduling algorithms Clock-driven Weighted round-robin Priority-driven Dynamic vs. static Deadline

More information

Multiprocessor Scheduling and Scheduling in Linux Kernel 2.6

Multiprocessor Scheduling and Scheduling in Linux Kernel 2.6 Multiprocessor Scheduling and Scheduling in Linux Kernel 2.6 Winter Term 2008 / 2009 Jun.-Prof. Dr. André Brinkmann Andre.Brinkmann@uni-paderborn.de Universität Paderborn PC² Agenda Multiprocessor and

More information

W4118 Operating Systems. Instructor: Junfeng Yang

W4118 Operating Systems. Instructor: Junfeng Yang W4118 Operating Systems Instructor: Junfeng Yang Outline Advanced scheduling issues Multilevel queue scheduling Multiprocessor scheduling issues Real-time scheduling Scheduling in Linux Scheduling algorithm

More information

Chapter 5: CPU Scheduling. Operating System Concepts 8 th Edition

Chapter 5: CPU Scheduling. Operating System Concepts 8 th Edition Chapter 5: CPU Scheduling Silberschatz, Galvin and Gagne 2009 Chapter 5: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Operating

More information

Module 6. Embedded System Software. Version 2 EE IIT, Kharagpur 1

Module 6. Embedded System Software. Version 2 EE IIT, Kharagpur 1 Module 6 Embedded System Software Version 2 EE IIT, Kharagpur 1 Lesson 30 Real-Time Task Scheduling Part 2 Version 2 EE IIT, Kharagpur 2 Specific Instructional Objectives At the end of this lesson, the

More information

REDUCING TIME: SCHEDULING JOB. Nisha Yadav, Nikita Chhillar, Neha jaiswal

REDUCING TIME: SCHEDULING JOB. Nisha Yadav, Nikita Chhillar, Neha jaiswal Journal Of Harmonized Research (JOHR) Journal Of Harmonized Research in Engineering 1(2), 2013, 45-53 ISSN 2347 7393 Original Research Article REDUCING TIME: SCHEDULING JOB Nisha Yadav, Nikita Chhillar,

More information

CPU Scheduling. CSC 256/456 - Operating Systems Fall 2014. TA: Mohammad Hedayati

CPU Scheduling. CSC 256/456 - Operating Systems Fall 2014. TA: Mohammad Hedayati CPU Scheduling CSC 256/456 - Operating Systems Fall 2014 TA: Mohammad Hedayati Agenda Scheduling Policy Criteria Scheduling Policy Options (on Uniprocessor) Multiprocessor scheduling considerations CPU

More information

Scheduling. Monday, November 22, 2004

Scheduling. Monday, November 22, 2004 Scheduling Page 1 Scheduling Monday, November 22, 2004 11:22 AM The scheduling problem (Chapter 9) Decide which processes are allowed to run when. Optimize throughput, response time, etc. Subject to constraints

More information

4. Fixed-Priority Scheduling

4. Fixed-Priority Scheduling Simple workload model 4. Fixed-Priority Scheduling Credits to A. Burns and A. Wellings The application is assumed to consist of a fixed set of tasks All tasks are periodic with known periods This defines

More information

Weight-based Starvation-free Improvised Round-Robin (WSIRR) CPU Scheduling Algorithm

Weight-based Starvation-free Improvised Round-Robin (WSIRR) CPU Scheduling Algorithm International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-4, Special Issue-1 E-ISSN: 2347-2693 Weight-based Starvation-free Improvised Round-Robin (WSIRR) CPU Scheduling

More information

OPERATING SYSTEM - VIRTUAL MEMORY

OPERATING SYSTEM - VIRTUAL MEMORY OPERATING SYSTEM - VIRTUAL MEMORY http://www.tutorialspoint.com/operating_system/os_virtual_memory.htm Copyright tutorialspoint.com A computer can address more memory than the amount physically installed

More information

Konzepte von Betriebssystem-Komponenten. Linux Scheduler. Valderine Kom Kenmegne Valderinek@hotmail.com. Proseminar KVBK Linux Scheduler Valderine Kom

Konzepte von Betriebssystem-Komponenten. Linux Scheduler. Valderine Kom Kenmegne Valderinek@hotmail.com. Proseminar KVBK Linux Scheduler Valderine Kom Konzepte von Betriebssystem-Komponenten Linux Scheduler Kenmegne Valderinek@hotmail.com 1 Contents: 1. Introduction 2. Scheduler Policy in Operating System 2.1 Scheduling Objectives 2.2 Some Scheduling

More information

Real-Time Systems Prof. Dr. Rajib Mall Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur

Real-Time Systems Prof. Dr. Rajib Mall Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Real-Time Systems Prof. Dr. Rajib Mall Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Lecture No. # 26 Real - Time POSIX. (Contd.) Ok Good morning, so let us get

More information

A Priority based Round Robin CPU Scheduling Algorithm for Real Time Systems

A Priority based Round Robin CPU Scheduling Algorithm for Real Time Systems A Priority based Round Robin CPU Scheduling Algorithm for Real Time Systems Ishwari Singh Rajput Department of Computer Science and Engineering Amity School of Engineering and Technology, Amity University,

More information

CS4410 - Fall 2008 Homework 2 Solution Due September 23, 11:59PM

CS4410 - Fall 2008 Homework 2 Solution Due September 23, 11:59PM CS4410 - Fall 2008 Homework 2 Solution Due September 23, 11:59PM Q1. Explain what goes wrong in the following version of Dekker s Algorithm: CSEnter(int i) inside[i] = true; while(inside[j]) inside[i]

More information

Real-Time Scheduling 1 / 39

Real-Time Scheduling 1 / 39 Real-Time Scheduling 1 / 39 Multiple Real-Time Processes A runs every 30 msec; each time it needs 10 msec of CPU time B runs 25 times/sec for 15 msec C runs 20 times/sec for 5 msec For our equation, A

More information

Analysis of Job Scheduling Algorithms in Cloud Computing

Analysis of Job Scheduling Algorithms in Cloud Computing Analysis of Job Scheduling s in Cloud Computing Rajveer Kaur 1, Supriya Kinger 2 1 Research Fellow, Department of Computer Science and Engineering, SGGSWU, Fatehgarh Sahib, India, Punjab (140406) 2 Asst.Professor,

More information

A Review on Load Balancing In Cloud Computing 1

A Review on Load Balancing In Cloud Computing 1 www.ijecs.in International Journal Of Engineering And Computer Science ISSN:2319-7242 Volume 4 Issue 6 June 2015, Page No. 12333-12339 A Review on Load Balancing In Cloud Computing 1 Peenaz Pathak, 2 Er.Kamna

More information

A LECTURE NOTE ON CSC 322 OPERATING SYSTEM I DR. S. A. SODIYA

A LECTURE NOTE ON CSC 322 OPERATING SYSTEM I DR. S. A. SODIYA A LECTURE NOTE ON CSC 322 OPERATING SYSTEM I BY DR. S. A. SODIYA 1 SECTION ONE 1.0 INTRODUCTION TO OPERATING SYSTEMS 1.1 DEFINITIONS OF OPERATING SYSTEMS An operating system (commonly abbreviated OS and

More information

Contributions to Gang Scheduling

Contributions to Gang Scheduling CHAPTER 7 Contributions to Gang Scheduling In this Chapter, we present two techniques to improve Gang Scheduling policies by adopting the ideas of this Thesis. The first one, Performance- Driven Gang Scheduling,

More information

ò Paper reading assigned for next Thursday ò Lab 2 due next Friday ò What is cooperative multitasking? ò What is preemptive multitasking?

ò Paper reading assigned for next Thursday ò Lab 2 due next Friday ò What is cooperative multitasking? ò What is preemptive multitasking? Housekeeping Paper reading assigned for next Thursday Scheduling Lab 2 due next Friday Don Porter CSE 506 Lecture goals Undergrad review Understand low-level building blocks of a scheduler Understand competing

More information

Linux scheduler history. We will be talking about the O(1) scheduler

Linux scheduler history. We will be talking about the O(1) scheduler CPU Scheduling Linux scheduler history We will be talking about the O(1) scheduler SMP Support in 2.4 and 2.6 versions 2.4 Kernel 2.6 Kernel CPU1 CPU2 CPU3 CPU1 CPU2 CPU3 Linux Scheduling 3 scheduling

More information

Syllabus MCA-404 Operating System - II

Syllabus MCA-404 Operating System - II Syllabus MCA-404 - II Review of basic concepts of operating system, threads; inter process communications, CPU scheduling criteria, CPU scheduling algorithms, process synchronization concepts, critical

More information

Operating System Tutorial

Operating System Tutorial Operating System Tutorial OPERATING SYSTEM TUTORIAL Simply Easy Learning by tutorialspoint.com tutorialspoint.com i ABOUT THE TUTORIAL Operating System Tutorial An operating system (OS) is a collection

More information

Introduction Disks RAID Tertiary storage. Mass Storage. CMSC 412, University of Maryland. Guest lecturer: David Hovemeyer.

Introduction Disks RAID Tertiary storage. Mass Storage. CMSC 412, University of Maryland. Guest lecturer: David Hovemeyer. Guest lecturer: David Hovemeyer November 15, 2004 The memory hierarchy Red = Level Access time Capacity Features Registers nanoseconds 100s of bytes fixed Cache nanoseconds 1-2 MB fixed RAM nanoseconds

More information

A Comparative Performance Analysis of Load Balancing Algorithms in Distributed System using Qualitative Parameters

A Comparative Performance Analysis of Load Balancing Algorithms in Distributed System using Qualitative Parameters A Comparative Performance Analysis of Load Balancing Algorithms in Distributed System using Qualitative Parameters Abhijit A. Rajguru, S.S. Apte Abstract - A distributed system can be viewed as a collection

More information

Scheduling for QoS Management

Scheduling for QoS Management Scheduling for QoS Management Domenico Massimo Parrucci Condello isti information science Facoltà and di Scienze technology e Tecnologie institute 1/number 1 Outline What is Queue Management and Scheduling?

More information

Efficient Parallel Processing on Public Cloud Servers Using Load Balancing

Efficient Parallel Processing on Public Cloud Servers Using Load Balancing Efficient Parallel Processing on Public Cloud Servers Using Load Balancing Valluripalli Srinath 1, Sudheer Shetty 2 1 M.Tech IV Sem CSE, Sahyadri College of Engineering & Management, Mangalore. 2 Asso.

More information

The International Journal Of Science & Technoledge (ISSN 2321 919X) www.theijst.com

The International Journal Of Science & Technoledge (ISSN 2321 919X) www.theijst.com THE INTERNATIONAL JOURNAL OF SCIENCE & TECHNOLEDGE Efficient Parallel Processing on Public Cloud Servers using Load Balancing Manjunath K. C. M.Tech IV Sem, Department of CSE, SEA College of Engineering

More information

LECTURE - 1 INTRODUCTION TO QUEUING SYSTEM

LECTURE - 1 INTRODUCTION TO QUEUING SYSTEM LECTURE - 1 INTRODUCTION TO QUEUING SYSTEM Learning objective To introduce features of queuing system 9.1 Queue or Waiting lines Customers waiting to get service from server are represented by queue and

More information

Scheduling policy. ULK3e 7.1. Operating Systems: Scheduling in Linux p. 1

Scheduling policy. ULK3e 7.1. Operating Systems: Scheduling in Linux p. 1 Scheduling policy ULK3e 7.1 Goals fast process response time good throughput for background jobs avoidance of process starvation reconciliation of needs of low- and high-priority processes Operating Systems:

More information

CS414 SP 2007 Assignment 1

CS414 SP 2007 Assignment 1 CS414 SP 2007 Assignment 1 Due Feb. 07 at 11:59pm Submit your assignment using CMS 1. Which of the following should NOT be allowed in user mode? Briefly explain. a) Disable all interrupts. b) Read the

More information

APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM

APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM 152 APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM A1.1 INTRODUCTION PPATPAN is implemented in a test bed with five Linux system arranged in a multihop topology. The system is implemented

More information

Real-Time Scheduling (Part 1) (Working Draft) Real-Time System Example

Real-Time Scheduling (Part 1) (Working Draft) Real-Time System Example Real-Time Scheduling (Part 1) (Working Draft) Insup Lee Department of Computer and Information Science School of Engineering and Applied Science University of Pennsylvania www.cis.upenn.edu/~lee/ CIS 41,

More information

Survey on Job Schedulers in Hadoop Cluster

Survey on Job Schedulers in Hadoop Cluster IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661, p- ISSN: 2278-8727Volume 15, Issue 1 (Sep. - Oct. 2013), PP 46-50 Bincy P Andrews 1, Binu A 2 1 (Rajagiri School of Engineering and Technology,

More information

Chapter 6 Congestion Control and Resource Allocation

Chapter 6 Congestion Control and Resource Allocation Chapter 6 Congestion Control and Resource Allocation 6.3 TCP Congestion Control Additive Increase/Multiplicative Decrease (AIMD) o Basic idea: repeatedly increase transmission rate until congestion occurs;

More information

This tutorial will take you through step by step approach while learning Operating System concepts.

This tutorial will take you through step by step approach while learning Operating System concepts. About the Tutorial An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component

More information

Performance Comparison of RTOS

Performance Comparison of RTOS Performance Comparison of RTOS Shahmil Merchant, Kalpen Dedhia Dept Of Computer Science. Columbia University Abstract: Embedded systems are becoming an integral part of commercial products today. Mobile

More information

Priority-Driven Scheduling

Priority-Driven Scheduling Priority-Driven Scheduling Advantages of Priority-Driven Scheduling Priority-driven scheduling is easy to implement. It does not require the information on the release times and execution times of the

More information

10.04.2008. Thomas Fahrig Senior Developer Hypervisor Team. Hypervisor Architecture Terminology Goals Basics Details

10.04.2008. Thomas Fahrig Senior Developer Hypervisor Team. Hypervisor Architecture Terminology Goals Basics Details Thomas Fahrig Senior Developer Hypervisor Team Hypervisor Architecture Terminology Goals Basics Details Scheduling Interval External Interrupt Handling Reserves, Weights and Caps Context Switch Waiting

More information

3. Scheduling issues. Common approaches /1. Common approaches /2. Common approaches /3. 2012/13 UniPD / T. Vardanega 23/01/2013. Real-Time Systems 1

3. Scheduling issues. Common approaches /1. Common approaches /2. Common approaches /3. 2012/13 UniPD / T. Vardanega 23/01/2013. Real-Time Systems 1 Common approaches /1 3. Scheduling issues Clock-driven (time-driven) scheduling Scheduling decisions are made beforehand (off line) and carried out at predefined time instants The time instants normally

More information

Ready Time Observations

Ready Time Observations VMWARE PERFORMANCE STUDY VMware ESX Server 3 Ready Time Observations VMware ESX Server is a thin software layer designed to multiplex hardware resources efficiently among virtual machines running unmodified

More information

Convenience: An OS makes a computer more convenient to use. Efficiency: An OS allows the computer system resources to be used in an efficient manner.

Convenience: An OS makes a computer more convenient to use. Efficiency: An OS allows the computer system resources to be used in an efficient manner. Introduction to Operating System PCSC-301 (For UG students) (Class notes and reference books are required to complete this study) Release Date: 27.12.2014 Operating System Objectives and Functions An OS

More information

Efficiency of Batch Operating Systems

Efficiency of Batch Operating Systems Efficiency of Batch Operating Systems a Teodor Rus rus@cs.uiowa.edu The University of Iowa, Department of Computer Science a These slides have been developed by Teodor Rus. They are copyrighted materials

More information

Chapter 2: OS Overview

Chapter 2: OS Overview Chapter 2: OS Overview CmSc 335 Operating Systems 1. Operating system objectives and functions Operating systems control and support the usage of computer systems. a. usage users of a computer system:

More information

Operating Systems OBJECTIVES 7.1 DEFINITION. Chapter 7. Note:

Operating Systems OBJECTIVES 7.1 DEFINITION. Chapter 7. Note: Chapter 7 OBJECTIVES Operating Systems Define the purpose and functions of an operating system. Understand the components of an operating system. Understand the concept of virtual memory. Understand the

More information

Windows Server Performance Monitoring

Windows Server Performance Monitoring Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly

More information

Design and performance evaluation of Advanced Priority Based Dynamic Round Robin Scheduling Algorithm (APBDRR)

Design and performance evaluation of Advanced Priority Based Dynamic Round Robin Scheduling Algorithm (APBDRR) International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-4, Special Issue-1 E-ISSN: 2347-2693 Design and performance evaluation of Advanced Priority Based Dynamic Round

More information

Lecture 3 Theoretical Foundations of RTOS

Lecture 3 Theoretical Foundations of RTOS CENG 383 Real-Time Systems Lecture 3 Theoretical Foundations of RTOS Asst. Prof. Tolga Ayav, Ph.D. Department of Computer Engineering Task States Executing Ready Suspended (or blocked) Dormant (or sleeping)

More information

ò Scheduling overview, key trade-offs, etc. ò O(1) scheduler older Linux scheduler ò Today: Completely Fair Scheduler (CFS) new hotness

ò Scheduling overview, key trade-offs, etc. ò O(1) scheduler older Linux scheduler ò Today: Completely Fair Scheduler (CFS) new hotness Last time Scheduling overview, key trade-offs, etc. O(1) scheduler older Linux scheduler Scheduling, part 2 Don Porter CSE 506 Today: Completely Fair Scheduler (CFS) new hotness Other advanced scheduling

More information

Operating System Aspects. Real-Time Systems. Resource Management Tasks

Operating System Aspects. Real-Time Systems. Resource Management Tasks Operating System Aspects Chapter 2: Basics Chapter 3: Multimedia Systems Communication Aspects and Services Multimedia Applications and Communication Multimedia Transfer and Control Protocols Quality of

More information

Maximizing the number of users in an interactive video-ondemand. Citation Ieee Transactions On Broadcasting, 2002, v. 48 n. 4, p.

Maximizing the number of users in an interactive video-ondemand. Citation Ieee Transactions On Broadcasting, 2002, v. 48 n. 4, p. Title Maximizing the number of users in an interactive video-ondemand system Author(s) Bakiras, S; Li, VOK Citation Ieee Transactions On Broadcasting, 2002, v. 48 n. 4, p. 281-292 Issued Date 2002 URL

More information

Comparison between scheduling algorithms in RTLinux and VxWorks

Comparison between scheduling algorithms in RTLinux and VxWorks Comparison between scheduling algorithms in RTLinux and VxWorks Linköpings Universitet Linköping 2006-11-19 Daniel Forsberg (danfo601@student.liu.se) Magnus Nilsson (magni141@student.liu.se) Abstract The

More information

Performance Analysis of Load Balancing Algorithms in Distributed System

Performance Analysis of Load Balancing Algorithms in Distributed System Advance in Electronic and Electric Engineering. ISSN 2231-1297, Volume 4, Number 1 (2014), pp. 59-66 Research India Publications http://www.ripublication.com/aeee.htm Performance Analysis of Load Balancing

More information

Load Balancing in Distributed System. Prof. Ananthanarayana V.S. Dept. Of Information Technology N.I.T.K., Surathkal

Load Balancing in Distributed System. Prof. Ananthanarayana V.S. Dept. Of Information Technology N.I.T.K., Surathkal Load Balancing in Distributed System Prof. Ananthanarayana V.S. Dept. Of Information Technology N.I.T.K., Surathkal Objectives of This Module Show the differences between the terms CPU scheduling, Job

More information

TCP over Multi-hop Wireless Networks * Overview of Transmission Control Protocol / Internet Protocol (TCP/IP) Internet Protocol (IP)

TCP over Multi-hop Wireless Networks * Overview of Transmission Control Protocol / Internet Protocol (TCP/IP) Internet Protocol (IP) TCP over Multi-hop Wireless Networks * Overview of Transmission Control Protocol / Internet Protocol (TCP/IP) *Slides adapted from a talk given by Nitin Vaidya. Wireless Computing and Network Systems Page

More information

Linux O(1) CPU Scheduler. Amit Gud amit (dot) gud (at) veritas (dot) com http://amitgud.tk

Linux O(1) CPU Scheduler. Amit Gud amit (dot) gud (at) veritas (dot) com http://amitgud.tk Linux O(1) CPU Scheduler Amit Gud amit (dot) gud (at) veritas (dot) com http://amitgud.tk April 27, 2005 Agenda CPU scheduler basics CPU scheduler algorithms overview Linux CPU scheduler goals What is

More information

Synchronization. Todd C. Mowry CS 740 November 24, 1998. Topics. Locks Barriers

Synchronization. Todd C. Mowry CS 740 November 24, 1998. Topics. Locks Barriers Synchronization Todd C. Mowry CS 740 November 24, 1998 Topics Locks Barriers Types of Synchronization Mutual Exclusion Locks Event Synchronization Global or group-based (barriers) Point-to-point tightly

More information