A STUDY OF TASK SCHEDULING IN MULTIPROCESSOR ENVIRONMENT Ranjit Rajak 1, C. P. Katti 2, Nidhi Rajak 3




1 Department of Computer Science & Applications, Dr. H. S. Gour Central University, Sagar, India, ranjit.jnu@gmail.com
2 School of Computer and System Sciences, Jawaharlal Nehru University, New Delhi, India, cpkatti@mail.jnu.ac.in
3 School of Computing Science & Engineering, Galgotias University, Gr. Noida, India, nidhi.bathre@gmail.com

Abstract
A multiprocessor system is a computer system with more than one processor. Task scheduling in a multiprocessor environment is an important area of research, as it has a number of applications in scientific and commercial problems. The objective of task scheduling is to minimize the total completion time of a given application program, which is represented by a Directed Acyclic Graph (DAG). Task scheduling may be classified into static task scheduling and dynamic task scheduling. In this paper, we study the various types of task scheduling and their properties, as well as performance metrics for task scheduling.

Keywords: Task scheduling, DAG, Speedup, Parallel processing.

I. INTRODUCTION
Parallel computing is an important research area in Computer Science. It involves a number of problems that do not arise on sequential machines, such as designing parallel algorithms for an application program, dividing an application program into subtasks, synchronizing and coordinating communication, and scheduling the tasks onto the processors. Parallel computing represents the next generation of computing and has made an impact on areas ranging from scientific and engineering applications to commercial applications. Scheduling [1] a set of dependent or independent tasks for parallel execution on a set of multiple processors is an important computational problem in which a large problem or application is divided into a set of subtasks, and these subtasks are distributed among the processors.
The allocation of subtasks to processors and the definition of their order of execution is called task scheduling. Task scheduling is an NP-complete problem even in its simple case [2] and in some restricted cases [3,4,5,6,7]. It arises in a number of applications, from scientific to engineering problems. Figure 1 [8] shows the layout of the transformation from an application program to task scheduling. Here, an application program is divided into a number of subtasks that are represented by a Directed Acyclic Graph. The subtasks are allocated to the multiple processors while maintaining the precedence constraints [9] among them.

Fig. 1. Transformation from an application program to task scheduling.

II. TASK SCHEDULING
The major objective of task scheduling in a multiprocessor environment is to minimize the execution time of the tasks. There are a number of issues in task scheduling, such as whether communication time has to be considered and whether the multiprocessor environment is heterogeneous or homogeneous. Here, heterogeneous means that the processors differ from one another, and homogeneous means that all the processors are identical. The generalized model of a task scheduler [10] consists of:
Task Queue: It contains all the incoming tasks.
Scheduler: It works simultaneously with the processors and schedules the arriving tasks to the dispatch queue of the respective processor.
Dispatch Queues: Each one acts as a mediator between the scheduler and a processor; every processor is associated with one dispatch queue.

Fig. 2. Task scheduler model.

There are mainly three components [11] of the execution of task scheduling:
Performance of the processors.
Mapping of the tasks onto the processors.
Execution order of the tasks on the processors.
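The scheduler model just described (task queue, scheduler, dispatch queues) can be sketched in a few lines of Python. The round-robin dispatch rule used below is an illustrative assumption for the sketch, since [10] does not fix a particular dispatch policy:

```python
from collections import deque

class SchedulerModel:
    """Sketch of the generalized task scheduler model: one task queue,
    a scheduler, and one dispatch queue per processor."""

    def __init__(self, num_processors):
        self.task_queue = deque()                               # all incoming tasks
        self.dispatch_queues = [deque() for _ in range(num_processors)]
        self._next = 0                                          # round-robin pointer (assumed policy)

    def submit(self, task):
        self.task_queue.append(task)

    def schedule(self):
        """Move every waiting task to some processor's dispatch queue."""
        while self.task_queue:
            task = self.task_queue.popleft()
            self.dispatch_queues[self._next].append(task)
            self._next = (self._next + 1) % len(self.dispatch_queues)

s = SchedulerModel(num_processors=3)
for t in ["T1", "T2", "T3", "T4"]:
    s.submit(t)
s.schedule()
print([list(q) for q in s.dispatch_queues])  # [['T1', 'T4'], ['T2'], ['T3']]
```

In a real system the scheduler runs concurrently with the processors; here a single `schedule()` call stands in for one scheduling round.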

Therefore, it deals [12] with the allocation of each task to a suitable processor and the assignment of a proper order to the tasks. Task scheduling is classified on the basis of the following characteristics [13]:
Number of tasks and their dependencies.
Execution time and communication time of the tasks.
Number of processors and their uniformity.
Topology of the task graph.
It is classified into two categories: static task scheduling [14] and dynamic task scheduling.

Fig. 3. Classification of task scheduling.

A. Static Task Scheduling
Static task scheduling is the assignment of the various tasks of a task graph to multiple processors at compile time. It is also called deterministic or compile-time scheduling, because the computation time of the tasks, the communication between the tasks, and their precedence constraints are known in advance. There are two objectives of static task scheduling [15]: to minimize the completion time of the task graph and to minimize the inter-task communication delay.

B. Dynamic Task Scheduling
Dynamic task scheduling is the reassignment of the various tasks of a task graph to multiple processors at execution time. It is also called non-deterministic or dynamic load-balancing scheduling, because the computation time of the tasks, the communication between the tasks, and their precedence constraints are not known in advance. The major objective of dynamic task scheduling is to maximize the utilization of the processors in the system at all times. The following policies [15] are used for dynamic task scheduling:
Load estimation policy: It determines how to estimate the load of each processor.
Task transfer policy: It determines whether or not a task is to be transferred to another processor.
State information exchange policy: It determines how to exchange the system load information among the processors.
Location policy: It determines the processor to which a task should be transferred.

III. FORMULATION OF THE TASK SCHEDULING PROBLEM
A task scheduling problem consists of an application model, a system computing model, and performance evaluation metrics. This section discusses the application model and the system computing model, followed by the performance evaluation metrics.

GESJ: Computer Science and Telecommunications 2014 No.1(41)

A. Application model
The task scheduling of a given application is represented by a directed acyclic graph (DAG) G1 = (T, E), where T is the finite set of m tasks {T1, T2, T3, ..., Tm} and E is the set of edges {eij} between the tasks Ti and Tj. Here, each edge represents a precedence constraint between the tasks Ti and Tj, such that Tj cannot start until Ti completes its execution. Each task Ti is associated with an execution time ET(Ti), and each edge eij is associated with a communication time CT(Ti, Tj) for the data transfer from Ti to Tj. If there is a direct path from Ti to Tj, then Ti is a predecessor of Tj and Tj is a successor of Ti. An entry task does not have any predecessor; similarly, an exit task does not have any successor. The layout of a DAG with six tasks is shown in Figure 4 [16].

Fig. 4. DAG model with six tasks T1, T2, T3, T4, T5 and T6, annotated with the execution time ET(Ti) of each task and the communication time CT(Ti, Tj) of each edge [16]; T1 is the entry task and T6 is the exit task.

B. System Computing Model
A multiprocessor system is either heterogeneous or homogeneous. For the purposes of this methodology, homogeneous processors are considered. The system can be represented by a directed graph G2 = (Proc, Link), where Proc is a set of p processors {P1, P2, P3, ..., Pp} connected to each other through a network, and Link is the set of links between the processors Pi and Pj. If both tasks Ti and Tj are scheduled on the same processor, then the communication time of the direct path between Ti and Tj is negligible. Figure 5 [17] shows three fully connected homogeneous processors connected via a common link.
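The application model above can be encoded directly as data. The sketch below represents a small DAG with per-task execution times ET and per-edge communication times CT, and identifies the entry and exit tasks; the numeric values are illustrative assumptions, not the values of Figure 4:

```python
# Sketch of the DAG application model G1 = (T, E): tasks with execution
# times ET and edges with communication times CT. Values are illustrative.
ET = {"T1": 2, "T2": 3, "T3": 4, "T4": 4, "T5": 2, "T6": 3}
CT = {("T1", "T2"): 5, ("T1", "T3"): 4, ("T2", "T4"): 2,
      ("T2", "T5"): 4, ("T3", "T5"): 3, ("T3", "T6"): 5,
      ("T4", "T6"): 5, ("T5", "T6"): 2}

def predecessors(task):
    return [u for (u, v) in CT if v == task]

def successors(task):
    return [v for (u, v) in CT if u == task]

# An entry task has no predecessor; an exit task has no successor.
entry_tasks = [t for t in ET if not predecessors(t)]
exit_tasks = [t for t in ET if not successors(t)]
print(entry_tasks, exit_tasks)  # ['T1'] ['T6']
```

A scheduler consuming this model must not start a task before all of its predecessors have finished, which is exactly the precedence constraint the edges encode.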

Fig. 5. Fully connected homogeneous processors.

The objective of multiprocessor task scheduling is to minimize the overall execution time of the tasks while preserving the precedence constraints. If a task Ti is scheduled on a processor P, then the starting time and finishing time of the task are denoted by STT(Ti, P) and FTT(Ti, P), respectively. The scheduling length is defined as Slen = Max{FTT(Ti, P)} over all the processors.

IV. TASK SCHEDULING EXAMPLES
This section considers two examples of task scheduling: a task graph with communication time and a task graph without communication time, under the assumption that the number of processors is finite and the processors are homogeneous. The schedule of the tasks on the processors is shown with the help of a Gantt chart. A Gantt chart is a common graphical representation of a task schedule. It shows the start time and finish time of each task on the available processors, and also represents the idle time between the tasks.

A. Task Scheduling with Communication Time
Task scheduling with communication time means that communication time is taken into account when assigning tasks to the processors. The communication time is required because of the interdependency of the tasks: a dependent task cannot start execution until it receives all the information from its preceding tasks. If two tasks are scheduled on the same processor, their communication time is negligible. Consider an example DAG that consists of four tasks, together with two processors for scheduling, as shown in Figure 6 [18].

Fig. 6. DAG with four tasks A, B, C and D.

Fig. 7. Gantt chart of the schedule with communication time; the scheduling length is nine time units.

B. Task Scheduling without Communication Time
Task scheduling without communication time means that communication time is not considered when scheduling the tasks on the processors. This reduces the scheduling length. Consider the same example as shown in Figure 8, without communication time and with the same number of processors.

Fig. 8. DAG with four tasks.

[Gantt chart of the schedule without communication time.]

The scheduling length of this schedule is decreased by two time units compared with the previous example.

V. PERFORMANCE METRICS OF TASK SCHEDULING
The following metrics are used for the performance evaluation of task scheduling algorithms:

A. Scheduling Length
The scheduling length is the maximum time required to execute the last task on a processor, as shown in (1):

Slen = Max{FTT(Ti, P)}. (1)
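The effect illustrated by the two Gantt-chart examples can be reproduced with a short sketch that computes finish times for a fixed two-processor schedule, with and without communication time. The durations, edge weights, and processor assignment below are illustrative assumptions, not the exact values of Figure 6:

```python
# Sketch: scheduling length of a fixed two-processor schedule for a
# four-task DAG, with and without communication time. All values assumed.
durations = {"A": 2, "B": 3, "C": 3, "D": 2}
edges = {("A", "B"): 1, ("A", "C"): 2, ("B", "D"): 2, ("C", "D"): 1}
assignment = {"A": "P1", "B": "P1", "C": "P2", "D": "P2"}
order = ["A", "B", "C", "D"]  # a topological order of the DAG

def scheduling_length(with_comm):
    finish = {}
    free = {"P1": 0, "P2": 0}  # time at which each processor becomes free
    for t in order:
        ready = free[assignment[t]]
        for (u, v), ct in edges.items():
            if v == t:
                # Communication cost applies only across processors.
                delay = ct if (with_comm and assignment[u] != assignment[t]) else 0
                ready = max(ready, finish[u] + delay)
        finish[t] = ready + durations[t]
        free[assignment[t]] = finish[t]
    return max(finish.values())  # Slen = Max{FTT(Ti, P)}

print(scheduling_length(True), scheduling_length(False))  # 9 7
```

With these assumed values the schedule with communication time has length nine, and dropping the communication time shortens it by two units, mirroring the example in the text.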

B. Total Parallel Overhead
The total parallel overhead [19] is the total time spent by all processing elements (PEs) over and above the time required by the fastest known sequential algorithm for solving the same problem on a single PE. It is denoted To and can be expressed as shown in (2):

To = p × Tp - Ts, (2)

where Ts is the sequential execution time, Tp is the parallel execution time, and p is the total number of processors used to obtain Tp.

C. Cost
The cost [20] is defined as the product of the parallel execution time (Tp) and the number of processing elements (p) used, as shown in (3):

Cost = Tp × p. (3)

D. Speedup
The speedup [21] is defined as the ratio of the sequential execution time (Ts) to the parallel execution time (Tp). The sequential execution time is the total execution time of all the tasks in a uniprocessor environment, and the parallel execution time is the finish time of the last task on a bounded number of processors in a multiprocessor environment. It can be expressed as shown in (4):

S = Ts / Tp. (4)

E. Efficiency
The efficiency [21] of a parallel program is the ratio of the speedup to the number of processors used, as shown in (5):

E = S / p. (5)

F. Normalized Scheduling Length (NSL)
The normalized scheduling length (NSL) [22] of a scheduling algorithm is the scheduling length normalized by a lower bound on it (commonly the sum of the execution times of the tasks along a critical path), as shown in (6):

NSL = Slen / LB. (6)

G. Load Balancing
Load balancing [23] is the ratio of the scheduling length to the average execution time over all the processors, as shown in (7):

Load Balancing = Slen / Average, (7)

where Average is the ratio of the sum of the processing times on each processor to the number of processors used.
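The metrics above translate into a few straightforward functions. The symbol names follow the text, and the sample values at the end are purely illustrative:

```python
def total_overhead(t_s, t_p, p):
    """Total parallel overhead: To = p * Tp - Ts."""
    return p * t_p - t_s

def cost(t_p, p):
    """Cost = Tp * p."""
    return t_p * p

def speedup(t_s, t_p):
    """Speedup S = Ts / Tp."""
    return t_s / t_p

def efficiency(t_s, t_p, p):
    """Efficiency E = S / p."""
    return speedup(t_s, t_p) / p

def load_balancing(slen, busy_times):
    """Scheduling length over the average processing time per processor."""
    avg = sum(busy_times) / len(busy_times)
    return slen / avg

# Illustrative values: Ts = 10, Tp = 7 on p = 2 processors.
print(total_overhead(10, 7, 2))   # 4
print(cost(7, 2))                 # 14
print(speedup(10, 7))             # approx. 1.4286
print(efficiency(10, 7, 2))       # approx. 0.7143
print(load_balancing(9, [5, 7]))  # 1.5
```

A load-balancing value close to 1 indicates that the processors finish at nearly the same time, i.e., the load is evenly distributed.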

CONCLUSION
In this paper, we have studied task scheduling, its components, and its types. An application program is represented by a DAG, whose nodes are tasks and whose edges represent the data dependencies between pairs of tasks; each task is associated with an execution time and each edge with a communication time. The multiprocessor model may be either homogeneous or heterogeneous. We evaluated task scheduling both with and without communication time. The Gantt charts for the two cases showed that task scheduling without communication time gives a smaller scheduling length than task scheduling with communication time. Different task scheduling algorithms can also be evaluated on the basis of the performance metrics presented.

REFERENCES
1. Shiyuan Jin, Guy Schiavone and Damla Turgut, "A performance study of multiprocessor task scheduling algorithms", Journal of Supercomputing, Vol. 4, pp. 77-97, 2008.
2. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, 1979.
3. G. N. Srinivas and Bruce R. Musicus, "Generalized Multiprocessor Scheduling for Directed Acyclic Graphs", in Third Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 7-46, 1994.
4. G. L. Park, Behrooz Shirazi, Jeff Marquis and H. Choo, "Decisive Path Scheduling: A New List Scheduling Method", Communications of the ACM, Vol. 17, No. 1, pp. 47-480, Dec. 1974.
5. Min-You Wu, "On Parallelization of Static Scheduling Algorithms", IEEE Transactions on Parallel and Distributed Systems, Vol. 10, No. 4, pp. 517-58, April 1999.
6. C. H. Papadimitriou and M. Yannakakis, "Scheduling Interval-Ordered Tasks", SIAM Journal of Computing, Vol. 8, pp. 405-409, 1979.
7. R. Sethi, "Scheduling Graphs on Two Processors", SIAM Journal of Computing, Vol. 5, No. 1, pp. 7-8, Mar. 1976.
8. Oliver Sinnen, Task Scheduling for Parallel Systems, Wiley-Interscience Publication, 2007.
9.
Edwin S. H. and Nirwan Ansari, "A Genetic Algorithm for Multiprocessor Scheduling", IEEE Transactions on Parallel and Distributed Systems, Vol. 5, No. 2, Feb. 1994.
10. Hadis Heidari and Abdolah Chaechale, "Scheduling in Multiprocessor System Using Genetic Algorithm", International Journal of Advanced Science and Technology, Vol. 4, pp. 81-9, 01.
11. Ravneet Kaur and Ramneek Kaur, "Multiprocessor Scheduling using Task Duplication Based Scheduling Algorithms: A Review Paper", International Journal of Application or Innovation in Engineering and Management, Vol., Issue 4, pp. 11-17, April 01.
12. Xiaoyong Tang, Kenli Li and Guiping Liao, "List Scheduling with Duplication for Heterogeneous Computing Systems", Journal of Parallel and Distributed Computing, Vol. 70, pp. -9, 2010.
13. Mostafa R. Mohamed and Medhat H. A., "Hybrid Algorithms for Multiprocessor Task Scheduling", International Journal of Computer Science Issues, Vol. 8, Issue, No., May 2011.
14. Y. C. Chung and S. Ranka, "Application and Performance Analysis of a Compile-Time Optimization Approach for List Scheduling Algorithms on Distributed Memory Multiprocessors", Proceedings of Supercomputing '92, pp. 512-521, 1992.
15. V. Rajaraman and C. S. R. Murthy, Parallel Computers: Architecture and Programming, PHI Publication, May 01.

16. Ranjit Rajak, "Comparison of Bounded Number of Processors (BNP) Class of Scheduling Algorithms Based on Metrics", GESJ: Computer Science and Telecommunications, Vol. 4, No., 01.
17. Ranjit Rajak and C. P. Katti, "Task Scheduling in Multiprocessor System Using Fork-Join Method (TSFJ)", International Journal of New Computer Architectures and Their Applications, Vol., No., pp. 47-5, 01.
18. Ahmed Zaki Semar Shaul and Oliver Sinnen, "Optimal Scheduling of Task Graphs on Parallel Systems", 9th International Conference on Parallel and Distributed Computing, Applications and Technologies, pp. -8, 2008.
19. Vipin Kumar and Anshul Gupta, "Analysis of Scalability of Parallel Algorithms and Architectures: A Survey", ICS '91: Proceedings of the 5th International Conference on Supercomputing, pp. 396-405, 1991.
20. Ananth Grama, Vipin Kumar and Anshul Gupta, Introduction to Parallel Computing, Pearson Edition, 2009.
21. M. J. Quinn, Parallel Programming in C with MPI and OpenMP, Tata McGraw-Hill, 2003.
22. K. Sik, S. Myong, J. Cha and M. S. Jang, "Task Scheduling Algorithm using Minimized Duplication in Homogeneous Systems", Journal of Parallel and Distributed Computing, Vol. 68, pp. 1146-1156, 2008.
23. F. A. Omara and M. Arafa, "Genetic Algorithm for Task Scheduling Problem", Journal of Parallel and Distributed Computing, Vol. 70, pp. 13-22, 2010.

Article received: 2014-01-26