LOAD BALANCING FOR MULTIPLE PARALLEL JOBS


European Congress on Computational Methods in Applied Sciences and Engineering, ECCOMAS 2000, Barcelona, 11-14 September 2000

A. Ecer, Y. P. Chien, H. U. Akay, and J. D. Chen
Purdue School of Engineering and Technology
Indiana University-Purdue University Indianapolis
723 W. Michigan Street, Indianapolis, Indiana 46202, U.S.A.

Key words: Parallel CFD, multiple parallel jobs, load balancing.

Abstract. Parallel processing on networked heterogeneous computers has been widely adopted, and load balancing is essential for taking advantage of it. Previously reported load balancing techniques assume that only one parallel job runs on a given set of computers. When many parallel jobs must be executed on the same set of computers, a high-level central manager is usually involved to enforce the queuing and balancing policy. Such a policy typically allows only one parallel job to run on the reserved computers in order to ensure load balance. These arrangements discourage the sharing of hardware resources among computer owners and can be used only on computers dedicated to parallel processing. In this paper, we introduce a new load balancing method that needs no high-level central manager. The method assumes that each parallel job has no knowledge of the other parallel jobs and that jobs may start execution at different times. It also ensures that a newly joined parallel job gets its share of computation power immediately without affecting the load balance of the other parallel jobs.

1. INTRODUCTION

With the rapid advancement of computer technology, parallel processing has been implemented on parallel supercomputers, networked workstations, and combinations of the two. Since a large number of computers sit idle in the evening in many organizations, it is desirable to use these resources for parallel computing. One unsolved problem in parallel processing is how to distribute multiple mutually dependent parallel jobs fairly among computing resources while achieving optimal computation speed for each parallel job. When mutually dependent parallel processes are not balanced on the given computers, processes that finish their computation early must periodically wait for the other processes in order to collect the information needed to proceed; the given computing resources are then not fully utilized and the best computation speed is not achieved. To increase the efficiency and speed of parallel computation, the computation load should be distributed so that the elapsed execution time of the slowest process is minimized. To the best of our knowledge, all available load-balancing methods assume that there is only one parallel job to be balanced. There are several approaches to load balancing for a single parallel job: (1) balancing the distribution of program instructions at the compilation stage, (2) balancing the sub-domain assignment at the domain decomposition stage, and (3) balancing the process distribution at the program execution stage. Most load-balancing methods use the second approach [1-5]. These methods rest on the following assumptions: (1) the computers (or processors of a parallel computer) are homogeneous, (2) the computers operate in single-user mode, and (3) the computation domain can be arbitrarily divided. The first assumption can be satisfied on parallel computers and on a small network of workstations.
However, it is difficult to satisfy in a large computer network consisting of computers of different brands and models. The second assumption requires the permission of the system administrator and usually inconveniences other users. The third is also difficult to satisfy because of structural and computational constraints in many applications. To avoid these restrictions, we adopted an approach that balances the computation load at both the domain decomposition stage and the program execution stage. In the domain decomposition stage, the task domain is divided into N sub-domains (blocks). The sizes of the blocks need not be the same, but they should be similar (as explained later). Each block may have a different number of neighboring blocks and must periodically exchange boundary conditions with its neighbors. It is assumed that M networked computers (or processors of a parallel computer) are used, where N > M, and that these computers may have different computation speeds. The task is to find the distribution of data blocks over computers that minimizes the overall execution time of the application. In this paper, we describe a new method for dynamic load balancing of multiple parallel jobs. The main objective of the method is to distribute parallel jobs on available computer

resources fairly and optimally. The basic idea is to ensure that load balancing of one parallel job does not affect the load balancing of the other parallel jobs. The method is developed under the following assumptions: the available networked computers are heterogeneous and may run under UNIX or Windows NT operating systems; parallel jobs can be added to or removed from the computers by their owners at any time; load balancing of a parallel job is the responsibility of the owner of the application program; and the host computer informs each parallel job of the availability of computers and of the proper time to perform load balancing. The paper is organized as follows. Section 2 summarizes the load balancing method for a single parallel job [6,7] that we developed in the past; it is used as part of the new method. Section 3 describes the concept behind the load balancing method for multiple parallel jobs and its procedures. Section 4 demonstrates the effectiveness of the new method, and Section 5 concludes the paper.

2. BACKGROUND

This section summarizes our earlier work that is used in the load balancing of multiple parallel jobs. While most load-balancing methods achieve balance by adjusting the block sizes (assuming each computer executes one parallel process), we use a different approach that makes load balancing less problem-dependent. We assume that: (1) the problem domain can be pre-divided into many sub-domains (more than the number of computers), (2) each sub-domain is handled by one process, and (3) each computer can execute many processes. Load balancing is achieved by moving processes among computers. While each process is responsible for solving one block of data, it must communicate with the others during the execution of the parallel job.
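As a rough illustration of this process-migration idea (a sketch, not the authors' DLB implementation; the per-block cost estimates and computer speeds are hypothetical inputs), the assignment of N blocks to M computers can be approximated with a greedy heuristic that always places the next block where the estimated elapsed time grows the least:

```python
def assign_blocks(block_costs, speeds):
    """Greedy sketch: place N blocks on M computers (N > M) so that the
    estimated elapsed time of the slowest computer stays small.
    Elapsed time on a computer is modeled as accumulated work / speed."""
    # Consider the most expensive blocks first (longest-processing-time rule).
    order = sorted(range(len(block_costs)), key=lambda b: -block_costs[b])
    work = [0.0] * len(speeds)  # accumulated computation cost per computer
    assignment = {}
    for b in order:
        # Choose the computer whose elapsed time grows the least.
        m = min(range(len(speeds)),
                key=lambda m: (work[m] + block_costs[b]) / speeds[m])
        work[m] += block_costs[b]
        assignment[b] = m
    makespan = max(w / s for w, s in zip(work, speeds))
    return assignment, makespan

# Example: 6 blocks on 3 computers, one of which is twice as fast.
assignment, makespan = assign_blocks([4, 3, 2, 2, 1, 1], [2.0, 1.0, 1.0])
```

Rebalancing then amounts to recomputing the assignment with updated speed estimates and migrating only those processes whose computer changed.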
To separate the time used for process computation from that used for inter-process communication, the parallel programs are divided into two parts: block solvers and interface solvers [8]. The block solver computes the solution for a block; the interface solver exchanges information across block boundaries. The execution time of each process is affected by several time-varying factors, e.g., the load of the computers, the load of the network, the solution scheme used for each block, and the sizes of the blocks. Without load balancing, some processes may therefore finish execution much earlier than others and wait for information from them; such waiting significantly increases the elapsed execution time of the parallel job. We have developed a dynamic load balancing (DLB) method for a single parallel job on a set of heterogeneous computers. The method distributes the processes among the computers so that the communication time and the waiting time are minimized; the basic idea is to minimize the elapsed execution time of the slowest parallel process. The basic assumptions used in DLB are as follows. There are a large number of computers available in different locations, managed by different owners. At the initiation of the run, the user defines a set of available computers and can access all or any subset of them. Each of the multi-user computers operates under a time-sliced operating system (e.g., Unix or Windows NT). The parallel

application software runs under MPI [9] or PVM [10]. When the load is balanced, the effective computation speeds of all computers to each process are the same, provided the communication times between parallel processes are the same. In fact, since the block size cannot be infinitely small, "the same" actually means that the difference between the effective speeds of the computers cannot be further reduced. When many parallel jobs need to be executed on a large network of computers with many owners, reserving dedicated time for a parallel job becomes difficult. In a multi-user environment, multiple parallel jobs may be executed concurrently on the same set of computers. When the mutually dependent load is not balanced, computers that finish their computation early must periodically wait for the other computers in order to collect the information needed to proceed, thus losing part of their share of computing resources and not achieving the best computation speed. We have tried assigning a load balancer to each parallel job and letting each balancer balance its own job independently; since there are conflicts of interest among the load balancers, their results interfere with each other.

3. DYNAMIC LOAD BALANCING FOR MULTIPLE PARALLEL PROCESSES

To avoid this process thrashing in the load balancing of multiple parallel jobs, we aimed to develop an approach in which the load balancing of one parallel job does not affect the load balancing of the other running parallel jobs. In this section, we prove that a round robin load balancing approach satisfies this requirement; the approach is valid, however, only for parallel applications with small communication costs.

3.1 Nomenclature

To simplify the description, the following nomenclature is used:

J — the set of all parallel jobs running on the system.
J_n — computation job with job number n.
J_n = {p_i}, where p denotes a process and i = 1, 2, ..., k_n is the process number within the job. J_n can be either a parallel job or a sequential job. It is assumed that different users own different J_n, and different J_n may be different parallel codes; a user has no knowledge of the parallel jobs owned by other users.
S_m — computation speed of computer m.
E_m — effective computation speed of computer m as seen by one computation process running on m.
T_n — the elapsed computation time for J_n.
DLB — a dynamic load balancer that balances one parallel job on a given set of computers.
H — the set of computer hosts available for parallel computing.
HU — the set of hosts used to run at least one parallel job; HU ⊆ H.
HN — the set of computers not used by any parallel job; HN = H − HU.
HU_n — the set of computers actually used for executing J_n; HU = ∪_n HU_n, J_n ∈ J.

HR_n — the set of computers requested by J_n; HR_n ⊆ H.

The dynamic load-balancing problem is how to map J onto H such that a newly added parallel job is load balanced without affecting the load balance of the existing parallel jobs.

3.2 Round robin load balancing of multiple parallel jobs

The round robin load balancing algorithm for multiple parallel jobs is based on the following definitions and theorems. It is assumed that all computers use time slicing in their operating systems.

Definition 1. A parallel job J_n is load balanced on H if T_n is minimized.

Definition 2. An extraneous process to a parallel job J_n is a computer process that runs on the same computers as J_n but cannot be started or stopped by J_n.

Definition 3. The effective computation speeds of two computers to each process are defined to be the same if moving any process from one computer to the other does not reduce the difference between the effective computation speeds of the two computers.

Round robin load balancing of multiple parallel jobs is based on the following arguments:

If a computer that runs a parallel job carries an extraneous load, a parallel job with equal-sized parallel processes can best utilize its given CPU share.

If a parallel job is load balanced on a given set of computers and its parallel processes are of the same size, the effective speeds of all computers to every process of the job are the same.

If the loads of all load-balanced computers change by the same percentage, the relative speeds of the hosts to each parallel process remain the same.

If a set of computers S is executing load-balanced parallel jobs and a new parallel job J_n is added to S by J_n's load balancer, the newly added J_n should not affect the load balance of the other existing parallel jobs.

If HU_m ∩ HU_n ≠ ∅ and parallel jobs J_m and J_n are load balanced, deleting J_m may affect the load balance of J_n.
Assume that the load distribution for all jobs is initially balanced and that it becomes unbalanced due to the completion of some jobs. Sequentially adding another balanced job, or rebalancing an existing parallel job, will make all current parallel jobs more balanced.

3.3 A DLB procedure that satisfies the conditions

Assuming that parallel jobs are added to the computer system sequentially, one by one, the following algorithm ensures that the load becomes more balanced each time a new parallel job is added to or rebalanced on the computers. In the following algorithms, it is assumed that

J_n is added to the computers while a load-balanced J_m is being executed.

Algorithm 1:
Step 1: [Determine HU_n] Once a load-balanced J_m is being executed, J_n can only request the computers in HU_n such that HU_n ⊇ HU_m or HU_n ∩ HU_m = ∅, for all m.
Step 2: [Load balancing for J_n] DLB determines an optimal load distribution for J_n in HR_n.

The algorithm is based on Theorem 2, which is valid only if the communication cost is negligible. If the communication cost is significant, the algorithm can be modified as follows.

(A) The algorithm can still be used if the communication costs between different pairs of processes are the same. This condition holds when a massively connected parallel machine or a local area network is used for parallel computing: the effect of communication on the effective computation speed then applies equally to every parallel process, so Theorem 2 can still be considered valid. However, the optimal load balance in such a situation may favor a smaller set of hosts in order to reduce the communication overhead, so the load-balancing algorithm is modified as follows:

Algorithm 2:
Step 1: [Determine HR_n] Once a load-balanced J_m is being executed, J_n can only request the computers in HR_n such that HR_n ⊇ HU_m or HR_n ∩ HU_m = ∅, for all m.
Step 2: [Load balancing for J_n] DLB determines an optimal load distribution for J_n in HR_n.
Step 3: [Find HU_n] Since the optimal distribution may not use all the computers requested in HR_n, HU_n ⊆ HR_n.

(B) If the communication costs between different pairs of computers are very different, as when wide area networks are used for parallel computing, Theorem 2 may not hold. In this situation, Algorithm 1 is modified into Algorithm 3, which prevents mutual interference by not allowing two parallel jobs to share any computer.
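The host-set condition in Step 1 of these algorithms reduces to a simple set test; as a sketch (the function name and the set-of-strings host representation are illustrative, not from the paper), it can be written as:

```python
def request_allowed(hr_n, used_sets, disjoint_only=False):
    """Check a new job's requested host set HR_n against the host sets
    HU_m of the running jobs.

    Algorithms 1 and 2: for every running job m, HR_n must either
    contain all of HU_m or share no host with it.
    Algorithm 3 (disjoint_only=True): HR_n must be disjoint from
    every HU_m.
    """
    hr_n = set(hr_n)
    for hu_m in map(set, used_sets):
        overlap = hr_n & hu_m
        if disjoint_only and overlap:
            return False
        if not disjoint_only and overlap and not hu_m <= hr_n:
            return False  # partial overlap would disturb J_m's balance
    return True

# J_m runs on {C1, C2}: a superset request is allowed,
# a partial overlap is not, and Algorithm 3 forbids any overlap.
ok = request_allowed({"C1", "C2", "C3"}, [{"C1", "C2"}])
bad = request_allowed({"C2", "C3"}, [{"C1", "C2"}])
strict = request_allowed({"C2", "C3"}, [{"C1", "C2"}], disjoint_only=True)
```

A load balancer for a newly submitted job would call such a test before running its own single-job DLB on the admitted hosts.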
Algorithm 3:
Step 1: [Determine HR_n] Once a load-balanced J_m is being executed, J_n can only request the computers in HR_n such that HR_n ∩ HU_m = ∅, for all m.

Step 2: [Load balancing for J_n] DLB determines an optimal load distribution for J_n in HR_n.
Step 3: [Find HU_n] Since the optimal distribution may not use all the computers requested in HR_n, HU_n ⊆ HR_n.

4. EXPERIMENTAL RESULTS

The applicability of Algorithm 1 is demonstrated in the following load balancing experiment, which is designed to show that later-balanced parallel jobs do not affect the balance of previously balanced parallel jobs when the jobs are load balanced one by one in a round robin fashion. In this experiment, three parallel jobs, J1, J2, and J3, are executed on five computers, C1 to C5. The job size and number of blocks of each parallel job are given in Table 1. The load distribution of each parallel job is represented by a five-bar bar chart. Figure 1 depicts the load distributions of the three parallel jobs during the experiment; it has three parts, 1(a), 1(b), and 1(c). Bar chart A in Figure 1(a) shows the initial unbalanced load distribution of job J1; the gray blocks in a bar chart are the blocks of J1, and the black blocks are the extraneous load with respect to J1. The measured elapsed execution time of J1 for this distribution is 1.76 seconds per time step. After load balancing by DLB, the new load distribution is shown by bar chart B, and the measured elapsed execution time of J1 is reduced to 1.01 seconds per time step.

Table 1. Information of parallel jobs (job size in grid points and number of blocks for parallel jobs A, B, and C; the numerical entries did not survive extraction).

When the load-balanced J1 is being executed, J2 is added to the computers. Column C in Figure 1(b) shows the load distribution of J1 (as in Figure 1(a)) and the initial unbalanced load distribution of job J2. The gray blocks in the bar charts of job x are the blocks of x, the black blocks are the extraneous load with respect to job x, and the white blocks are the processes of job x moved from other computers to this computer.
The measured elapsed execution times of J1 and J2 are labeled at the top of each bar chart. After load balancing of J2 by DLB, the new load distributions of J1 and J2 are shown in column D of Figure 1(b); the measured elapsed execution times of both J1 and J2 are reduced. When the balanced J1 and J2 are being executed, J3 is added to the computers. Column E in Figure 1(c) shows the load distributions of J1 and J2 (as in Figure 1(b)) and the initial unbalanced load distribution of J3. After load balancing of J3 by DLB, the new load distributions of J1, J2, and J3 are shown in column F of Figure 1(c); the measured elapsed execution times of all three jobs are reduced. It can be seen that when an unbalanced job is introduced, the other balanced jobs become unbalanced

(see columns C and E). When the unbalanced job is balanced, the other jobs become more balanced too (see columns D and F). In this experiment, the elapsed execution time of every job decreases after the load balancing of each job. This should not be considered general: after one parallel job is load balanced, the elapsed execution times of the other parallel jobs may in fact increase. The explanation is as follows. If parallel job A is not load balanced, it does not fully utilize its share of the computational resources (in a time-slicing system); the unused resources are consumed by the other, load-balanced parallel jobs on the same computers, which therefore run faster than they otherwise would. Once parallel job A is load balanced, it uses its entire share of the computational resources; the extra resources available to the other jobs vanish, and their elapsed execution times may increase. The reader may also notice that the extraneous load measured on a computer is smaller than the number of parallel processes assigned to that computer. Because of process synchronization, each parallel process must periodically wait for information from neighboring processes; since only the processes on the run queue are counted statistically as extraneous load, only a fraction of the parallel processes on a computer are counted. The more unbalanced the distribution of the parallel jobs, the larger the difference between the number of processes assigned to a computer and the number measured on it. Even when the parallel jobs are balanced, only about 80% of the processes are measured.

5. CONCLUSION

This paper described a practical method for dynamic load balancing of multiple parallel jobs on networked computers. The applicability of the method was both theoretically proved and experimentally demonstrated.
A software tool that supports the round robin load balancing method for multiple parallel jobs is currently being developed.

ACKNOWLEDGEMENTS

Financial support provided for this research by the NASA Glenn Research Center is gratefully acknowledged. The authors are also grateful for the computer support provided by the IBM Research Center throughout this study.

REFERENCES

[1] Williams, D. (1990), Performance of Dynamic Load Balancing Algorithms for Unstructured Grid Calculations, CalTech Report C3P913.
[2] Simon, H. (1991), Partitioning of Unstructured Problems for Parallel Processing, NASA Ames Tech Report RNR.
[3] Lohner, R., Ramamurti, R., and Martin, D. (1993), A Parallelizable Load Balancing

Algorithm, Proceedings of the 31st Aerospace Sciences Meeting & Exhibit, January 11-14, Reno, Nevada.
[4] Tezduyar, T. E., Aliabadi, S., Behr, M., Johnson, A., and Mittal, S. (1993), Parallel Finite Element Computation of 3D Flows, IEEE Computer.
[5] Maini, H., Mehrotra, K., Mohan, C., and Ranka, S. (1994), Genetic Algorithms for Graph Partitioning and Incremental Graph Partitioning, Proceedings of Supercomputing '94, Washington, D.C.
[6] Chien, Y. P., Ecer, A., Akay, H. U., and Carpenter, F. (1994), Dynamic Load Balancing on Network of Workstations for Solving Computational Fluid Dynamics Problems, Computer Methods in Applied Mechanics and Engineering, 119.
[7] Chien, Y. P., Carpenter, F., Ecer, A., and Akay, H. U. (1995), Computer Load Balancing for Parallel Computation of Fluid Dynamics Problems, Computer Methods in Applied Mechanics and Engineering, 125.
[8] Akay, H. U., Blech, R., Ecer, A., Ercoskun, D., Kemle, B., Quealy, A., and Williams, A. (1993), A Database Management System for Parallel Processing of CFD Algorithms, Parallel Computational Fluid Dynamics '92, edited by J. Hauser et al., Elsevier Science Publishers, The Netherlands.
[9] Snir, M., Otto, S., Huss-Lederman, S., Walker, D., and Dongarra, J. (1998), MPI: The Complete Reference, The MIT Press.
[10] Geist, A., Beguelin, A., Dongarra, J., Jiang, W., Manchek, R., and Sunderam, V. (1993), PVM 3.0 User's Guide and Reference Manual, Oak Ridge National Laboratory Technical Report ORNL/TM.

Figure 1(a). Load distributions of parallel job 1 before (bar chart A) and after (bar chart B) load balancing; the measured elapsed time of J1 drops from 1.76 s to 1.01 s per time step.

Figure 1(b). Load distributions of parallel jobs 1 and 2 before (column C) and after (column D) load balancing of job 2; the measured elapsed time of J1 drops from 2.05 s to 1.92 s per time step, and that of J2 drops to 2.26 s per time step.

Figure 1(c). Load distributions of parallel jobs 1, 2, and 3 before (column E) and after (column F) load balancing of job 3; the measured elapsed time of J1 drops from 2.70 s to 2.52 s per time step, that of J2 drops to 2.98 s, and that of J3 drops to 2.72 s.


More information

Distributed Real-Time Computing with Harness

Distributed Real-Time Computing with Harness Distributed Real-Time Computing with Harness Emanuele Di Saverio 1, Marco Cesati 1, Christian Di Biagio 2, Guido Pennella 2, and Christian Engelmann 3 1 Department of Computer Science, Systems, and Industrial

More information

Operating Systems OBJECTIVES 7.1 DEFINITION. Chapter 7. Note:

Operating Systems OBJECTIVES 7.1 DEFINITION. Chapter 7. Note: Chapter 7 OBJECTIVES Operating Systems Define the purpose and functions of an operating system. Understand the components of an operating system. Understand the concept of virtual memory. Understand the

More information

BRAESS-LIKE PARADOXES FOR NON-COOPERATIVE DYNAMIC LOAD BALANCING IN DISTRIBUTED COMPUTER SYSTEMS

BRAESS-LIKE PARADOXES FOR NON-COOPERATIVE DYNAMIC LOAD BALANCING IN DISTRIBUTED COMPUTER SYSTEMS GESJ: Computer Science and Telecommunications 21 No.3(26) BRAESS-LIKE PARADOXES FOR NON-COOPERATIVE DYNAMIC LOAD BALANCING IN DISTRIBUTED COMPUTER SYSTEMS Said Fathy El-Zoghdy Department of Computer Science,

More information

PERFORMANCE EVALUATION OF THREE DYNAMIC LOAD BALANCING ALGORITHMS ON SPMD MODEL

PERFORMANCE EVALUATION OF THREE DYNAMIC LOAD BALANCING ALGORITHMS ON SPMD MODEL PERFORMANCE EVALUATION OF THREE DYNAMIC LOAD BALANCING ALGORITHMS ON SPMD MODEL Najib A. Kofahi Associate Professor Department of Computer Sciences Faculty of Information Technology and Computer Sciences

More information

Integrating PVaniM into WAMM for Monitoring Meta-Applications

Integrating PVaniM into WAMM for Monitoring Meta-Applications Integrating PVaniM into WAMM for Monitoring Meta-Applications R. Baraglia, M. Cosso, D. Laforenza, M. Nicosia CNUCE - Institute of the Italian National Research Council Via S. Maria, 36 - I56100 Pisa (Italy)

More information

Various Schemes of Load Balancing in Distributed Systems- A Review

Various Schemes of Load Balancing in Distributed Systems- A Review 741 Various Schemes of Load Balancing in Distributed Systems- A Review Monika Kushwaha Pranveer Singh Institute of Technology Kanpur, U.P. (208020) U.P.T.U., Lucknow Saurabh Gupta Pranveer Singh Institute

More information

Load Balancing In Concurrent Parallel Applications

Load Balancing In Concurrent Parallel Applications Load Balancing In Concurrent Parallel Applications Jeff Figler Rochester Institute of Technology Computer Engineering Department Rochester, New York 14623 May 1999 Abstract A parallel concurrent application

More information

reduction critical_section

reduction critical_section A comparison of OpenMP and MPI for the parallel CFD test case Michael Resch, Bjíorn Sander and Isabel Loebich High Performance Computing Center Stuttgart èhlrsè Allmandring 3, D-755 Stuttgart Germany resch@hlrs.de

More information

A Practical Approach of Storage Strategy for Grid Computing Environment

A Practical Approach of Storage Strategy for Grid Computing Environment A Practical Approach of Storage Strategy for Grid Computing Environment Kalim Qureshi Abstract -- An efficient and reliable fault tolerance protocol plays an important role in making the system more stable.

More information

Load Balancing Support for Grid-enabled Applications

Load Balancing Support for Grid-enabled Applications John von Neumann Institute for Computing Load Balancing Support for Grid-enabled Applications S. Rips published in Parallel Computing: Current & Future Issues of High-End Computing, Proceedings of the

More information

Grid Scheduling Dictionary of Terms and Keywords

Grid Scheduling Dictionary of Terms and Keywords Grid Scheduling Dictionary Working Group M. Roehrig, Sandia National Laboratories W. Ziegler, Fraunhofer-Institute for Algorithms and Scientific Computing Document: Category: Informational June 2002 Status

More information

Multilevel Load Balancing in NUMA Computers

Multilevel Load Balancing in NUMA Computers FACULDADE DE INFORMÁTICA PUCRS - Brazil http://www.pucrs.br/inf/pos/ Multilevel Load Balancing in NUMA Computers M. Corrêa, R. Chanin, A. Sales, R. Scheer, A. Zorzo Technical Report Series Number 049 July,

More information

CPU Scheduling. CPU Scheduling

CPU Scheduling. CPU Scheduling CPU Scheduling Electrical and Computer Engineering Stephen Kim (dskim@iupui.edu) ECE/IUPUI RTOS & APPS 1 CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling

More information

CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma

CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma Please Note: The references at the end are given for extra reading if you are interested in exploring these ideas further. You are

More information

Comparison on Different Load Balancing Algorithms of Peer to Peer Networks

Comparison on Different Load Balancing Algorithms of Peer to Peer Networks Comparison on Different Load Balancing Algorithms of Peer to Peer Networks K.N.Sirisha *, S.Bhagya Rekha M.Tech,Software Engineering Noble college of Engineering & Technology for Women Web Technologies

More information

A Review on an Algorithm for Dynamic Load Balancing in Distributed Network with Multiple Supporting Nodes with Interrupt Service

A Review on an Algorithm for Dynamic Load Balancing in Distributed Network with Multiple Supporting Nodes with Interrupt Service A Review on an Algorithm for Dynamic Load Balancing in Distributed Network with Multiple Supporting Nodes with Interrupt Service Payal Malekar 1, Prof. Jagruti S. Wankhede 2 Student, Information Technology,

More information

Performance of Scientific Processing in Networks of Workstations: Matrix Multiplication Example

Performance of Scientific Processing in Networks of Workstations: Matrix Multiplication Example Performance of Scientific Processing in Networks of Workstations: Matrix Multiplication Example Fernando G. Tinetti Centro de Técnicas Analógico-Digitales (CeTAD) 1 Laboratorio de Investigación y Desarrollo

More information

Advances in Smart Systems Research : ISSN 2050-8662 : http://nimbusvault.net/publications/koala/assr/ Vol. 3. No. 3 : pp.

Advances in Smart Systems Research : ISSN 2050-8662 : http://nimbusvault.net/publications/koala/assr/ Vol. 3. No. 3 : pp. Advances in Smart Systems Research : ISSN 2050-8662 : http://nimbusvault.net/publications/koala/assr/ Vol. 3. No. 3 : pp.49-54 : isrp13-005 Optimized Communications on Cloud Computer Processor by Using

More information

Source Code Transformations Strategies to Load-balance Grid Applications

Source Code Transformations Strategies to Load-balance Grid Applications Source Code Transformations Strategies to Load-balance Grid Applications Romaric David, Stéphane Genaud, Arnaud Giersch, Benjamin Schwarz, and Éric Violard LSIIT-ICPS, Université Louis Pasteur, Bd S. Brant,

More information

APPLICATION OF PARALLEL VIRTUAL MACHINE FRAMEWORK TO THE STRONG PRIME PROBLEM

APPLICATION OF PARALLEL VIRTUAL MACHINE FRAMEWORK TO THE STRONG PRIME PROBLEM Intern. J. Computer Math., 2002, Vol. 79(7), pp. 797 806 APPLICATION OF PARALLEL VIRTUAL MACHINE FRAMEWORK TO THE STRONG PRIME PROBLEM DER-CHUYAN LOU,* CHIA-LONG WU and RONG-YI OU Department of Electrical

More information

How To Understand The History Of An Operating System

How To Understand The History Of An Operating System 7 Operating Systems 7.1 Source: Foundations of Computer Science Cengage Learning Objectives After studying this chapter, the student should be able to: 7.2 Understand the role of the operating system.

More information

Implementing Parameterized Dynamic Load Balancing Algorithm Using CPU and Memory

Implementing Parameterized Dynamic Load Balancing Algorithm Using CPU and Memory Implementing Parameterized Dynamic Balancing Algorithm Using CPU and Memory Pradip Wawge 1, Pritish Tijare 2 Master of Engineering, Information Technology, Sipna college of Engineering, Amravati, Maharashtra,

More information

?kt. An Unconventional Method for Load Balancing. w = C ( t m a z - ti) = p(tmaz - 0i=l. 1 Introduction. R. Alan McCoy,*

?kt. An Unconventional Method for Load Balancing. w = C ( t m a z - ti) = p(tmaz - 0i=l. 1 Introduction. R. Alan McCoy,* ENL-62052 An Unconventional Method for Load Balancing Yuefan Deng,* R. Alan McCoy,* Robert B. Marr,t Ronald F. Peierlst Abstract A new method of load balancing is introduced based on the idea of dynamically

More information

MOSIX: High performance Linux farm

MOSIX: High performance Linux farm MOSIX: High performance Linux farm Paolo Mastroserio [mastroserio@na.infn.it] Francesco Maria Taurino [taurino@na.infn.it] Gennaro Tortone [tortone@na.infn.it] Napoli Index overview on Linux farm farm

More information

Preserving Message Integrity in Dynamic Process Migration

Preserving Message Integrity in Dynamic Process Migration Preserving Message Integrity in Dynamic Process Migration E. Heymann, F. Tinetti, E. Luque Universidad Autónoma de Barcelona Departamento de Informática 8193 - Bellaterra, Barcelona, Spain e-mail: e.heymann@cc.uab.es

More information

High Performance Computing

High Performance Computing High Performance Computing Trey Breckenridge Computing Systems Manager Engineering Research Center Mississippi State University What is High Performance Computing? HPC is ill defined and context dependent.

More information

Optimal Load Balancing in a Beowulf Cluster. Daniel Alan Adams. A Thesis. Submitted to the Faculty WORCESTER POLYTECHNIC INSTITUTE

Optimal Load Balancing in a Beowulf Cluster. Daniel Alan Adams. A Thesis. Submitted to the Faculty WORCESTER POLYTECHNIC INSTITUTE Optimal Load Balancing in a Beowulf Cluster by Daniel Alan Adams A Thesis Submitted to the Faculty of WORCESTER POLYTECHNIC INSTITUTE in partial fulfillment of the requirements for the Degree of Master

More information

Dynamic Load Balancing in a Network of Workstations

Dynamic Load Balancing in a Network of Workstations Dynamic Load Balancing in a Network of Workstations 95.515F Research Report By: Shahzad Malik (219762) November 29, 2000 Table of Contents 1 Introduction 3 2 Load Balancing 4 2.1 Static Load Balancing

More information

Lecture Outline Overview of real-time scheduling algorithms Outline relative strengths, weaknesses

Lecture Outline Overview of real-time scheduling algorithms Outline relative strengths, weaknesses Overview of Real-Time Scheduling Embedded Real-Time Software Lecture 3 Lecture Outline Overview of real-time scheduling algorithms Clock-driven Weighted round-robin Priority-driven Dynamic vs. static Deadline

More information

IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES

IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES www.ijecs.in International Journal Of Engineering And Computer Science ISSN:2319-7242 Volume 2 Issue 6 June, 2013 Page No. 1914-1919 IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES Ms.

More information

Advanced Task Scheduling for Cloud Service Provider Using Genetic Algorithm

Advanced Task Scheduling for Cloud Service Provider Using Genetic Algorithm IOSR Journal of Engineering (IOSRJEN) ISSN: 2250-3021 Volume 2, Issue 7(July 2012), PP 141-147 Advanced Task Scheduling for Cloud Service Provider Using Genetic Algorithm 1 Sourav Banerjee, 2 Mainak Adhikari,

More information

CHAPTER 1: OPERATING SYSTEM FUNDAMENTALS

CHAPTER 1: OPERATING SYSTEM FUNDAMENTALS CHAPTER 1: OPERATING SYSTEM FUNDAMENTALS What is an operating? A collection of software modules to assist programmers in enhancing efficiency, flexibility, and robustness An Extended Machine from the users

More information

Efficient Parallel Execution of Sequence Similarity Analysis Via Dynamic Load Balancing

Efficient Parallel Execution of Sequence Similarity Analysis Via Dynamic Load Balancing Efficient Parallel Execution of Sequence Similarity Analysis Via Dynamic Load Balancing James D. Jackson Philip J. Hatcher Department of Computer Science Kingsbury Hall University of New Hampshire Durham,

More information

Grid Computing Approach for Dynamic Load Balancing

Grid Computing Approach for Dynamic Load Balancing International Journal of Computer Sciences and Engineering Open Access Review Paper Volume-4, Issue-1 E-ISSN: 2347-2693 Grid Computing Approach for Dynamic Load Balancing Kapil B. Morey 1*, Sachin B. Jadhav

More information

FPGA area allocation for parallel C applications

FPGA area allocation for parallel C applications 1 FPGA area allocation for parallel C applications Vlad-Mihai Sima, Elena Moscu Panainte, Koen Bertels Computer Engineering Faculty of Electrical Engineering, Mathematics and Computer Science Delft University

More information

Information Processing, Big Data, and the Cloud

Information Processing, Big Data, and the Cloud Information Processing, Big Data, and the Cloud James Horey Computational Sciences & Engineering Oak Ridge National Laboratory Fall Creek Falls 2010 Information Processing Systems Model Parameters Data-intensive

More information

CHAPTER 1 INTRODUCTION

CHAPTER 1 INTRODUCTION 1 CHAPTER 1 INTRODUCTION 1.1 MOTIVATION OF RESEARCH Multicore processors have two or more execution cores (processors) implemented on a single chip having their own set of execution and architectural recourses.

More information

Parallel Programming at the Exascale Era: A Case Study on Parallelizing Matrix Assembly For Unstructured Meshes

Parallel Programming at the Exascale Era: A Case Study on Parallelizing Matrix Assembly For Unstructured Meshes Parallel Programming at the Exascale Era: A Case Study on Parallelizing Matrix Assembly For Unstructured Meshes Eric Petit, Loïc Thebault, Quang V. Dinh May 2014 EXA2CT Consortium 2 WPs Organization Proto-Applications

More information

Load balancing Static Load Balancing

Load balancing Static Load Balancing Chapter 7 Load Balancing and Termination Detection Load balancing used to distribute computations fairly across processors in order to obtain the highest possible execution speed. Termination detection

More information

Dynamic Multi-User Load Balancing in Distributed Systems

Dynamic Multi-User Load Balancing in Distributed Systems Dynamic Multi-User Load Balancing in Distributed Systems Satish Penmatsa and Anthony T. Chronopoulos The University of Texas at San Antonio Dept. of Computer Science One UTSA Circle, San Antonio, Texas

More information

An Empirical Study and Analysis of the Dynamic Load Balancing Techniques Used in Parallel Computing Systems

An Empirical Study and Analysis of the Dynamic Load Balancing Techniques Used in Parallel Computing Systems An Empirical Study and Analysis of the Dynamic Load Balancing Techniques Used in Parallel Computing Systems Ardhendu Mandal and Subhas Chandra Pal Department of Computer Science and Application, University

More information

Resource Allocation Schemes for Gang Scheduling

Resource Allocation Schemes for Gang Scheduling Resource Allocation Schemes for Gang Scheduling B. B. Zhou School of Computing and Mathematics Deakin University Geelong, VIC 327, Australia D. Walsh R. P. Brent Department of Computer Science Australian

More information

RESEARCH PAPER International Journal of Recent Trends in Engineering, Vol 1, No. 1, May 2009

RESEARCH PAPER International Journal of Recent Trends in Engineering, Vol 1, No. 1, May 2009 An Algorithm for Dynamic Load Balancing in Distributed Systems with Multiple Supporting Nodes by Exploiting the Interrupt Service Parveen Jain 1, Daya Gupta 2 1,2 Delhi College of Engineering, New Delhi,

More information

Tools Page 1 of 13 ON PROGRAM TRANSLATION. A priori, we have two translation mechanisms available:

Tools Page 1 of 13 ON PROGRAM TRANSLATION. A priori, we have two translation mechanisms available: Tools Page 1 of 13 ON PROGRAM TRANSLATION A priori, we have two translation mechanisms available: Interpretation Compilation On interpretation: Statements are translated one at a time and executed immediately.

More information

Load Balancing and Termination Detection

Load Balancing and Termination Detection Chapter 7 Load Balancing and Termination Detection 1 Load balancing used to distribute computations fairly across processors in order to obtain the highest possible execution speed. Termination detection

More information

DYNAMIC LOAD BALANCING SCHEME FOR ITERATIVE APPLICATIONS

DYNAMIC LOAD BALANCING SCHEME FOR ITERATIVE APPLICATIONS Journal homepage: www.mjret.in DYNAMIC LOAD BALANCING SCHEME FOR ITERATIVE APPLICATIONS ISSN:2348-6953 Rahul S. Wankhade, Darshan M. Marathe, Girish P. Nikam, Milind R. Jawale Department of Computer Engineering,

More information

Cellular Computing on a Linux Cluster

Cellular Computing on a Linux Cluster Cellular Computing on a Linux Cluster Alexei Agueev, Bernd Däne, Wolfgang Fengler TU Ilmenau, Department of Computer Architecture Topics 1. Cellular Computing 2. The Experiment 3. Experimental Results

More information

Task Scheduling in Hadoop

Task Scheduling in Hadoop Task Scheduling in Hadoop Sagar Mamdapure Munira Ginwala Neha Papat SAE,Kondhwa SAE,Kondhwa SAE,Kondhwa Abstract Hadoop is widely used for storing large datasets and processing them efficiently under distributed

More information

Operating System Multilevel Load Balancing

Operating System Multilevel Load Balancing Operating System Multilevel Load Balancing M. Corrêa, A. Zorzo Faculty of Informatics - PUCRS Porto Alegre, Brazil {mcorrea, zorzo}@inf.pucrs.br R. Scheer HP Brazil R&D Porto Alegre, Brazil roque.scheer@hp.com

More information

Design and Implementation of a Massively Parallel Version of DIRECT

Design and Implementation of a Massively Parallel Version of DIRECT Design and Implementation of a Massively Parallel Version of DIRECT JIAN HE Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA ALEX VERSTAK Department

More information

A SIMULATOR FOR LOAD BALANCING ANALYSIS IN DISTRIBUTED SYSTEMS

A SIMULATOR FOR LOAD BALANCING ANALYSIS IN DISTRIBUTED SYSTEMS Mihai Horia Zaharia, Florin Leon, Dan Galea (3) A Simulator for Load Balancing Analysis in Distributed Systems in A. Valachi, D. Galea, A. M. Florea, M. Craus (eds.) - Tehnologii informationale, Editura

More information

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) www.iasir.net

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) www.iasir.net International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) ISSN (Print): 2279-0047 ISSN (Online): 2279-0055 International

More information

Multifaceted Resource Management for Dealing with Heterogeneous Workloads in Virtualized Data Centers

Multifaceted Resource Management for Dealing with Heterogeneous Workloads in Virtualized Data Centers Multifaceted Resource Management for Dealing with Heterogeneous Workloads in Virtualized Data Centers Íñigo Goiri, J. Oriol Fitó, Ferran Julià, Ramón Nou, Josep Ll. Berral, Jordi Guitart and Jordi Torres

More information

Distributed Databases

Distributed Databases C H A P T E R19 Distributed Databases Practice Exercises 19.1 How might a distributed database designed for a local-area network differ from one designed for a wide-area network? Data transfer on a local-area

More information

A Survey Of Various Load Balancing Algorithms In Cloud Computing

A Survey Of Various Load Balancing Algorithms In Cloud Computing A Survey Of Various Load Balancing Algorithms In Cloud Computing Dharmesh Kashyap, Jaydeep Viradiya Abstract: Cloud computing is emerging as a new paradigm for manipulating, configuring, and accessing

More information

Load Balancing in Distributed Systems: A survey

Load Balancing in Distributed Systems: A survey Load Balancing in Distributed Systems: A survey Amit S Hanamakkanavar * and Prof. Vidya S.Handur # * (amitsh2190@gmail.com) Dept of Computer Science & Engg, B.V.B.College of Engg. & Tech, Hubli # (vidya_handur@bvb.edu)

More information

LOAD BALANCING TECHNIQUES

LOAD BALANCING TECHNIQUES LOAD BALANCING TECHNIQUES Two imporatnt characteristics of distributed systems are resource multiplicity and system transparency. In a distributed system we have a number of resources interconnected by

More information

APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM

APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM 152 APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM A1.1 INTRODUCTION PPATPAN is implemented in a test bed with five Linux system arranged in a multihop topology. The system is implemented

More information

Low Level. Software. Solution. extensions to handle. coarse grained task. compilers with. Data parallel. parallelism.

Low Level. Software. Solution. extensions to handle. coarse grained task. compilers with. Data parallel. parallelism. . 1 History 2 æ 1960s - First Organized Collections Problem Solving Environments for Parallel Scientiæc Computation Jack Dongarra Univ. of Tenn.èOak Ridge National Lab dongarra@cs.utk.edu æ 1970s - Advent

More information

Dynamic Load Balancing of SAMR Applications on Distributed Systems y

Dynamic Load Balancing of SAMR Applications on Distributed Systems y Dynamic Load Balancing of SAMR Applications on Distributed Systems y Zhiling Lan, Valerie E. Taylor Department of Electrical and Computer Engineering Northwestern University, Evanston, IL 60208 fzlan,

More information

SCHEDULING IN CLOUD COMPUTING

SCHEDULING IN CLOUD COMPUTING SCHEDULING IN CLOUD COMPUTING Lipsa Tripathy, Rasmi Ranjan Patra CSA,CPGS,OUAT,Bhubaneswar,Odisha Abstract Cloud computing is an emerging technology. It process huge amount of data so scheduling mechanism

More information

Scheduling and Load Balancing in the Parallel ROOT Facility (PROOF)

Scheduling and Load Balancing in the Parallel ROOT Facility (PROOF) Scheduling and Load Balancing in the Parallel ROOT Facility (PROOF) Gerardo Ganis CERN E-mail: Gerardo.Ganis@cern.ch CERN Institute of Informatics, University of Warsaw E-mail: Jan.Iwaszkiewicz@cern.ch

More information

INTRODUCTION TO COMPUTING CPIT 201 WEEK 13 LECTURE 3

INTRODUCTION TO COMPUTING CPIT 201 WEEK 13 LECTURE 3 INTRODUCTION TO COMPUTING CPIT 201 WEEK 13 LECTURE 3 OPERATING SYSTEM Process manager A second function of an operating system is process management, but before discussing this concept, we need to define

More information

Control 2004, University of Bath, UK, September 2004

Control 2004, University of Bath, UK, September 2004 Control, University of Bath, UK, September ID- IMPACT OF DEPENDENCY AND LOAD BALANCING IN MULTITHREADING REAL-TIME CONTROL ALGORITHMS M A Hossain and M O Tokhi Department of Computing, The University of

More information

Scientific Computing Programming with Parallel Objects

Scientific Computing Programming with Parallel Objects Scientific Computing Programming with Parallel Objects Esteban Meneses, PhD School of Computing, Costa Rica Institute of Technology Parallel Architectures Galore Personal Computing Embedded Computing Moore

More information

Chapter 18: Database System Architectures. Centralized Systems

Chapter 18: Database System Architectures. Centralized Systems Chapter 18: Database System Architectures! Centralized Systems! Client--Server Systems! Parallel Systems! Distributed Systems! Network Types 18.1 Centralized Systems! Run on a single computer system and

More information

Customized Dynamic Load Balancing for a Network of Workstations

Customized Dynamic Load Balancing for a Network of Workstations Customized Dynamic Load Balancing for a Network of Workstations Mohammed Javeed Zaki, Wei Li, Srinivasan Parthasarathy Computer Science Department, University of Rochester, Rochester NY 4627 zaki,wei,srini

More information

Load balancing in a heterogeneous computer system by self-organizing Kohonen network

Load balancing in a heterogeneous computer system by self-organizing Kohonen network Bull. Nov. Comp. Center, Comp. Science, 25 (2006), 69 74 c 2006 NCC Publisher Load balancing in a heterogeneous computer system by self-organizing Kohonen network Mikhail S. Tarkov, Yakov S. Bezrukov Abstract.

More information

Mesh Generation and Load Balancing

Mesh Generation and Load Balancing Mesh Generation and Load Balancing Stan Tomov Innovative Computing Laboratory Computer Science Department The University of Tennessee April 04, 2012 CS 594 04/04/2012 Slide 1 / 19 Outline Motivation Reliable

More information

Principles and characteristics of distributed systems and environments

Principles and characteristics of distributed systems and environments Principles and characteristics of distributed systems and environments Definition of a distributed system Distributed system is a collection of independent computers that appears to its users as a single

More information