## Transcription

1 Load Balancing

Backtracking, branch & bound and alpha-beta pruning: how do we assign work to idle processes without much communication? Additionally, for alpha-beta pruning: how do we implement the young-brothers-wait concept? How do we get the most urgent tasks done first? Another example: approximating the Mandelbrot set M.

Load Balancing 1 / 24

2 Load Balancing: The Mandelbrot Set

c ∈ C belongs to M iff the iteration z_0(c) = 0, z_{k+1}(c) = z_k(c)^2 + c remains bounded, i.e., iff |z_k(c)| ≤ 2 for all k. (One can show that |z_k(c)| ≤ 2 holds for all k whenever c ∈ M.)

If c ∉ M, then the number of iterations required to escape varies considerably: −2 ∈ M, but c = −2 − ε escapes after one iteration for every ε > 0; 1/4 ∈ M, and the number of iterations for c = 1/4 + ε grows when ε > 0 decreases.

How to balance the load?
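In code, the escape-time test behind the coloring reads as follows — a minimal Python sketch (the function name and the iteration cap `max_iter` are ours, not from the slides):

```python
def escape_time(c, max_iter=100):
    """Iterate z_{k+1} = z_k^2 + c from z_0 = 0 and return the number of
    iterations until |z_k| > 2, or max_iter if no escape was observed."""
    z = 0j
    for k in range(max_iter):
        if abs(z) > 2:      # once |z| exceeds 2, the iteration must diverge
            return k
        z = z * z + c
    return max_iter
```

The slide's examples come out directly: c = −2 − ε escapes after one iteration, while c = −2 and c = 1/4 stay bounded and hit the cap; c = 1/4 + ε escapes only after many iterations.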

3 The Mandelbrot Set: Load Balancing

We are given a rectangle R within C. Color the pixels in R \ M according to the number of iterations required to escape. Dynamic load balancing (idle processes receive pixels during run time) versus static load balancing (assign pixels to processes ahead of time): static load balancing is superior if done at random.

If we only have to display M within the rectangle R, use the fact that M is connected; here dynamic load balancing wins. We work with a master-slave architecture. Initially a single slave receives R. If the slave finds that all boundary pixels belong to M, then it claims that the rectangle is a subset of M. Otherwise the rectangle is returned to the master, who partitions it into two rectangles and assigns one slave to each new rectangle. This procedure continues until all slaves are busy.
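The subdivision logic can be sketched sequentially — a hypothetical sketch in which a local work queue stands in for the idle slaves and `in_M` stands in for the escape-time test (all names are ours):

```python
from collections import deque

def subdivide(rect, in_M, min_size=1):
    """rect = (x, y, w, h) in pixel coordinates. Return the rectangles
    whose boundary pixels all lie in M (claimed as subsets of M)."""
    claimed = []
    pending = deque([rect])          # stands in for rectangles handed to slaves
    while pending:
        x, y, w, h = pending.popleft()
        boundary = [(i, j) for i in range(x, x + w) for j in range(y, y + h)
                    if i in (x, x + w - 1) or j in (y, y + h - 1)]
        if all(in_M(p) for p in boundary):
            claimed.append((x, y, w, h))          # claim: rectangle subset of M
        elif w >= h and w > min_size:             # otherwise split the longer side
            pending.append((x, y, w // 2, h))
            pending.append((x + w // 2, y, w - w // 2, h))
        elif h > min_size:
            pending.append((x, y, w, h // 2))
            pending.append((x, y + h // 2, w, h - h // 2))
    return claimed
```

In the actual scheme the master would hand the two halves to idle slaves instead of pushing them onto a local queue.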

4 Static Load Balancing I

We have used static load balancing for all applications in parallel linear algebra, since we could predict the duration of tasks. When executing independent tasks of unknown duration: assign tasks at random.

7 Work Stealing

Three methods:
- Random polling: if a process runs out of work, it requests work from a randomly chosen process.
- Global round robin: whenever a process requests work, it accesses a global target variable and requests work from the specified process; the variable is then incremented by one modulo p.
- Asynchronous round robin: whenever a process requests work, it accesses its local target variable, requests work from the specified process and then increments its target variable by one modulo p, where p is the number of processes.

Which method to use?
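The three rules differ only in how the victim of a work request is chosen — a minimal sketch (function and state names are ours; processes are ids 0..p−1 and `state` carries the shared or per-process target variables):

```python
import random

def random_polling(me, p, state):
    victim = random.randrange(p - 1)        # uniform over the other p-1 processes
    return victim if victim < me else victim + 1

def global_round_robin(me, p, state):
    victim = state['global_target']         # one shared counter: a contention bottleneck
    state['global_target'] = (victim + 1) % p
    return victim

def asynchronous_round_robin(me, p, state):
    victim = state['local_target'][me]      # per-process counter, no contention
    state['local_target'][me] = (victim + 1) % p
    return victim
```

In a real implementation the round-robin variants would also skip the requester itself; the sketch omits that detail.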

8 Comparing the Three Methods

The model:
- Assume that the total work W is initially assigned to process 1.
- Whenever process i requests work from process j, then process j donates half of its current load and keeps the remaining half, provided it still has at least W/p units of work.

The question: how long does it take to achieve perfect parallelization, i.e., a load of O(W/p) for every process?

9 An Analysis of Random Polling

Assume that exactly j processes have work. Let V_j(p) be the expected number of requests such that each of the j processes receives at least one request. The load of a process is halved after it serves a request, so after V_j(p) requests the peak load is at least halved.

We show that V_j(p) = O(p ln j) and hence V_j(p) = O(p ln p). The peak load is bounded by O(W/p) after O(p ln² p) requests and hence the communication overhead is bounded by O(p ln² p). We have to determine V_j(p).

10 Determining V_j(p)

Assume that exactly i of the j processes with work have received requests. Let f_j(i, p) be the expected number of requests such that each of the remaining j − i processes receives a request. Our goal is to determine f_j(0, p).

f_j(i, p) = (1 − (j−i)/p) · (1 + f_j(i, p)) + ((j−i)/p) · (1 + f_j(i+1, p)) holds. Why? With probability (j−i)/p we make progress and one more process has received a request, whereas with probability 1 − (j−i)/p we fail. Hence ((j−i)/p) · f_j(i, p) = 1 + ((j−i)/p) · f_j(i+1, p) and thus f_j(i, p) = p/(j−i) + f_j(i+1, p). As a consequence

f_j(0, p) = Σ_{i=0}^{j−1} p/(j−i) = p · Σ_{i=1}^{j} 1/i

follows. Hence V_j(p) = O(p ln j) = O(p ln p) and after O(p ln² p) work requests the peak load is bounded by O(W/p). To achieve constant efficiency, W = Ω(p ln² p) will do.
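The closed form f_j(0, p) = p · (1/1 + 1/2 + … + 1/j) can be checked by a small simulation (a sketch with our own names; each request hits a uniformly random one of the p processes, and a request makes progress exactly when it hits one of the still-unserved processes):

```python
import random

def requests_until_all_served(j, p, trials=2000, seed=1):
    """Average number of uniform random requests until every one of the
    j designated processes has received at least one request."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        remaining, count = j, 0
        while remaining > 0:
            count += 1
            if rng.randrange(p) < remaining:   # request hits an unserved process
                remaining -= 1
        total += count
    return total / trials

p = j = 32
harmonic = sum(1 / i for i in range(1, j + 1))
print(requests_until_all_served(j, p), p * harmonic)  # should agree to within a few percent
```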

11 Global and Asynchronous Round Robin

When does global round robin achieve constant efficiency? At least a constant fraction of all processes have to receive work, otherwise the computing time is not bounded by O(W/p). Hence the global target variable has to be accessed for Ω(p) steps. To achieve constant efficiency, W/p = Ω(p), or equivalently W = Ω(p²), has to hold.

The performance of asynchronous round robin: in the best case p log₂ p requests suffice and W = Ω(p log₂ p) guarantees constant efficiency. In the worst case one can show that Θ(p) rounds are required; hence W = Ω(p²) guarantees constant efficiency.

The performance of asynchronous round robin is in general better than that of global round robin, since it avoids the bottleneck of a global target variable. However, to avoid its worst case, randomization, and therefore random polling, is preferable.

12 Random Polling for Backtracking

We assume the following model: an instance of backtracking generates a tree T of height h with N nodes and degree d. T has to be searched with p processes. Initially only process 1 is active; it inserts the root of T into its empty stack. Whenever an active process takes the topmost node v off its stack, it expands v and pushes all children of v onto the stack. (We use depth-first search to traverse T.) Thus each stack is composed of generations, with the current generation on top and the oldest generation at the bottom.

An idle process uses random polling to request work from a randomly chosen process q. Which tasks should the donating process q hand over to the requesting process?

13 Donating Work

If the donating process q has work, then an arbitrarily chosen request is served and q sends one half of its oldest generation. Whenever a node v is donated, it migrates together with one half of its current siblings. Thus the generation of v is halved in each donation step involving v, and v participates in at most log₂ d donation steps, where d is the degree of T. The communication overhead therefore consists of at most O(N log₂ d) node transfers plus all work requests.

Any parallel algorithm requires time Ω(max{N/p, h}) to search N nodes with p processes, assuming the tree has height h.
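The stack-of-generations discipline and the donation rule might look as follows — a sketch with our own class and method names (skipping generations with fewer than two nodes when donating is our simplification):

```python
class WorkStack:
    """Per-process DFS stack organized into generations: each generation
    holds the siblings pushed together by one node expansion."""

    def __init__(self):
        self.generations = []               # generations[0] is the oldest

    def expand(self, node, children):
        """DFS step: after expanding `node`, push its children as the
        new current (youngest) generation."""
        if children:
            self.generations.append(list(children))

    def pop(self):
        """Take the topmost node of the current generation, if any."""
        while self.generations and not self.generations[-1]:
            self.generations.pop()          # drop exhausted generations
        return self.generations[-1].pop() if self.generations else None

    def donate(self):
        """Serve a work request: hand over half of the oldest generation
        that still has at least two nodes."""
        for gen in self.generations:
            if len(gen) >= 2:
                half, gen[:] = gen[:len(gen) // 2], gen[len(gen) // 2:]
                return half
        return []
```

Each donation halves the generation it touches, so a node of a degree-d tree can take part in at most log₂ d donations, as the slide argues.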

14 Donating Half of the Oldest Generation: An Analysis I

If less than p/2 processes are idle in some given step, then we say that the step succeeds; otherwise it fails. How many successful steps are performed? If a process is busy, then it participates in expanding a node or in a node transfer; there are at most O(N + N log₂ d) such operations. Hence there are at most O((N/p) · log₂ d) successful steps.

We show that there are at most O((N/p + h) · log₂ d) failing steps. Fix an arbitrary node v of T. At any time there is a unique process which stores v or the lowest ancestor of v in its stack. We say that v receives a request if its process receives a request for work. After at most log₂ d · h requests for work, v belongs to the oldest generation of its process, and v is expanded after at most log₂ d further requests.

15 Donating Half of the Oldest Generation: An Analysis II

After how many failing steps does v receive log₂ d · (h + 1) requests? Determine the probability q that v receives a request in a failing step. In a failing step there are exactly k idle processes with k ≥ p/2. The probability that none of them requests the process of node v is

(1 − 1/p)^k ≤ (1 − 1/p)^{p/2} ≤ e^{−1/2},

and q ≥ 1 − e^{−1/2} ≥ 1/3 follows.

Random polling performs in each failing step a random trial with success probability at least 1/3, and the expected number of successes in t trials is at least t/3. The Chernoff bound: prob[ Σ_{i=1}^{t} X_i < (1 − β) · t/3 ] ≤ e^{−(t/3) · β²/2}. For β = 1/2: there are less than t/6 successes in t trials with probability at most e^{−t/24}.

Set t = 6 · log₂ d · (h + 1 + N/p). There are less than log₂ d · (h + 1) requests for v with probability at most e^{−Ω(log₂ d · (h + 1 + N/p))} = d^{−Ω(h + N/p)}.

16 Summary

There are at most O((N/p) · log₂ d) successful steps. v is expanded after at most (h + 1) · log₂ d requests. In 6 · log₂ d · (h + 1 + N/p) failing steps there are less than log₂ d · (h + 1) requests for v with probability at most d^{−Ω(h + N/p)}.

We have fixed one node v out of N nodes: with probability at least 1 − N · d^{−Ω(h + N/p)}, random polling runs for at most O(max{N/p, h} · log₂ d) steps. Its speedup is at least Ω(p / log₂ d).

17 Work Sharing

In randomized work sharing a busy process assigns work to a randomly chosen process.

If many processes are busy: the probability is high that a busy process receives even more work in randomized work sharing. Even if the work assignment can be rejected, the communication overhead increases. In random polling, however, a busy process profits, since it may get rid of part of its work with minimal communication.

If few processes are busy: the probability increases that a busy process finds an idle process to assign work to, and the performance of work sharing improves. Work stealing decreases peak loads as well, but not as efficiently as work sharing.

Normally only few processes are idle, and random polling wins. But work sharing is required if urgent tasks have to complete fast.

18 Extreme Work Sharing

The model: traverse a backtracking tree of height h. The time to expand a node is unknown. Every process has its own pool of nodes. Whenever a process needs more work, it picks a node from its pool and expands the node. The children of the expanded node are assigned to the pools of randomly chosen processes.

Assume p processes share the total compute time W. Let d = max_v D(v) / min_v D(v) be the ratio of the longest and shortest duration of a node, and assume min_v D(v) = 1. One can show: the total time is proportional to W/p + h · d.

Extreme work sharing guarantees a uniform load at the expense of a large communication overhead. Compare with the total time O(max{N/p, h} · log₂ d) of random polling. (There, d is the degree of the backtracking tree.)

19 How to Assign Work?

A model scenario:
- We observe p clients, where the ith client assigns its task at time i to one of p servers.
- All tasks require the same run time, but clients do not know of each other's actions.
- We are looking for an assignment strategy that keeps the maximum load of a server as low as possible.

Should we use randomized work sharing? Does it pay off to ask more than one server and to choose the server with minimal load? Are there even better methods?

20 The Performance of Randomized Work Sharing

What is the probability q_k that some server receives k tasks? Assume that k is not too large. Fix a server S. The probability that S receives exactly k tasks is

(p choose k) · (1/p)^k · (1 − 1/p)^{p−k} ≤ (p choose k) · (1/p)^k ≤ ((p/k) · (1/p))^{Θ(k)} = (1/k)^{Θ(k)}.

Thus q_k ≤ p · (1/k)^{Θ(k)}, since we have fixed one of p servers. Choose k such that p · (1/k)^{Θ(k)} becomes small: k = α · log₂ p / log₂ log₂ p for some constant α.

Our analysis is tight: the server with the heaviest burden has Θ(log₂ p / log₂ log₂ p) tasks with high probability. Compare this with the expected load 1.
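A quick balls-into-bins experiment illustrates the bound — a sketch with our own names and parameter choices (p tasks thrown onto p servers uniformly at random):

```python
import random

def max_load_single_choice(p, seed=0):
    """Throw p tasks onto p servers uniformly at random; return the
    maximum load of any server."""
    rng = random.Random(seed)
    load = [0] * p
    for _ in range(p):
        load[rng.randrange(p)] += 1
    return max(load)

p = 1 << 14
print(max_load_single_choice(p))   # far above the expected load 1
```

For p = 2^14 the maximum load typically lands well above the average of 1, in line with the Θ(log₂ p / log₂ log₂ p) analysis.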

21 Uniform Allocation

Let d be a natural number.
- Whenever a task is to be assigned, choose d servers at random, enquire their respective loads and assign the task to the server with the smallest load.
- If several servers have the same minimal load, then choose among them at random.

The maximum load of the uniform allocation scheme is bounded by log₂ log₂(p) / log₂(d) ± Θ(1). This statement holds with probability at least 1 − p^{−α} for some constant α > 0.

A significant reduction in comparison to the maximum load Θ(log₂ p / log₂ log₂ p) of randomized work sharing. A significant reduction already for d = 2: the two-choice paradigm.
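The two-choice effect is easy to observe experimentally — a sketch of uniform allocation with our own names:

```python
import random

def max_load_d_choice(p, d, seed=0):
    """Uniform allocation: each of p tasks probes d random servers and
    joins a least-loaded one among the probes (ties broken at random)."""
    rng = random.Random(seed)
    load = [0] * p
    for _ in range(p):
        probes = [rng.randrange(p) for _ in range(d)]
        least = min(load[s] for s in probes)
        best = rng.choice([s for s in probes if load[s] == least])
        load[best] += 1
    return max(load)

p = 1 << 14
print(max_load_d_choice(p, 1), max_load_d_choice(p, 2))
```

Running this, the d = 2 maximum is noticeably below the d = 1 maximum, reflecting the drop from Θ(log₂ p / log₂ log₂ p) to log₂ log₂(p) / log₂(d) ± Θ(1).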

22 Non-uniform Allocation

Partition the servers into d groups of the same size and assign the task according to the always-go-left rule: choose one server at random from each group and assign the task to the server with minimal load. If several servers have the same minimal load, then choose the leftmost server.

The maximum load is bounded by log₂ log₂(p) / (d · log₂(φ_d)) ± Θ(1) with φ_d < 2. This statement holds with probability at least 1 − p^{−α}, where α is a positive constant.

Again a significant improvement compared with the maximum load log₂ log₂(p) / log₂(d) ± Θ(1) for uniform allocation.
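A sketch of the always-go-left rule (names are ours; p is assumed divisible by d). Since the probes are listed from the leftmost group to the rightmost, taking the first minimum implements the tie-breaking rule:

```python
import random

def max_load_always_go_left(p, d, seed=0):
    """Non-uniform allocation: servers are split into d groups of p//d;
    probe one random server per group, go to the least loaded one, and
    on ties prefer the leftmost group."""
    rng = random.Random(seed)
    load = [0] * p
    group = p // d
    for _ in range(p):
        probes = [rng.randrange(i * group, (i + 1) * group) for i in range(d)]
        best = min(probes, key=lambda s: load[s])   # min() keeps the leftmost tie
        load[best] += 1
    return max(load)

print(max_load_always_go_left(1 << 14, 4))
```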

23 Why is Non-uniform Allocation So Much Better?

Non-uniform allocation seems nonsensical, since servers in left groups seem to get overloaded. But in subsequent attempts servers in right groups will win new tasks, and their load follows the load of the servers in left groups. The combination of the group approach with always-go-left therefore enforces on the one hand a larger load on left servers, with the consequence that right servers have to follow suit; the resulting preferential treatment of the right groups enforces a more uniform load distribution.

24 Summary

We have used static load balancing for all applications in parallel linear algebra, since we could predict the duration of tasks, and when assigning independent tasks of unknown duration at random.

Dynamic load balancing:
- Work stealing: idle processes ask for work. Random polling is a good strategy.
- Work sharing: a busy process assigns work to another process. Non-uniform allocation (partition into d groups, pick one process per group and apply the always-go-left rule) is the most successful strategy.

Work stealing is superior if there are relatively few idle processes, which is the typical scenario.


### Algebraic Computation Models. Algebraic Computation Models

Algebraic Computation Models Ζυγομήτρος Ευάγγελος ΜΠΛΑ 201118 Φεβρουάριος, 2013 Reasons for using algebraic models of computation The Turing machine model captures computations on bits. Many algorithms

### Integrating Benders decomposition within Constraint Programming

Integrating Benders decomposition within Constraint Programming Hadrien Cambazard, Narendra Jussien email: {hcambaza,jussien}@emn.fr École des Mines de Nantes, LINA CNRS FRE 2729 4 rue Alfred Kastler BP

CHAPTER 5 CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. In this chapter, we introduce

### The Importance of Software License Server Monitoring

The Importance of Software License Server Monitoring NetworkComputer How Shorter Running Jobs Can Help In Optimizing Your Resource Utilization White Paper Introduction Semiconductor companies typically

### Scheduling. Scheduling. Scheduling levels. Decision to switch the running process can take place under the following circumstances:

Scheduling Scheduling Scheduling levels Long-term scheduling. Selects which jobs shall be allowed to enter the system. Only used in batch systems. Medium-term scheduling. Performs swapin-swapout operations

### A Practical Scheme for Wireless Network Operation

A Practical Scheme for Wireless Network Operation Radhika Gowaikar, Amir F. Dana, Babak Hassibi, Michelle Effros June 21, 2004 Abstract In many problems in wireline networks, it is known that achieving

### 1/1 7/4 2/2 12/7 10/30 12/25

Binary Heaps A binary heap is dened to be a binary tree with a key in each node such that: 1. All leaves are on, at most, two adjacent levels. 2. All leaves on the lowest level occur to the left, and all

### Synchronization. Todd C. Mowry CS 740 November 24, 1998. Topics. Locks Barriers

Synchronization Todd C. Mowry CS 740 November 24, 1998 Topics Locks Barriers Types of Synchronization Mutual Exclusion Locks Event Synchronization Global or group-based (barriers) Point-to-point tightly

### Efficient and Robust Allocation Algorithms in Clouds under Memory Constraints

Efficient and Robust Allocation Algorithms in Clouds under Memory Constraints Olivier Beaumont,, Paul Renaud-Goud Inria & University of Bordeaux Bordeaux, France 9th Scheduling for Large Scale Systems

### Local Area Networks transmission system private speedy and secure kilometres shared transmission medium hardware & software

Local Area What s a LAN? A transmission system, usually private owned, very speedy and secure, covering a geographical area in the range of kilometres, comprising a shared transmission medium and a set

### Big, Fast Routers. Router Architecture

Big, Fast Routers Dave Andersen CMU CS 15-744 Data Plane Router Architecture How packets get forwarded Control Plane How routing protocols establish routes/etc. 1 Processing: Fast Path vs. Slow Path Optimize

### The Conference Call Search Problem in Wireless Networks

The Conference Call Search Problem in Wireless Networks Leah Epstein 1, and Asaf Levin 2 1 Department of Mathematics, University of Haifa, 31905 Haifa, Israel. lea@math.haifa.ac.il 2 Department of Statistics,

### Outline. NP-completeness. When is a problem easy? When is a problem hard? Today. Euler Circuits

Outline NP-completeness Examples of Easy vs. Hard problems Euler circuit vs. Hamiltonian circuit Shortest Path vs. Longest Path 2-pairs sum vs. general Subset Sum Reducing one problem to another Clique

### Performance of Dynamic Load Balancing Algorithms for Unstructured Mesh Calculations

Performance of Dynamic Load Balancing Algorithms for Unstructured Mesh Calculations Roy D. Williams, 1990 Presented by Chris Eldred Outline Summary Finite Element Solver Load Balancing Results Types Conclusions

### Decentralized Utility-based Sensor Network Design

Decentralized Utility-based Sensor Network Design Narayanan Sadagopan and Bhaskar Krishnamachari University of Southern California, Los Angeles, CA 90089-0781, USA narayans@cs.usc.edu, bkrishna@usc.edu

### Graph Algorithms. Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar

Graph Algorithms Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar To accompany the text Introduction to Parallel Computing, Addison Wesley, 3. Topic Overview Definitions and Representation Minimum

### A Dynamic Approach for Load Balancing using Clusters

A Dynamic Approach for Load Balancing using Clusters ShwetaRajani 1, RenuBagoria 2 Computer Science 1,2,Global Technical Campus, Jaipur 1,JaganNath University, Jaipur 2 Email: shwetarajani28@yahoo.in 1

### Comparison on Different Load Balancing Algorithms of Peer to Peer Networks

Comparison on Different Load Balancing Algorithms of Peer to Peer Networks K.N.Sirisha *, S.Bhagya Rekha M.Tech,Software Engineering Noble college of Engineering & Technology for Women Web Technologies

### PPD: Scheduling and Load Balancing 2

PPD: Scheduling and Load Balancing 2 Fernando Silva Computer Science Department Center for Research in Advanced Computing Systems (CRACS) University of Porto, FCUP http://www.dcc.fc.up.pt/~fds 2 (Some

### IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES

www.ijecs.in International Journal Of Engineering And Computer Science ISSN:2319-7242 Volume 2 Issue 6 June, 2013 Page No. 1914-1919 IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES Ms.

### CS 267: Applications of Parallel Computers. Load Balancing

CS 267: Applications of Parallel Computers Load Balancing Kathy Yelick www.cs.berkeley.edu/~yelick/cs267_sp07 04/18/2007 CS267 Lecture 24 1 Outline Motivation for Load Balancing Recall graph partitioning

### Improved Algorithms for Data Migration

Improved Algorithms for Data Migration Samir Khuller 1, Yoo-Ah Kim, and Azarakhsh Malekian 1 Department of Computer Science, University of Maryland, College Park, MD 20742. Research supported by NSF Award

### Embedded Systems 20 BF - ES

Embedded Systems 20-1 - Multiprocessor Scheduling REVIEW Given n equivalent processors, a finite set M of aperiodic/periodic tasks find a schedule such that each task always meets its deadline. Assumptions:

### Data Structure [Question Bank]

Unit I (Analysis of Algorithms) 1. What are algorithms and how they are useful? 2. Describe the factor on best algorithms depends on? 3. Differentiate: Correct & Incorrect Algorithms? 4. Write short note:

### Energy Efficient MapReduce

Energy Efficient MapReduce Motivation: Energy consumption is an important aspect of datacenters efficiency, the total power consumption in the united states has doubled from 2000 to 2005, representing