Load Balancing

Backtracking, branch & bound, and alpha-beta pruning: how do we assign work to idle processes without much communication? For alpha-beta pruning there is the additional question of implementing the young-brothers-wait concept. And how do we get the most urgent tasks done first?

Another example: approximating the Mandelbrot set M.

Load Balancing: The Mandelbrot Set

c ∈ C belongs to M iff the iteration z_0(c) = 0, z_{k+1}(c) = z_k(c)² + c remains bounded, i.e., iff |z_k(c)| ≤ 2 for all k. (One can show that |z_k(c)| ≤ 2 holds for all k whenever c ∈ M.)

If c ∉ M, then the number of iterations required to escape varies considerably:
- −2 ∈ M, but c = −2 − ε escapes after one iteration for every ε > 0,
- 1/4 ∈ M, and the number of iterations for c = 1/4 + ε grows as ε > 0 decreases.

How do we balance the load?
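The varying escape counts can be made concrete with a small sketch. The names and the iteration cutoff `max_iter` are our own assumptions; membership in M can only be approximated by such a cutoff.

```python
# Escape-time iteration for the Mandelbrot set: z_0 = 0,
# z_{k+1} = z_k^2 + c; count iterations until |z_k| > 2.
def escape_time(c, max_iter=100):
    z = 0
    for k in range(max_iter):
        if abs(z) > 2:
            return k          # c escaped after k iterations
        z = z * z + c
    return max_iter           # still bounded: c presumably lies in M

print(escape_time(-2))            # -2 lies in M: never escapes
print(escape_time(-2.001))        # just outside M: escapes immediately
print(escape_time(0.26, 1000))    # near 1/4: escapes, but only slowly
```

The last two calls illustrate exactly the imbalance discussed above: neighboring pixels can differ by orders of magnitude in work.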

The Mandelbrot Set: Load Balancing

We are given a rectangle R within C. Color the pixels in R \ M according to the number of iterations required to escape.

Dynamic load balancing (idle processes receive pixels at run time) versus static load balancing (pixels are assigned to processes ahead of time). Static load balancing is superior if done at random.

If we only have to display M within the rectangle R, we can use the fact that M is connected; here dynamic load balancing wins. We work with a master-slave architecture:
- Initially a single slave receives R.
- If the slave finds that all boundary pixels belong to M, then it claims that the rectangle is a subset of M.
- Otherwise the rectangle is returned to the master, who partitions it into two rectangles and assigns one slave to each new rectangle.
- This procedure continues until all slaves are busy.

Static Load Balancing I

We have used static load balancing for all applications in parallel linear algebra, since we could predict the duration of tasks.

When executing independent tasks of unknown duration: assign tasks at random.

Static Load Balancing II

The static load balancing problem in the Mandelbrot example was easy, since the tasks are independent. In general we are given a task graph G = (T, E): the nodes correspond to the tasks, and there is a directed edge (s, t) from task s to task t whenever task s has to complete before task t can be dealt with. We assume an ideal situation in which we know the duration w_t of each task t.

Partition T into p disjoint subsets T_1, ..., T_p such that
- processes carry essentially the same load, i.e., Σ_{t ∈ T_i} w_t ≈ (Σ_{t ∈ T} w_t)/p, and
- processes communicate as little as possible, i.e., the number of edges connecting tasks in different classes of the partition is minimal.

The static load balancing problem is NP-complete and hence computationally hard. Use heuristics (Kernighan-Lin, simulated annealing). Also, the assumption of known durations is often unrealistic.
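As an illustration of the load objective alone (ignoring edge cuts), here is a minimal greedy sketch. It is the classic longest-processing-time rule, not the Kernighan-Lin heuristic mentioned above, and all names are our own.

```python
# Greedy load balancing of tasks with known durations onto p classes:
# repeatedly hand the heaviest remaining task to the least-loaded class.
def greedy_partition(durations, p):
    loads = [0.0] * p
    parts = [[] for _ in range(p)]
    # consider tasks in order of decreasing duration
    for task, w in sorted(enumerate(durations), key=lambda x: -x[1]):
        i = min(range(p), key=lambda j: loads[j])  # least-loaded class
        loads[i] += w
        parts[i].append(task)
    return parts, loads

parts, loads = greedy_partition([7, 5, 4, 3, 3, 2], 2)
print(loads)  # both classes end up at 12.0, i.e. 24/2
```

The rule guarantees loads close to (Σ w_t)/p for independent tasks, but says nothing about the communication term of the partition problem.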

Dynamic Load Balancing

In centralized load balancing there is a central priority queue of tasks, administered by one or more masters that assign tasks to slaves (cp. APHID). This approach normally assumes a relatively small number of processes. Rules of thumb: assign larger tasks at the beginning and smaller tasks near the end to even out finish times, and take different processor speeds into account.

In distributed dynamic load balancing one distinguishes methods based on work stealing or task pulling (idle processes request work) and work sharing or task pushing (overworked processes assign work).

We concentrate on distributed dynamic load balancing.

Work Stealing

Three methods:
- Random Polling: if a process runs out of work, it requests work from a randomly chosen process.
- Global Round Robin: whenever a process requests work, it accesses a global target variable and requests work from the specified process.
- Asynchronous Round Robin: whenever a process requests work, it accesses its local target variable, requests work from the specified process, and then increments its target variable by one modulo p, where p is the number of processes.

Which method should we use?
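The three victim-selection rules can be sketched as follows. This is a single-module sketch with plain counters standing in for the per-process state, and it assumes (as in the standard formulation) that the global target variable is advanced after each access; all names are our own.

```python
import random

# p is the number of processes, me the id of the requesting process.
def random_polling(me, p):
    v = random.randrange(p - 1)        # uniform among the other processes
    return v if v < me else v + 1

global_target = 0                      # one shared counter for all processes
def global_round_robin(me, p):
    global global_target
    v = global_target
    global_target = (global_target + 1) % p   # advance the shared counter
    return v

local_target = {}                      # one counter per process
def asynchronous_round_robin(me, p):
    v = local_target.get(me, 0)
    local_target[me] = (v + 1) % p     # advance only the local counter
    return v

print([global_round_robin(0, 4) for _ in range(5)])  # [0, 1, 2, 3, 0]
```

The shared counter of Global Round Robin is exactly the contention bottleneck analyzed on the following slides.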

Comparing the Three Methods

The model:
- Assume that the total work W is initially assigned to process 1.
- Whenever process i requests work from process j, then process j donates half of its current load and keeps the remaining half, provided it still has at least W/p units of work.

The question: how long does it take to achieve perfect parallelization, i.e., a load of O(W/p) for every process?

An Analysis of Random Polling

Assume that exactly j processes have work. Let V_j(p) be the expected number of requests such that each of the j processes receives at least one request. The load of a process is halved after it serves a request, so after V_j(p) requests the peak load is at least halved.

We show that V_j(p) = O(p ln j) and hence V_j(p) = O(p ln p). The peak load is bounded by O(W/p) after O(p ln²p) requests, and hence the communication overhead is bounded by O(p ln²p).

We have to determine V_j(p).

Determining V_j(p)

Assume that exactly i of the j processes with work have received requests. Let f_j(i, p) be the expected number of requests such that each of the remaining j − i processes receives a request. Our goal is to determine f_j(0, p).

f_j(i, p) = (1 − (j−i)/p) · (1 + f_j(i, p)) + ((j−i)/p) · (1 + f_j(i+1, p))

holds. Why? With probability (j−i)/p we make progress and one more process has received a request, whereas with probability 1 − (j−i)/p we fail. Hence

((j−i)/p) · f_j(i, p) = 1 + ((j−i)/p) · f_j(i+1, p),  and thus  f_j(i, p) = p/(j−i) + f_j(i+1, p).

As a consequence,

f_j(0, p) = p/j + p/(j−1) + ··· + p/1 = p · Σ_{i=1}^{j} 1/i

follows. Hence V_j(p) = O(p ln j) = O(p ln p), and after O(p ln²p) work requests the peak load is bounded by O(W/p). To achieve constant efficiency, W = Ω(p ln²p) will do.
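A quick numerical check of the closed form (function name is our own):

```python
import math

# f_j(0, p) = p * (1/1 + 1/2 + ... + 1/j), i.e. p times the j-th
# harmonic number H_j, which is O(p ln j) since H_j = ln j + O(1).
def expected_requests(j, p):
    return p * sum(1.0 / i for i in range(1, j + 1))

p = 64
print(expected_requests(p, p))   # p * H_p
print(p * math.log(p))           # the O(p ln p) estimate it tracks
```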

Global and Asynchronous Round Robin

When does global Round Robin achieve constant efficiency?
- At least a constant fraction of all processes have to receive work, since otherwise the computing time is not bounded by O(W/p). Hence the global target variable has to be accessed for Ω(p) steps.
- To achieve constant efficiency, W/p = Ω(p), or equivalently W = Ω(p²), has to hold.

The performance of asynchronous Round Robin:
- The best case: p · log₂(p) requests suffice, and W = Ω(p log₂ p) guarantees constant efficiency.
- The worst case: one can show that Θ(p) rounds are required. Hence W = Ω(p²) guarantees constant efficiency.

The performance of asynchronous Round Robin is in general better than that of global Round Robin, since it avoids the bottleneck of a global target variable. However, to avoid its worst case, randomization, and therefore random polling, is preferable.

Random Polling for Backtracking

We assume the following model: an instance of backtracking generates a tree T of height h with N nodes and degree d. T has to be searched with p processes.
- Initially only process 1 is active; it inserts the root of T into its empty stack.
- Whenever an active process takes the topmost node v off its stack, it expands v and pushes all children of v onto the stack. (We use depth-first search to traverse T.) Thus each stack is composed of generations, with the current generation on top and the oldest generation at the bottom.
- An idle process uses random polling to request work from a randomly chosen process q.

Which tasks should a donating process hand over to the requesting process?

Donating Work

If the donating process q has work, then an arbitrarily chosen request is served and q sends one half of its oldest generation.
- Whenever a node v is donated, it migrates together with one half of its current siblings: the generation of v is halved in each donation step involving v.
- Hence v participates in at most log₂ d donation steps, where d is the degree of T.

Thus the communication overhead consists of at most O(N log₂ d) node transfers plus all work requests.

Any parallel algorithm requires time Ω(max{N/p, h}) to search N nodes of a tree of height h with p processes.
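A minimal sketch of a stack organized as generations, with donation of half the oldest generation. Class and method names are our own, and node contents are opaque to the scheme.

```python
# A per-process work stack: a list of generations (sibling lists),
# oldest generation first, newest (current) generation last.
class WorkStack:
    def __init__(self):
        self.generations = []

    def push_children(self, children):
        self.generations.append(list(children))   # one new generation

    def pop(self):                                # work from the newest generation
        while self.generations and not self.generations[-1]:
            self.generations.pop()
        return self.generations[-1].pop() if self.generations else None

    def donate(self):                             # give away half of the oldest
        for gen in self.generations:
            if len(gen) >= 2:
                half, gen[:] = gen[:len(gen) // 2], gen[len(gen) // 2:]
                return half
        return []

s = WorkStack()
s.push_children(["a", "b", "c", "d"])   # oldest generation
s.push_children(["e", "f"])             # current generation
print(s.donate())                       # ['a', 'b']
print(s.pop())                          # 'f'
```

Each donation halves the generation it touches, which is exactly why a node takes part in at most log₂ d donation steps.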

Donating Half of the Oldest Generation: An Analysis I

If less than p/2 processes are idle in a given step, then we say that the step succeeds; otherwise it fails.

How many successful steps are performed? If a process is busy, then it participates in expanding a node or in a node transfer, and there are at most O(N + N log₂ d) such operations. Hence there are at most O((N/p) · log₂ d) successful steps.

We show that there are at most O((N/p + h) · log₂ d) failing steps. Fix an arbitrary node v of T. At any time there is a unique process which stores v, or the lowest ancestor of v, in its stack. We say that v receives a request if its process receives a request for work.
- After at most log₂(d) · h requests for work, v belongs to the oldest generation of its process.
- v is expanded after at most log₂ d further requests.

Donating Half of the Oldest Generation: An Analysis II

After how many failing steps does v receive log₂(d) · (h + 1) requests? Determine the probability q that v receives a request in a failing step.
- In a failing step there are exactly k idle processes with k ≥ p/2. The probability that none of them requests node v is (1 − 1/p)^k ≤ (1 − 1/p)^{p/2} ≤ e^{−1/2}, and q ≥ 1 − e^{−1/2} ≥ 1/3 follows.
- Random polling thus performs in each failing step a random trial with success probability at least 1/3, and the expected number of successes in t trials is at least t/3.
- The Chernoff bound: prob[ Σ_{i=1}^{t} X_i < (1 − β) · t/3 ] ≤ e^{−(t/3)·β²/2}. For β = 1/2: there are fewer than t/6 successes in t trials with probability at most e^{−t/24}.

Set t = 6 · log₂(d) · (h + 1 + N/p). Then there are fewer than log₂(d) · (h + 1) requests for v with probability at most e^{−Ω(log₂(d)·(h+1+N/p))} = d^{−Ω(h+N/p)}.

Summary

- There are at most O((N/p) · log₂ d) successful steps.
- v is expanded after at most (h + 1) · log₂ d requests.
- In 6 · log₂(d) · (h + 1 + N/p) failing steps there are fewer than log₂(d) · (h + 1) requests for v with probability at most d^{−Ω(h+N/p)}.

We have fixed one node v out of N nodes: with probability at least 1 − N · d^{−Ω(h+N/p)}, random polling runs for at most O(max{N/p, h} · log₂ d) steps. Its speedup is at least Ω(p / log₂ d).

Work Sharing

In randomized work sharing a busy process assigns work to a randomly chosen process.

If many processes are busy:
- The probability is high that a busy process receives even more work in randomized work sharing. Even if a work assignment can be rejected, the communication overhead increases.
- Under random polling, however, a busy process profits, since it may get rid of part of its work with minimal communication.

If few processes are busy:
- The probability increases that a busy process finds an idle process to assign work to: the performance of work sharing improves.
- Work stealing decreases peak loads as well, but not as efficiently as work sharing.

Normally only few processes are idle, and random polling wins. But work sharing is required if urgent tasks have to complete fast.

Extreme Work Sharing

The model: traverse a backtracking tree of height h; the time to expand a node is unknown.
- Every process has its own pool of nodes. Whenever a process needs more work, it picks a node from its pool and expands it.
- Children of the expanded node are assigned to the pools of randomly chosen processes.

Assume p processes share the total compute time W. Let d = max_v D(v) / min_v D(v) be the ratio of the longest and shortest duration of a node, and assume min_v D(v) = 1. One can show: the total time is proportional to W/p + h · d.

Extreme work sharing guarantees a uniform load at the expense of a large communication overhead. Compare with the total time O(max{N/p, h} · log₂ d) of random polling. (There, d is the degree of the backtracking tree.)
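The scattering of children into random pools can be sketched as follows; function and pool layout are our own illustration.

```python
import random

# Extreme work sharing: every child of an expanded node is pushed into
# the pool of a randomly chosen process, keeping the load uniform at
# the price of one communication per node.
def scatter_children(children, pools, rng):
    for child in children:
        pools[rng.randrange(len(pools))].append(child)

rng = random.Random(0)
pools = [[] for _ in range(4)]
pools[0].append("root")
node = pools[0].pop()                    # a process picks work from its pool
scatter_children([node + ".0", node + ".1"], pools, rng)
print(sum(len(pool) for pool in pools))  # 2: both children landed in some pool
```

Since every one of the N nodes migrates once, the communication volume is Θ(N), in contrast to the O(N log₂ d) transfers of random polling, which only move nodes on demand.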

How to Assign Work?

A model scenario:
- We observe p clients, where the i-th client assigns its task at time i to one of p servers.
- All tasks require the same run time, but clients do not know of each other's actions.
- We are looking for an assignment strategy that keeps the maximum load of a server as low as possible.

Should we use randomized work sharing? Does it pay off to ask more than one server and choose the server with minimal load? Are there even better methods?

The Performance of Randomized Work Sharing

What is the probability q_k that some server receives k tasks? Assume that k is not too large.

Fix a server S. The probability that S receives exactly k tasks is

(p choose k) · (1/p)^k · (1 − 1/p)^{p−k} ≤ (p choose k) · (1/p)^k ≤ (p/k)^{Θ(k)} · (1/p)^{Θ(k)} = (1/k)^{Θ(k)}.

Thus q_k ≤ p · k^{−Θ(k)}, since we have fixed one of p servers. Choose k such that p · k^{−Θ(k)} ≈ 1, i.e., k = α · log₂(p) / log₂ log₂(p) for some constant α.

Our analysis is tight: the server with the heaviest burden has Θ(log₂(p) / log₂ log₂(p)) tasks with high probability. Compare this with the expected load of 1.
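A small simulation illustrates the gap between the maximum and the average load; names and the seed are our own.

```python
import random

# Throw p tasks into p servers uniformly at random (the randomized
# work sharing scenario) and return the load vector. The maximum load
# is Theta(log p / log log p) with high probability, while the
# average load is exactly 1.
def random_assignment(p, rng):
    load = [0] * p
    for _ in range(p):
        load[rng.randrange(p)] += 1
    return load

loads = random_assignment(1024, random.Random(0))
print(max(loads), sum(loads) / len(loads))
```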

Uniform Allocation

Let d be a natural number.
- Whenever a task is to be assigned, choose d servers at random, inquire about their respective loads, and assign the task to the server with the smallest load.
- If several servers have the same minimal load, then choose one of them at random.

The maximum load of the uniform allocation scheme is bounded by log₂ log₂(p) / log₂(d) ± Θ(1). This statement holds with probability at least 1 − p^{−α} for some constant α > 0.

A significant reduction compared with the maximum load Θ(log₂(p) / log₂ log₂(p)) of randomized work sharing. The reduction is significant already for d = 2: the two-choice paradigm.
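A sketch of uniform allocation with d choices, simulated for d = 1 and d = 2. Ties are broken by probe order here, a simplification of the random tie-break above; names and seeds are our own.

```python
import random

# d-choice allocation: probe d random servers per task and assign the
# task to the least loaded probe. d = 1 is plain randomized sharing;
# d = 2 is the two-choice paradigm.
def d_choice_loads(p, d, rng):
    load = [0] * p
    for _ in range(p):
        probes = [rng.randrange(p) for _ in range(d)]
        best = min(probes, key=lambda s: load[s])  # least loaded probe
        load[best] += 1
    return load

rng = random.Random(0)
p = 4096
m1 = max(d_choice_loads(p, 1, rng))   # Theta(log p / log log p)
m2 = max(d_choice_loads(p, 2, rng))   # Theta(log log p): much smaller
print(m1, m2)
```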

Non-uniform Allocation

Partition the servers into d groups of the same size and assign each task according to the always-go-left rule:
- Choose one server at random from each group and assign the task to the server with minimal load.
- If several servers have the same minimal load, then choose the leftmost server.

The maximum load is bounded by log₂ log₂(p) / (d · log₂(φ_d)) ± Θ(1) with φ_d ≤ 2. This statement holds with probability at least 1 − p^{−α}, where α is a positive constant.

Again a significant improvement compared with the maximum load log₂ log₂(p) / log₂(d) ± Θ(1) of uniform allocation.
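The always-go-left rule can be sketched as follows; names are our own, and the groups are laid out left to right as contiguous index ranges.

```python
import random

# Non-uniform allocation: p servers in d equal groups, one probe per
# group, ties on minimal load broken toward the leftmost probe.
def always_go_left_loads(p, d, rng):
    group_size = p // d
    load = [0] * p
    for _ in range(p):
        # one random server from each group, in left-to-right order
        probes = [g * group_size + rng.randrange(group_size) for g in range(d)]
        best = min(probes, key=lambda s: (load[s], s))  # leftmost wins ties
        load[best] += 1
    return load

rng = random.Random(0)
print(max(always_go_left_loads(4096, 2, rng)))
```

The tie-break key `(load[s], s)` implements always-go-left: among probes of equal minimal load, the smallest server index, and hence the leftmost group, wins.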

Why is Non-uniform Allocation So Much Better?

Non-uniform allocation seems nonsensical, since the servers in the left groups appear to get overloaded. But in subsequent assignments the servers in the right groups win new tasks, and their load follows the load of the servers in the left groups.

The combination of the group approach with always-go-left therefore enforces, on the one hand, a larger load on the left servers, with the consequence that the right servers have to follow suit. This coupling between the groups enforces a more uniform load distribution.

Summary

We have used static load balancing for all applications in parallel linear algebra, since we could predict the duration of tasks, and when assigning independent tasks of unknown duration at random.

Dynamic load balancing:
- Work stealing: idle processes ask for work. Random polling is a good strategy.
- Work sharing: a busy process assigns work to another process. Non-uniform allocation (partition into d groups, pick one process per group, and apply the always-go-left rule) is the most successful strategy.

Work stealing is superior if there are relatively few idle processes, the typical scenario.