PPD: Scheduling and Load Balancing 2




PPD: Scheduling and Load Balancing 2
Fernando Silva
Computer Science Department
Center for Research in Advanced Computing Systems (CRACS)
University of Porto, FCUP
http://www.dcc.fc.up.pt/~fds
(Some slides are based on slides from Kathy Yelick, www.cs.berkeley.edu/~yelick)

Goals of Scheduling

The concepts of load balancing and scheduling are very closely related and are often used interchangeably. The goal of a scheduling strategy is to maximize the performance of a parallel system by transferring tasks from busy processors to processors that are less busy, or even idle.

A scheduling strategy involves two important decisions:
- determine which tasks can be executed in parallel;
- determine where to execute the parallel tasks.

Decisions are normally taken either based on prior knowledge, or on information gathered during execution.

Difficulties in Devising a Scheduling Strategy

The design of a scheduling strategy depends on task properties:

Cost of tasks
- do all tasks have the same computation cost?
- if not, when are those costs known: before execution, when a task is created, or only when it terminates?

Dependencies between tasks
- can we execute the tasks in any order?
- if not, when are the dependencies between tasks known: before execution, when a task is created, or only when it terminates?

Locality
- is it important that some tasks execute on the same processor, to reduce communication costs?
- when do we know the communication requirements?

Costs of Tasks
Scheduling a set of tasks in the following cases:

Dependencies between Tasks
Scheduling a graph of tasks in the following cases:

Locality of Tasks
Scheduling a set of tasks in the following cases:

Scheduling Solutions

Static Scheduling - decisions are taken at compilation time.
- static analysis of programs is used to estimate the size of tasks; this information is hard to obtain and often incomplete.
- static mapping of the search tree onto the parallel architecture (finding an optimal mapping is NP-complete).
- build a directed graph with nodes representing the tasks and links representing data or communication dependencies; then determine an execution order for the tasks that minimizes execution time.

Dynamic Scheduling (or adaptive work sharing) - uses information about the computational state during execution to make decisions. Example: check the load of each processor and ensure a dynamic load balance among all processors.
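The static branch above (build a dependency graph, then fix an execution order) can be sketched with Python's standard graphlib; the task names and dependencies here are invented for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to the tasks it depends on.
deps = {
    "load": set(),
    "filter": {"load"},
    "stats": {"load"},
    "report": {"filter", "stats"},
}

ts = TopologicalSorter(deps)
ts.prepare()
schedule = []                      # batches of tasks with no unmet dependencies
while ts.is_active():
    ready = list(ts.get_ready())   # tasks in one batch could run in parallel
    schedule.append(sorted(ready))
    ts.done(*ready)

print(schedule)   # [['load'], ['filter', 'stats'], ['report']]
```

Each batch contains tasks whose dependencies are all satisfied, so a static scheduler is free to map the tasks of one batch onto different processors.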

Why Dynamic Scheduling?

A large class of problems have a solution space that corresponds to a search tree. These problems commonly:
- are computationally demanding
- allow different parallelization strategies
- require dynamic load balancing

Examples:
- enumerating the subgraphs of size k of a given graph
- pattern mining in social and biological networks
- the problem of placing n queens on a chessboard
- divide-and-conquer and branch-and-bound problems
- the Prolog execution tree

Search Tree
- the tree is dynamically built during execution
- there can be common sub-problems in different paths

Parallel Search

Consider:
- a tree
- depth-first search
- static scheduling: for each non-busy processor, assign the next new task.

We can and must do better than this!

Strategies for Dynamic Scheduling

Dynamic scheduling strategies can be:

Centralized
- a central scheduler holds all information about the system, namely the workload, and takes the decisions for work sharing and transfer.
- works quite well in shared memory, but only with a reduced number of processors.
- inefficient in distributed memory, as it requires much communication to keep the scheduling information up to date.

Distributed
- there is one scheduler per processor, taking autonomous decisions regarding work sharing.
- the goal of each scheduler is to keep its processor busy and to balance the workload in the system.

Centralized Scheduling

The scheduler has a single queue of tasks and either:
1. answers requests from processors (workers), or
2. the workers access the centralized work-queue autonomously, synchronizing their accesses with locks.
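A minimal sketch of the second variant, assuming a shared-memory setting: workers pull from one shared list, serializing access with a lock (the task pool and the "execution" step are invented for illustration):

```python
import threading

tasks = list(range(20))          # hypothetical pool of task ids
lock = threading.Lock()
results = []

def worker():
    while True:
        with lock:               # serialize access to the shared work-queue
            if not tasks:
                return           # queue empty: this worker is done
            task = tasks.pop(0)  # take the oldest task
        result = task * task     # "execute" the task outside the lock
        with lock:               # the results list is shared too
            results.append(result)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

print(len(results))   # 20: every task executed exactly once
```

Note that every queue access goes through the single lock, which is precisely the contention problem discussed below.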

Master-Worker

- Centralized strategy, normally for distributed memory systems.
- Work-distribution decisions are taken only by the master.
- Workers execute a loop:
  1. ask for work (a task)
  2. receive work (a task)
  3. execute the work (task)
  4. send back the results
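The worker loop above can be sketched with multiprocessing queues standing in for message passing; the poison-pill termination and all names are my illustration, not the course's code:

```python
from multiprocessing import Process, Queue

def worker(task_q, result_q):
    while True:
        task = task_q.get()          # ask for / receive work
        if task is None:             # poison pill: no more work
            return
        result_q.put(task * 2)       # execute the task, send the result back

def master(n_workers=3, n_tasks=10):
    task_q, result_q = Queue(), Queue()
    procs = [Process(target=worker, args=(task_q, result_q))
             for _ in range(n_workers)]
    for p in procs: p.start()
    for t in range(n_tasks):         # the master alone decides distribution
        task_q.put(t)
    for _ in procs:                  # one termination message per worker
        task_q.put(None)
    results = [result_q.get() for _ in range(n_tasks)]
    for p in procs: p.join()
    return results

if __name__ == "__main__":
    print(sorted(master()))          # [0, 2, 4, ..., 18]
```

Workers never talk to each other, only to the master, which mirrors the centralized nature of this strategy.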

Difficulties with a Centralized Strategy

Avoiding contention in accessing the shared work-queue. Suppose that tasks correspond to independent iterations of a loop, and let K be the size of a task (iterations per task):
- if K is large, contention in accessing the work-queue is reduced;
- if K is small, balancing the workload is simpler and more effective.

Ideas:
- Use larger tasks at the beginning of the execution (more iterations in one task) to avoid excessive overheads, and smaller task sizes near the end of the computation.
- On the i-th access to the work-queue, select a task of size K_i = R_i / p, where R_i is the work remaining at that point and p is the number of processors. The size K_i is thus a function of the remaining work, but it should also account for the variance of task costs:
  - the variance is estimated using historical information;
  - larger variance: smaller sizes;
  - reduced variance: larger sizes.
- Accommodate differences in processing capacity (heterogeneity).
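The K_i = R_i / p rule (guided self-scheduling) can be sketched as follows, under the reading that R_i is the remaining work; the chunk sizes start large and shrink toward the end of the computation:

```python
def guided_chunks(total_iters, p):
    """Chunk sizes for guided self-scheduling: each queue access
    takes 1/p of the iterations still remaining."""
    remaining, chunks = total_iters, []
    while remaining > 0:
        k = max(1, remaining // p)   # K_i = R_i / p, at least one iteration
        chunks.append(k)
        remaining -= k
    return chunks

print(guided_chunks(100, 4))
# [25, 18, 14, 10, 8, 6, 4, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1]
```

The early large chunks keep queue contention low, while the final single-iteration chunks let slow processors be balanced out, exactly the trade-off described above.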

Distributed Work-Queues (or Work-Pools)

- Extend naturally to distributed architectures, but also work for shared memory.
- Allow idle workers to take the initiative in searching for work, and busy workers to voluntarily share tasks.
- Useful when:
  - the architecture is distributed memory;
  - there is much synchronization, or there are many small tasks.

Distributed Scheduling

Work-sharing decisions can be:
- sender-initiated (or work distribution): busy workers search for less busy workers to give them tasks. Better for lighter workloads.
- receiver-initiated (or work stealing): workers that become idle, without work, search for a worker that has work and request, or steal, work from it. Better for heavier workloads in the system.

How Do We Select a Worker to Request Work From?

- Round-robin: target(k+1) = (target(k) + 1) mod procs.
- Random polling/stealing: when a worker needs work, it selects another worker at random and sends it a work request.
- Ask the last worker: locality might favor requesting work from the same worker from whom we last received work; if that worker has no work, apply one of the previous strategies.
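The first two selection policies might look like this; the function names and the self-skipping detail are my additions:

```python
import random

def round_robin_victim(state, me, procs):
    """target(k+1) = (target(k) + 1) mod procs, skipping ourselves."""
    state[me] = (state[me] + 1) % procs
    if state[me] == me:                   # never ask ourselves for work
        state[me] = (state[me] + 1) % procs
    return state[me]

def random_victim(me, procs, rng=random):
    """Random polling: a uniform draw among the other procs - 1 workers."""
    victim = rng.randrange(procs - 1)
    return victim if victim < me else victim + 1   # skip our own id

state = {0: 0}   # per-worker last target, here only for worker 0
print([round_robin_victim(state, 0, 4) for _ in range(5)])  # [1, 2, 3, 1, 2]
```

Random polling has the practical advantage of needing no per-worker state and spreading requests evenly even when many workers go idle at once.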

How to Divide the Work?

- How many tasks to distribute? Divide the queue in half?
- Which tasks?
  - topmost: tasks from the beginning of the queue (the oldest);
  - bottommost: tasks from the end of the queue (the most recent).
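One possible combination of these answers, stealing half of the victim's queue and taking the topmost (oldest) tasks, could be sketched as (the helper name is illustrative):

```python
from collections import deque

def steal_half_topmost(victim: deque) -> deque:
    """Remove and return the oldest half of the victim's work-queue."""
    n = len(victim) // 2
    return deque(victim.popleft() for _ in range(n))  # popleft = oldest first

q = deque(range(1, 8))          # victim holds tasks 1..7; 1 is the oldest
loot = steal_half_topmost(q)
print(list(loot), list(q))      # [1, 2, 3] [4, 5, 6, 7]
```

Stealing the oldest tasks is a common choice in tree searches: tasks near the root tend to represent larger subtrees, so the thief receives more work per steal.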

Sharing Strategies 1/2

Sender-initiated
- When a worker produces a new task, its scheduler searches for a free (idle) processor and assigns it the task for execution.
- If it does not succeed, it keeps the task in its own queue.

Receiver-initiated
- Tasks produced by workers are stored in their local work-queues.
- When a worker searches for a task to execute, its scheduler searches first in its local queue. If that is empty, it requests or steals work from another worker.

Sharing Strategies 2/2

Adaptive I
- Combines the sender-initiated and receiver-initiated strategies.
- Workers are classified as senders or receivers depending on the value of a Threshold parameter: a worker is a sender if the number of tasks in its work-queue is above the threshold, and a receiver otherwise.

Adaptive II
- Improves on the Adaptive I heuristic by introducing two parameters, Low and High, to classify the workers as senders, receivers or neutrals.
- A worker changes its behavior dynamically, as a function of the amount of work in its queue:
  - #tasks > High: it behaves as a sender
  - #tasks < Low: it behaves as a receiver
  - Low <= #tasks <= High: it is neutral
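The Adaptive II classification can be sketched directly; the parameter names follow the slide, while the function itself is an illustration rather than the course's code:

```python
def classify(n_tasks, low, high):
    """Adaptive II: classify a worker by its work-queue length."""
    if n_tasks > high:
        return "sender"      # enough surplus work to give some away
    if n_tasks < low:
        return "receiver"    # starved: go and look for work
    return "neutral"         # neither shares nor requests work

print([classify(n, low=2, high=6) for n in (0, 4, 9)])
# ['receiver', 'neutral', 'sender']
```

The neutral band between Low and High damps oscillation: a worker near the single threshold of Adaptive I could otherwise flip between sender and receiver on every task.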

Example of the Evolution of the Work-Queues over Time