Research on Job Scheduling Algorithm in Hadoop




Journal of Computational Information Systems 7: 6 () 5769-5775. Available at http://www.jofcis.com

Yang XIA, Lei WANG, Qiang ZHAO, Gongxuan ZHANG

School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, China
School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing, China

Abstract

Building on a detailed study of the Fair Scheduling strategy in Hadoop clusters, this paper defines the Node Health Degree by constructing a function that relates node load to job fail rate, and proposes a job scheduling algorithm based on it. Nodes are grouped into three categories according to Node Health Degree so that jobs with matching load demands can be assigned to them, keeping the resource load balanced. Simulation results comparing the algorithm with the FIFO and Fair scheduling algorithms show that it reduces the job fail rate and improves cluster throughput.

Keywords: Hadoop; MapReduce; Node Health Degree; Fair Scheduling Strategy

1. Introduction

Hadoop[1] is a large distributed software framework[2] and an open-source project implementing the core mechanisms of Google's GFS[3] and MapReduce[4]; most cloud computing platforms[5] use Hadoop's MapReduce parallel programming model to develop programs. The job scheduling algorithm is one of the core technologies of the Hadoop platform; its main function is to control the order of job execution and to assign users' jobs to available resources. Hadoop platforms are increasingly shared among multiple users running a growing variety of applications, and a mixture of long batch jobs and short interactive jobs accessing the same data set has become a typical workload. Therefore, in a homogeneous environment, the main purpose of a multi-user scheduling algorithm is to balance the efficiency of the Hadoop cluster against the fairness of resource allocation among jobs. Many scheduling strategies for MapReduce clusters have been proposed, for example the FIFO scheduler[6], HOD scheduler[7], LATE scheduler[8], Capacity scheduler[9] and Fair scheduler[10]. Every scheduling algorithm has its own applicability but also its limitations; for example, the FIFO scheduler does not take the balance of resource allocation between long and short jobs into account, whereas the Fair scheduler balances resource allocation by setting minimum shares and fair shares for jobs and job pools. This paper puts forward a job scheduling algorithm based on Node Health Degree (named FS-HD). The algorithm improves the Fair scheduler strategy: it defines the Node Health Degree according to the relationship between node load and task fail rate and allocates jobs with corresponding load demands to the nodes of the cluster. It overcomes the problems of the Fair scheduler strategy by reducing the fail rate, increasing throughput and improving resource load balance.

Corresponding author. Email address: xia-y@63.com (Yang XIA).

2. Fair Scheduler Strategy

Fair scheduling is a method of assigning resources to jobs such that all jobs get, on average, an equal share of resources over time. If a single job is running, it uses the entire cluster. When other jobs are submitted, free task slots are assigned to the new jobs, so that each job gets roughly the same amount of CPU time. Unlike Hadoop's default scheduler, which forms a single queue of jobs, the Fair scheduler lets short jobs complete within a reasonable time while not starving long jobs. The scheduler actually organizes jobs into resource pools and shares resources fairly between these pools; by default, there is a separate pool for each user. Each node's TaskTracker has a limit on the number of concurrently running Map and Reduce tasks. The Fair scheduler can additionally limit the number of concurrently running jobs from each user and from each pool; running-job limits are implemented by marking jobs as not runnable if too many jobs have been submitted by the same user or pool. When there are idle task slots, the scheduler proceeds in two steps: first the idle task slots are allocated between the job pools, and then between the jobs within each pool.

2.1. Minimum Shares and Fair Shares

Minimum Shares are set by the users and always ensure that a job gets enough resources. Pools can be given weights to achieve unequal sharing of the cluster. The Fair Share of each job pool is computed by dividing the total cluster resources among all running job pools according to their weights. By default, the weight of a job pool is based on its priority, and jobs are ordered first by priority; each additional priority level doubles the weight. Usually the Minimum Share of a job pool is less than its Fair Share.

2.2. Task Slot Scheduling Algorithm between Pools

The fair scheduling algorithm between pools works as follows (a code sketch follows this list):

1) If there are job pools that have not reached their Minimum Shares, let c be the number of task slots already assigned to a pool and m its Minimum Share; task slots are allocated first to the job pool with the minimum value of c/m.
2) If every pool has received its Minimum Share, let w be the weight of a job pool; task slots are then allocated first to the job pool with the minimum value of c/w.
3) The Fair scheduler maintains two time limits for each job pool: T_min for Minimum Shares and T_fair for Fair Shares. When a job pool does not reach its Minimum Share within T_min, or does not reach half of its Fair Share within T_fair, the scheduler kills the most recently launched tasks from over-scheduled pools, to minimize the amount of computation wasted by preemption.

There are also limits on the number of concurrently running jobs in each job pool. If too many jobs are submitted, the follow-up jobs wait in the scheduling queue until previous jobs complete and release task slots.
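The two-step pool-selection rule in steps 1) and 2) can be captured in a few lines. Below is a minimal Python sketch; the Pool class and its field names are illustrative assumptions, not code from the Fair scheduler.

```python
# Minimal sketch of the between-pool slot assignment described above.
# The Pool class and its field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    running: int    # c: task slots currently assigned to the pool
    min_share: int  # m: configured Minimum Share
    weight: float   # w: pool weight

def pick_pool(pools):
    """Choose the pool that should receive the next idle task slot."""
    # Step 1: pools still below their Minimum Share come first, ordered by c/m.
    needy = [p for p in pools if p.running < p.min_share]
    if needy:
        return min(needy, key=lambda p: p.running / p.min_share)
    # Step 2: otherwise rank every pool by c/w.
    return min(pools, key=lambda p: p.running / p.weight)

pools = [Pool("research", running=3, min_share=4, weight=1.0),
         Pool("reports", running=8, min_share=4, weight=2.0)]
print(pick_pool(pools).name)  # -> "research": still below its Minimum Share
```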
2.3. Task Slot Scheduling Algorithm in Pools

Because of poor data locality, the Fair schedule strategy also provides a Delay Scheduling algorithm among the jobs in a pool to improve data locality. Within a pool, jobs are queued according to Fair scheduling or FIFO. When an idle task slot appears, if the head-of-line job cannot launch a local task on the TaskTracker, it is skipped, and the other running jobs are examined in order of pool shares and in-pool scheduling rules to find a job with a local task. However, if the head-of-line job has been skipped for a sufficiently long time, it is allowed to launch rack-local tasks, as sketched below.
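A compact Python sketch of this skip-then-relax behavior follows. It uses a per-job skip counter as a stand-in for the real scheduler's wait-time threshold; the job representation and the SKIP_LIMIT value are assumptions for illustration.

```python
# Sketch of delay scheduling within a pool: skip a job when it has no
# data-local task, and relax to rack-local after enough skips.
# The job fields and SKIP_LIMIT value are illustrative assumptions.

SKIP_LIMIT = 3  # how many skips a job tolerates before relaxing locality

def assign_task(sorted_jobs, node, rack):
    """Pick a task for an idle slot on `node`; jobs are in fair-share order."""
    for job in sorted_jobs:
        if node in job["local_nodes"]:          # node-local data available
            job["skips"] = 0
            return job, "node-local"
        if job["skips"] >= SKIP_LIMIT and rack in job["local_racks"]:
            return job, "rack-local"            # locality relaxed after waiting
        job["skips"] += 1                       # skip and try the next job
    return None, None

jobs = [{"local_nodes": {"n2"}, "local_racks": {"r1"}, "skips": 0},
        {"local_nodes": {"n1"}, "local_racks": {"r1"}, "skips": 0}]
print(assign_task(jobs, "n1", "r1")[1])  # -> "node-local" (second job fits)
```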

But there are still some problems. In a Hadoop cluster, the load of a node depends on the resource cost of the running tasks rather than on their number. The Fair scheduler does not consider this, so the load across nodes is not balanced. In addition, the cluster running the Hadoop platform is shared by many systems, such as MPI and MemCache, so the stability of the jobs running on each node is inevitably affected, which raises the task error rate on nodes.

3. Job Scheduling Algorithm based on Node Health Degree

This algorithm improves the fair schedule strategy. The Fair scheduler is still used between job pools, while an algorithm based on Node Health Degree is adopted among the jobs within a pool. It assigns tasks to nodes according to each node's Health Degree.

3.1. Node Load Model

A homogeneous, multi-user network and correct jobs are the premises of the scheduling strategy. We define the node set of the cluster as R = {r_1, …, r_j, …, r_n} (r_j denotes the j-th node; there are n nodes in the cluster) and the set of load weights of the n nodes as L = {l_1, …, l_j, …, l_n} (l_j is the dynamic load weight of node r_j). If the CPU usage, memory usage and I/O access rate of node r_j are C_j%, M_j% and I_j% respectively, then the dynamic load weight of node r_j can be expressed as:

l_j = λ_1 · C_j% + λ_2 · M_j% + λ_3 · I_j%   (1)

where Σ_{n=1}^{3} λ_n = 1; the weights λ_n reflect the importance of the various load parameters, j = 1, …, n.

3.2. Characteristic Variables

The characteristic variables of a task describe the resource usage and demand of a MapReduce task. Their values are obtained by analyzing the historical information of jobs and are stored on the JobTracker. The task characteristic variables used in this paper are the average CPU utilization C_usage, the average I/O utilization I_usage and the average memory usage M_usage of a task. If the values of these characteristic variables for the MapReduce tasks of a job are small, the job is a light one; otherwise it is a relatively heavy one. The characteristic variables of a node represent the status and quality of the computing resources on the node over a period of time and the amount of resources it can supply to tasks. The node characteristic variables used in this paper are the average CPU idle rate C_idle, the average I/O idle rate I_idle and the average free memory M_idle. These variables are computed periodically on each node and sent to the JobTracker in the heartbeat information of the TaskTracker.

3.3. Node Health Degree

The Fair scheduler marks tasks whose progress falls a given percentage below the average as backward tasks and launches backup tasks for them. If there are too many backward and erroneous tasks on a node, the load of the node may be too high or the running environment of the node may have problems. Many Delay Scheduler experiments indicate that the average fail rate of a node is directly proportional to the node load. Taking the dynamic load weight l of a node as the independent variable and the average error rate E of the node as the dependent variable, and applying curve fitting to a large amount of experimental data, the following approximate function is found:

E = f(l) = −.6 · e^{−(l − .387)² / .384} + .69 · l   (2)
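To make the model concrete, here is a small Python sketch that evaluates the load weight of Eq. (1) and the fitted curve of Eq. (2); the λ weights and the numeric curve constants are illustrative placeholders rather than the paper's exact fitted values.

```python
# Sketch of the node load model (Eq. 1) and the fitted fail-rate curve
# (Eq. 2). The lambda weights and the curve constants are illustrative
# placeholders, not authoritative values from the paper.

import math

# Importance weights for CPU, memory and I/O load; they must sum to 1.
LAMBDA = (0.4, 0.3, 0.3)

def load_weight(cpu, mem, io):
    """Dynamic load weight l_j of a node, Eq. (1); inputs are fractions."""
    l1, l2, l3 = LAMBDA
    return l1 * cpu + l2 * mem + l3 * io

def predicted_error_rate(l):
    """Fitted average error rate f(l), Eq. (2); constants are placeholders."""
    return -0.6 * math.exp(-(l - 0.387) ** 2 / 0.384) + 0.69 * l

l = load_weight(cpu=0.95, mem=0.90, io=0.85)
print(round(l, 3), round(predicted_error_rate(l), 3))  # node load and f(l)
```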

[Fig. 1: Node load effect on the average error rate of a node — the fitted curve E = f(l), average error rate E (%) against node load l (%).]

If E_j is the average fail rate actually observed on node j in the improved-algorithm experiment, then the Node Health Degree is defined as E_j / f(l_j). Nodes can be classified according to Health Degree as in the table below:

Table 1 Classification of the Nodes According to Health Degree

Node            | E_j / f(l_j) range
Healthy node    | E_j / f(l_j) ≤ 1.0
Subhealthy node | 1.0 < E_j / f(l_j) ≤ 1.5
Unhealthy node  | 1.5 < E_j / f(l_j)

3.4. Algorithm Description

Between job pools this algorithm (FS-HD) uses the Fair scheduling algorithm, a strategy that has already proven itself. Within a job pool, the job scheduling algorithm based on Node Health Degree proceeds as follows (a condensed code sketch follows this section):

1) When an idle task slot becomes available, a pool is chosen according to the Fair scheduler.
2) If the node on which the idle slot is located is healthy, go to 5).
3) If the node is subhealthy, its characteristic variables are each multiplied by the reciprocal of its Health Degree; then go to 5).
4) If the node is unhealthy, go to 8); a timer is set on the node and its Health Degree is multiplied by .9 every x minutes until it returns to the subhealthy state.
5) Compute the Euclidean distances between the characteristic variables of the jobs' tasks and the characteristic variables of the node, and sort them from small to large.
6) Choose an appropriate job from the sorted set according to the Delay Scheduler.
7) Assign a task of the chosen job to the task slot to run.
8) Finish.

In step 4), the timer interval of an unhealthy node is generally set to 5 minutes; once the node's Health Degree returns to the subhealthy range, the node can be scheduled again. In step 5), the Euclidean distance is computed as

D = √[ (C_idle − C_usage)² + (I_idle − I_usage)² + ((M_idle − M_usage) / M_idle)² ]   (3)

In step 6), the Delay Scheduler is used so that the algorithm achieves higher locality.
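The following Python sketch condenses steps 1)–8) into a single selection function. The health thresholds follow Table 1; the node and job dictionaries, field names and example values are illustrative assumptions, not the paper's implementation.

```python
# Condensed sketch of FS-HD's in-pool task assignment (steps 1-8 above).
# Data structures and example values are illustrative assumptions.

import math

def classify(health):
    """Map a Node Health Degree onto the three classes of Table 1."""
    if health <= 1.0:
        return "healthy"
    return "subhealthy" if health <= 1.5 else "unhealthy"

def distance(node, task):
    """Euclidean distance of Eq. (3) between node supply and task demand."""
    return math.sqrt((node["cpu_idle"] - task["cpu"]) ** 2 +
                     (node["io_idle"] - task["io"]) ** 2 +
                     ((node["mem_idle"] - task["mem"]) / node["mem_idle"]) ** 2)

def pick_job(node, health, jobs):
    """Return the job whose task best fits the node, or None if unhealthy."""
    kind = classify(health)
    if kind == "unhealthy":
        return None            # step 4: defer the node until it recovers
    if kind == "subhealthy":   # step 3: scale variables by 1/Health Degree
        node = {k: v / health for k, v in node.items()}
    # Steps 5-6: rank jobs by distance; delay scheduling would apply here.
    return min(jobs, key=lambda j: distance(node, j["task"]))

node = {"cpu_idle": 0.6, "io_idle": 0.5, "mem_idle": 0.4}
jobs = [{"name": "heavy", "task": {"cpu": 0.7, "io": 0.6, "mem": 0.3}},
        {"name": "light", "task": {"cpu": 0.1, "io": 0.1, "mem": 0.1}}]
print(pick_job(node, health=0.8, jobs=jobs)["name"])  # -> "heavy"
```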

4. Experiment and Assessment

4.1. Experimental Environment

The experimental environment is deployed on a private cluster of 6 physical machines in two racks. The version of Hadoop is ... The NameNode and JobTracker are installed on one machine, and a DataNode and TaskTracker are installed on each of the others. The nodes are connected via a Gigabit switch. The hardware configuration is: CPU: Intel Dual Core, .8 GHz; memory: G; disk: 5G. Some important Hadoop parameters are set as below:

Table 2 Hadoop Parameters

Parameter            | Value     | Description
Replication          | 3         | Number of replicas of each HDFS data block
HDFS block size      | 6m        | Size of a data block
Map tasks maximum    | 8         | Maximum number of concurrently running Map tasks
Reduce tasks maximum | 8         | Maximum number of concurrently running Reduce tasks
Heartbeat interval   | 3 seconds | Interval for sending heartbeat information

Three job pools based on multiple users are set up in the experiment. 5 jobs are submitted to the pools every minute and the experiment runs for 4 minutes. The job sets, including CPU-intensive, I/O-intensive, memory-intensive and mixed jobs, are randomly selected and submitted. FIFO and the Fair Scheduler (FS) are used as comparison algorithms. The data used in the analysis are averages over ten experiments.

4.2. Experiment on the Average Fail Rate of Jobs

In the experiment, the average fail rate of jobs is defined as

Fail = (λ_1 · timeout + λ_2 · error) / sum   (4)

where sum is the total number of Map/Reduce tasks submitted to all nodes within m minutes, timeout is the number of timed-out tasks, error is the number of tasks that fail and are canceled, λ_1 is the weight of timeout tasks and λ_2 is the weight of error tasks. In the experiment λ_1 = .5, λ_2 = 1 and m = 5. The results are shown in Fig. 2. Before the average load rate of the nodes reaches 40%, the average fail rate of jobs differs little among the three algorithms; once the average load rate exceeds 40%, however, the average fail rate of jobs under FS-HD is significantly lower than under the other algorithms. Because FS-HD assigns jobs with matching load demands to nodes according to the Node Health Degree, the average fail rate of jobs is significantly reduced.
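Eq. (4) is straightforward to compute directly. The sketch below uses the weights quoted above; the helper and the task counts in the example are illustrative.

```python
# Sketch of the weighted average job fail rate of Eq. (4). The task counts
# in the example are made up; the weights follow the values quoted above.

def fail_rate(timeout, error, total, w_timeout=0.5, w_error=1.0):
    """Weighted fail rate: (w1*timeout + w2*error) / total submitted tasks."""
    if total == 0:
        return 0.0
    return (w_timeout * timeout + w_error * error) / total

# e.g. 2000 tasks submitted in an m-minute window, 60 timed out, 40 failed
print(fail_rate(timeout=60, error=40, total=2000))  # -> 0.035
```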

[Fig. 2: Node average load effect on the average fail rate of jobs — average fail rate of jobs (%) against average load rate of nodes (%) for FIFO, FS and FS-HD.]

4.3. Job Throughput Experiment

The job throughput is the average number of finished jobs per minute. Because of the lower average job fail rate under FS-HD, Hadoop does not have to frequently launch backup tasks for timed-out or erroneous tasks, so redundancy is reduced while resource usage and job throughput are improved. In summary, FS-HD is better than the Fair Schedule strategy. Since jobs run serially within a pool under FIFO, its job throughput is also poor. The results are shown in Fig. 3.

[Fig. 3: Job throughput curve over time — job throughput (jobs/min) against system running time (minutes) for FIFO, FS and FS-HD.]

4.4. Resource Load Balancing Experiment

In the resource load balancing experiment, the balance measure is defined as the variance of the nodes' average load per unit time (a minimal sketch of this measure follows). Because FS-HD selects jobs with load demands matching each node's load, the nodes in the cluster achieve better resource load balance, and FS-HD is superior to FIFO and the Fair schedule strategy. The results are shown in Fig. 4.

[Fig. 4: Resource load balance curve over time — equilibrium degree of resource load against system running time (minutes) for FIFO, FS and FS-HD.]
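Here is the minimal Python sketch of the balance measure, assuming per-node average loads collected over a unit time window; the sample values are made up.

```python
# Sketch of the load-balance measure used above: the variance of the
# nodes' average load in a unit time window. The sample loads are made up.

def load_variance(node_loads):
    """Variance of per-node average loads; lower means better balance."""
    mean = sum(node_loads) / len(node_loads)
    return sum((l - mean) ** 2 for l in node_loads) / len(node_loads)

balanced = [0.52, 0.48, 0.50, 0.51, 0.49]
skewed = [0.90, 0.20, 0.75, 0.30, 0.35]
print(load_variance(balanced) < load_variance(skewed))  # -> True
```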

5. Conclusion

Hadoop is a popular open-source implementation of the MapReduce model, used to process and analyze large-scale data sets in parallel. This paper studies FIFO (Hadoop's default scheduler), the Fair scheduler and the Delay scheduler, and explores the relationship between node load and job fail rate in a Hadoop cluster under the MapReduce parallel programming environment. On that basis, the Node Health Degree is defined and a job scheduling algorithm based on it is introduced. The algorithm improves the Fair schedule strategy so as to improve resource load balancing and job throughput and to reduce the average job fail rate, and its effectiveness is verified by experiments.

References

[1] Apache Hadoop. Hadoop [EB/OL]. http://hadoop.apache.org/.
[2] Yahoo. Yahoo! Hadoop Tutorial [EB/OL]. http://public.yahoo.com/gogate/hadoop-tutorial/start-tutorial.html.
[3] Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung. The Google File System [C]. Proc of SOSP, 2003: 29-43.
[4] Jeffrey Dean, Sanjay Ghemawat. MapReduce: Simplified Data Processing on Large Clusters [C]. Proc of OSDI, 2004: 137-150.
[5] Vaquero L M, Rodero-Merino L, Caceres J, et al. A Break in the Clouds: Towards a Cloud Definition [J]. ACM SIGCOMM Computer Communication Review, 2009, 39(1): 50-55.
[6] Bryant R E. Data-Intensive Supercomputing: The Case for DISC [R]. CMU Technical Report CMU-CS-07-128, Department of Computer Science, Carnegie Mellon University, 2007.
[7] Apache. Hadoop On Demand [DB/OL]. http://hadoop.apache.org/common/docs/r.7./hod.html.
[8] Zaharia M, Konwinski A, Joseph A D, et al. Improving MapReduce Performance in Heterogeneous Environments [C]. Proc of the 8th USENIX Symp on Operating Systems Design and Implementation, 2008: 29-42.
[9] Fair Scheduler for Hadoop [EB/OL]. http://hadoop.apache.org/common/docs/current/fair_scheduler.html.
[10] Capacity Scheduler for Hadoop [EB/OL]. http://hadoop.apache.org/common/docs/current/capacity_scheduler.html.