Telecom Data processing and analysis based on Hadoop




COMPUTER MODELLING & NEW TECHNOLOGIES 2014 18(12B) 658-664

Guofan Lu, Qingnian Zhang*, Zhao Chen
Wuhan University of Technology, Wuhan 430063, China

Received 1 October 2014, www.cmnt.lv
* Corresponding author's e-mail: 61624597@qq.com

Abstract

To adapt the call tracking system to the needs of large-scale data processing, a Hadoop-based call tracking data processing model was designed and implemented, building on the Hadoop platform, which has shown a strong competitive advantage in large data processing in recent years, in order to verify its feasibility. The call tracking processing system model contains an analog (simulated) data source module, a data processing module and a GUI. The analog data source module simulates data from real data samples and writes the data directly into the Hadoop distributed file system; appropriate Mapper and Reducer functions are then written using Hadoop's MapReduce model to process the data in a distributed way. The system design and implementation, the system deployment topology, and the hardware and software conditions are studied in detail, and several comparative experiments are designed to analyze some static indicators of system performance.

Keywords: hadoop, MapReduce model, data processing, call tracking

1 Introduction

With third-generation communication networks becoming increasingly sophisticated and popular, network sizes are growing drastically and equipment complexity has also greatly increased. How to quickly resolve call failures and how to ensure the quality of network operation have therefore become the most urgent tasks for network maintenance personnel. The call trace system, a new feature derived from signaling trace and an important subsystem of the network management system, is a powerful means of solving call failure problems and guaranteeing the quality of network operation [1, 2].

This paper deals with the demand of processing large amounts of call tracking data. The data and the environment are analyzed to find their impact on system performance, in combination with the Hadoop platform, which has recently seen wide use in the field of large-scale data processing. A call tracking simulation system based on Hadoop was designed to simulate real application scenarios, so that algorithms can be analyzed and performance assessed.

Hadoop is currently the most widely used distributed computing platform. Its core parts are the distributed storage system HDFS and the distributed computing framework MapReduce. HDFS creates multiple replicas of each data block to guarantee reliability and places the blocks on the cluster's compute nodes; the application program is then broken down into many small tasks to realize parallel processing. Each data block is processed on the node where it resides, substituting moving computation for moving data, which greatly improves efficiency. Hadoop is designed as a framework that uses a simple programming model to realize distributed processing of large data sets with good scalability and high fault tolerance [3]. A streaming data access pattern is used for files in the storage system, which suits applications that process large amounts of data. The relationship between HDFS and MapReduce is shown in Figure 1.

FIGURE 1 HDFS and MapReduce relationship diagram (data blocks DFS Block1-3 replicated across the compute cluster; MapReduce tasks produce the results)

The MapReduce framework adopts a master-slave structure, which is composed of one JobTracker and several TaskTrackers.
The JobTracker is the master node, responsible for scheduling the submitted jobs. Each job is divided into a number of tasks by the JobTracker, which then assigns the tasks to TaskTrackers in the cluster. In addition, the JobTracker is responsible for monitoring the execution of the tasks and giving timely feedback to the user. The TaskTracker is the slave node; it keeps in communication with the JobTracker through a heartbeat mechanism, informing the JobTracker of its own state and executing the tasks assigned by the JobTracker [4].

The entire execution process of MapReduce comprises the following four parts: the submission of a job, the assignment and execution of Map tasks, the assignment and execution of Reduce tasks, and the completion of the job. The user edits the relevant configuration files before submitting the job; as soon as the job is submitted the user can no longer intervene, since it enters an automatic process. The configuration must ensure that at least the Map and Reduce execution code is present, which the user writes according to their needs.
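To make this division of labour concrete: the only code the user must supply is a Mapper and a Reducer. The paper does not list its own implementations, so the following is only a minimal sketch, in the Hadoop 0.21-era Java API, of what such code could look like for call-trace data; the CSV-like record layout, the "MR" record tag and the per-RNC counting are illustrative assumptions, not taken from the paper.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Hypothetical Map(k1, v1) -> list(k2, v2): k1 is the byte offset of a
    // call-trace line, v1 the line itself; it emits (rncId, 1) for every
    // measurement-report ("MR") record and drops everything else.
    public class CallTraceMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text rncId = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Assumed record layout: rncId,recordType,payload...
            String[] fields = line.toString().split(",");
            if (fields.length >= 2 && "MR".equals(fields[1])) {
                rncId.set(fields[0]);
                context.write(rncId, ONE);  // intermediate key/value pair
            }
        }
    }

    // Reduce(k2, list(v2)) -> list(v2): sums the counts grouped under each key.
    class CallTraceReducer
            extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> counts, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable c : counts) sum += c.get();
            context.write(key, new LongWritable(sum));
        }
    }

These two classes map directly onto the Map and Reduce signatures formalized in Equations (1) and (2) below.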

The core parts of MapReduce are the Map operations and the Reduce operations. In the Map process the data is processed in parallel, which is to say the input data is separated into several parts. The main function of Map is to convert the input data into key/value pairs [5, 6]. Reduce then puts the separated data generated by Map back together through sorting and merging, and new key/value pairs are generated. From the user's perspective, the design of the Map and Reduce functions is associated only with the key/value types, as shown in Equations (1) and (2):

Map(k1, v1) → list(k2, v2),  (1)

Reduce(k2, list(v2)) → list(v2).  (2)

This association is achieved inside the programming model; the framework hides the complexity of the procedure. So for users there is no difference between the MapReduce distributed computing model and the serial computing algorithms we usually use.

In the actual process, the input data is divided into several blocks, which are then assigned to the data nodes. When a job is submitted, the JobTracker distributes Map and Reduce tasks to the TaskTrackers according to the TaskTrackers' status. Generally, each data block has a corresponding Map task. The input data is processed by the Map task using the user-defined Map function and converted into key/value pairs. The Reduce side then fetches the intermediate results associated with each Reduce task from the Map outputs over HTTP; this stage is called the Copy section. Before the Reduce task there is a Sort section, in which the intermediate results obtained from the various Maps are grouped according to their keys; then, for each unique key, the final result is produced according to the user-defined Reduce function. Generally, sorting is done while copying. The specific implementation process of MapReduce is shown in Figure 2.

FIGURE 2 Implementation process of MapReduce (Input HDFS → Split1-4 → Map → Partition → Copy → Sort&Merge → Reduce → Part1 output on HDFS)

2 Design and analysis of the system

The process flowchart of call trace in the network management system is shown in Figure 3.

FIGURE 3 Flowchart of call trace (RNC → ASN.1 over UDP → PTS → FTP → data processing module → CSV file → Oracle DB, consumed by VoIP QoE, Video QoE and APP QoE)

The RNC (Radio Network Controller) generates UDP data packets in ASN.1 format. The PTS (Package Transferring System) receives the data packets generated by the RNC and writes the call trace in ASN.1-coded format into one file per specified time duration. This file is the source packet that will be handled by the data processing module. The data processing module fetches the data packet through the FTP service and, after processing, generates the result as a CSV-format file. The results are uploaded to the CT (Call Trace) server and stored in the database for further analysis and application, or are called by other modules in the network management system.

2.1 OVERVIEW

The cluster scale of Hadoop is generally larger in real applications, but the hardware environment used in this chapter is just one DL380 Medium server and one DL380 Small server, since our main purpose is only to verify the availability of call trace data processing. The configuration of these two servers is shown in Table 1.

TABLE 1 Experimental hardware configuration

          DL380 Medium               DL380 Small
CPU       2.93 GHz, 2*8 cores        2.93 GHz, 1*8 cores
Memory    4*18 GB                    2*18 GB
Disk      HDD 12*300 GB, 10,000 rpm  HDD 6*300 GB, 10,000 rpm
Network   Gigabit Ethernet interface Gigabit Ethernet interface

In the real communication network, the RNC generates a large amount of call trace data.
The source packets of these data are generated at a specified path on a specified server, and the data processing module processes them and performs further analysis on the results.

In this paper, two experimental environments (pseudo-distributed and small cluster) are designed using the existing hardware described above, to analyze how the hardware configuration affects system performance. The DL380 Medium server is chosen for the pseudo-distributed environment, in which the master node and the slave node are configured on one machine. The network topology of the pseudo-distributed environment is shown in Figure 4. The DL380 is both the name node and the data node, which means it plays the role of the JobTracker and the role of the TaskTracker at the same time. After Hadoop is started, the DL380 Medium runs five daemons: JobTracker, SecondaryNameNode, NameNode, DataNode and TaskTracker.
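The analog data source module mentioned in the abstract writes simulated records straight into this pseudo-distributed HDFS. The paper gives no code for it, so the sketch below is only an assumption of how that write could be done with the standard Hadoop FileSystem API; the target path, the record layout and the roughly one-in-eighteen share of MR records (~5.6%, matching Table 3 below) are illustrative.

    import java.io.BufferedWriter;
    import java.io.OutputStreamWriter;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch of an analog data source writing simulated call-trace records to HDFS.
    public class AnalogDataSource {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Matches the pseudo-distributed core-site.xml of Table 2 below.
            conf.set("fs.default.name", "hdfs://localhost:9000");
            FileSystem fs = FileSystem.get(conf);

            Path out = new Path("/calltrace/input/sample.ct");  // hypothetical path
            try (BufferedWriter w = new BufferedWriter(
                    new OutputStreamWriter(fs.create(out, true)))) {
                for (int i = 0; i < 1000000; i++) {
                    // Roughly 1 record in 18 is an MR record (~5.6%).
                    String type = (i % 18 == 0) ? "MR" : "CALL";
                    w.write("RNC" + (i % 4) + "," + type + ",payload");
                    w.newLine();
                }
            }
            fs.close();
        }
    }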

FIGURE 4 Network topology of the pseudo-distributed environment (NameNode, JobTracker, DataNode and TaskTracker all on the DL380 Medium, 2*8 cores)

Three configuration files are needed under hadoop-0.21.0/conf/ on the system: core-site.xml, hdfs-site.xml and mapred-site.xml. The details of the configuration are shown in Table 2.

TABLE 2 Configuration table of the pseudo-distributed system

Configuration file   Configuration content
core-site.xml        <name>fs.default.name</name> <value>hdfs://localhost:9000</value>
hdfs-site.xml        <name>dfs.replication</name> <value>1</value>
mapred-site.xml      <name>mapred.job.tracker</name> <value>hdfs://localhost:9001</value>

Analysis of System Performance

The procedure of data processing is split into four sections, and we use T_i, T_m, T_r and T_o to represent the input time, Map time, Reduce time and output time. The total processing time is calculated as shown in Equation (3):

T_Total = T_i + T_m + T_r + T_o.  (3)

In the following, three sets of comparative experiments are used to analyze the time of each section. Every experiment is run three times and the average value is taken.

2.2 COMPARATIVE EXPERIMENTS OF FULL MR DATA VS NON-FULL MR DATA

In this part, a group of non-compressed data and the pseudo-distributed test environment are used. The specific information of the experimental data is shown in Table 3. For the experimental results, the average time spent in the various sections and the size of the output file are collected; the results are shown in Table 4.

TABLE 3 Full MR vs Non-Full MR experimental data

Data Type         Number of Information   Number of MR Information   File Size
Non-Full MR Data  6,262,800               349,943                    730 MB
Full MR Data      6,262,800               6,262,800                  613 MB

TABLE 4 Full MR vs Non-Full MR experimental results

Data Type    Input file  Map    Reduce  Output file  All time  Output file size
Non-Full MR  7s          16s    9s      5s           37s       6 MB
Full MR      3.7s        42s    72.3s   67s          185s      157 MB

As a check of Equation (3), the non-full MR row sums as 7 s + 16 s + 9 s + 5 s = 37 s. It can be seen from Table 3 that the sizes of the two data packets are similar and that they contain the same number (6,262,800) of information records. In the non-full MR data package the MR data ratio is 5.6% (349,943 of 6,262,800 records). So in the actual data processing, the ratio of the number of MR records processed with the full MR data packet to that of the non-full MR packet is 17.9:1.

2.3 COMPARATIVE EXPERIMENTS OF PSEUDO-DISTRIBUTED VS SMALL CLUSTERS

Different hardware environments inevitably have an impact on system performance; in theory, since the cluster has more hardware resources, its performance should be better than that of the pseudo-distributed test environment. The following set of experiments is designed to verify this. Four sets of non-compressed, non-full MR experimental data are prepared; the specific information is shown in Table 5.

TABLE 5 Pseudo-distributed vs small cluster experimental data

No. of experiment           1          2          3           4
Number of all information   –          –          –           –
Number of MR information    349,943    699,886    1,399,163   2,798,347
File size                   730 MB     1.4 GB     2.8 GB      5.6 GB

The same data block size is set, and the same number of Map tasks is started, in the DL380 pseudo-distributed and DL380 small cluster environments, and the statistics of the two experimental environments are collected. The experimental result of the input section is shown in Figure 5. In the pseudo-distributed environment all the data is copied to the local disk, while in the small cluster the two servers use Gigabit Ethernet; since the amount of data to be copied is not huge, the time of the input section is almost the same in the two experimental environments.

FIGURE 5 Comparison of the input section (input file time in seconds, DL380 vs DL380 cluster, uncompressed data)
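The paper does not reproduce its job driver, so the following sketch shows one plausible way the comparison job above could be configured and submitted with the pre-YARN Java API used in Hadoop 0.21; the class names refer to the hypothetical Mapper and Reducer sketched earlier, and the input and output paths are assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CallTraceJob {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // The block size chosen when the input was written into HDFS fixes
            // the number of splits, and hence the number of Map tasks started;
            // keeping it identical in both environments makes the comparison fair.
            Job job = new Job(conf, "call-trace processing");
            job.setJarByClass(CallTraceJob.class);
            job.setMapperClass(CallTraceMapper.class);
            job.setReducerClass(CallTraceReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);

            FileInputFormat.addInputPath(job, new Path("/calltrace/input"));     // hypothetical
            FileOutputFormat.setOutputPath(job, new Path("/calltrace/output"));  // hypothetical
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }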
The results of the Map section are shown in Figure 6 and Figure 7. The total number of CPU cores in the DL380 small cluster is 24, against 16 in the pseudo-distributed environment, so the small cluster has 50% (8 cores) more CPU cores than the pseudo-distributed test environment. In Figure 6 it can be seen that the Map section time of the DL380 small cluster is only about 70% of that of the pseudo-distributed environment, so the Map efficiency is about 1.4 times higher (1/0.7 ≈ 1.4). That is to say, when processing the same data, the small cluster reduces the Map section time and improves the Map efficiency compared with the pseudo-distributed environment.

FIGURE 6 Comparison of the Map section (Map time in seconds, DL380 vs DL380 cluster, uncompressed data)

FIGURE 7 Comparison of Map efficiency (DL380 vs DL380 cluster, uncompressed data)

The results of the Reduce section are shown in Figure 8. Since the input file size and the data block size are the same, the number of Map tasks started is also the same, and during the Reduce section the intermediate results are fetched from the same number of Map tasks in the two environments. Moreover, the small cluster uses Gigabit Ethernet. So the Reduce time is almost the same in the two different environments.

FIGURE 8 Comparison of the Reduce section (Reduce time in seconds, DL380 vs DL380 cluster, uncompressed data)

The results of the Output file section are shown in Figure 9. Since the size and content of the produced result are the same, the time of the output file section is almost the same.

FIGURE 9 Comparison of the output section (output file time in seconds, DL380 vs DL380 cluster, uncompressed data)

The comparative analysis of the total time to run the entire job is shown in Figure 10. According to the analysis of the previous four sections, the time spent in the Input file, Reduce and Output file sections is almost the same in the pseudo-distributed environment and the small cluster, but the small cluster uses less Map section time than the pseudo-distributed environment. Therefore the small cluster spends less total time than the pseudo-distributed environment when processing the same job. In summary, the small cluster using Gigabit Ethernet has better performance than the pseudo-distributed environment.

FIGURE 10 Comparison of all sections (total time in seconds, DL380 vs DL380 cluster, uncompressed data)

2.4 COMPARATIVE EXPERIMENTS OF COMPRESSED AND UNCOMPRESSED DATA

These sets of experiments were carried out on the DL380 small cluster. Four sets of experimental data, containing different numbers of information records and including compressed and uncompressed non-full MR data, were prepared. The specific information of the experimental data is shown in Table 6.

TABLE 6 Experimental data information, compressed and uncompressed

No. of experiment           1          2          3           4
Number of all information   –          –          –           –
Number of MR information    349,943    699,886    1,399,163   2,798,347
File size, uncompressed     730 MB     1.4 GB     2.8 GB      5.6 GB
File size, compressed       137 MB     275 MB     549 MB      1.1 GB

In this part, the four sets of experimental data were processed in the small DL380 cluster environment and the statistical results were collected. The results of the Input section are shown in Figure 11. The time used writing files to HDFS is reduced when using compressed data, since the compressed data packet is much smaller than the uncompressed one. It can be calculated from Table 6 that the compression rate is about 20% (for example, 137 MB / 730 MB ≈ 19%), and the writing time after compression is likewise just about 20% of the uncompressed one. Therefore compression can reduce the time of the Input section and improve the efficiency of the file-writing section.

FIGURE 11 Comparison of the input section (input file time, DL380 cluster, compressed vs uncompressed data)

The results of the Map section are shown in Figures 12 and 13. Fewer Map tasks are started, since a compressed file containing the same information is much smaller than the uncompressed one. It can be seen that using compressed data reduces the Map time, except at the first data point, where the data size is small and the compressed data needs time to decompress itself, so the advantage of compressed data in the Map section is not yet reflected. So compressed input data can improve the Map efficiency. It can also be seen from Figure 13 that, as the input data size increases, the Map efficiency decreases whether compressed or uncompressed data is used, since the input data size exceeds the capacity of the nodes.

FIGURE 12 Comparison of the Map section (Map time in seconds, DL380 cluster, compressed vs uncompressed data)
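The paper does not say which codec produced the compressed packets; gzip is one plausible choice and would also explain the behaviour above, since Hadoop's TextInputFormat decompresses *.gz input transparently but cannot split it, so fewer (larger) Map tasks are started. Below is a sketch, under the same hypothetical paths as before, of writing the simulated source file through a gzip codec.

    import java.io.BufferedWriter;
    import java.io.OutputStreamWriter;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    // Variant of the analog data source that writes a gzip-compressed packet.
    // TextInputFormat will decompress *.gz input transparently at job time.
    public class CompressedDataSource {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.default.name", "hdfs://localhost:9000");
            FileSystem fs = FileSystem.get(conf);

            CompressionCodec codec =
                    ReflectionUtils.newInstance(GzipCodec.class, conf);
            Path out = new Path("/calltrace/input/sample.ct.gz");  // hypothetical path
            try (BufferedWriter w = new BufferedWriter(new OutputStreamWriter(
                    codec.createOutputStream(fs.create(out, true))))) {
                for (int i = 0; i < 1000000; i++) {
                    w.write("RNC" + (i % 4) + ",CALL,payload");    // hypothetical record
                    w.newLine();
                }
            }
            fs.close();
        }
    }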

FIGURE 13 Comparison of Map efficiency (DL380 cluster, compressed vs uncompressed data)

3 Summary

This paper describes the deployment of a call trace system. One pseudo-distributed environment and one small cluster environment were implemented on the available experimental hardware, and the job running process in Hadoop and the operation process of the call trace system were studied. According to the characteristics of the system, three sets of experiments were designed to study how the test environment and the different data types impact system performance. Finally, the experimental results were analyzed, leading to the conclusion that compressed data and small clusters improve system performance compared with uncompressed data and a pseudo-distributed environment.

Acknowledgements

The authors are grateful for the support from the National Science Foundation of China (51479159) and the Soft Science Project of China's Ministry of Transport (2013-322-811-47).

References

[1] Tian C, Zhou H, He Y, Zha L 2009 A Dynamic MapReduce Scheduler for Heterogeneous Workloads [C] Eighth International Conference on Grid and Cooperative Computing 218-24
[2] Lu P, Lee Y C, Wang C, Zhou B B, Chen J, Zomaya A Y 2012 Workload Characteristic Oriented Scheduler for MapReduce [C] IEEE 18th International Conference on Parallel and Distributed Systems 156-63
[3] Nguyen P, Simon T, Halem M, Chapman D, Le Q 2012 A Hybrid Scheduling Algorithm for Data Intensive Workloads in a MapReduce Environment [C] IEEE/ACM Fifth International Conference on Utility and Cloud Computing 161-7
[4] Liu L, Zhou Y, Liu M, Xu G, Chen X, Fan D, Wang Q 2012 Preemptive Hadoop Jobs Scheduling under a Deadline [C] Eighth International Conference on Semantics, Knowledge and Grids 72-9
[5] Kamal Kc, Kemafor Anyanwu 2010 Scheduling Hadoop Jobs to Meet Deadlines [C] IEEE Second International Conference on Cloud Computing Technology and Science (CloudCom) 388-92
[6] Yang S-J, Chen Y-R, Hsien Y-M 2012 Design Dynamic Data Allocation Scheduler to Improve MapReduce Performance in Heterogeneous Clouds [C] Ninth IEEE International Conference on e-Business Engineering 265-7
[7] Matei Z, Dhruba B, Joydeep Sen S, Khaled E, Scott S, Ion S 2010 Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling [C] 5th European Conference on Computer Systems 265-78

Authors

Lu Guofeng, born in 1966, Jilin, China.
Current position, grades: PhD candidate in transportation planning and management at Wuhan University of Technology, Wuhan, China.
University studies: BSc in Highway and Bridge from Harbin University of Civil Engineering and Architecture, China, in 1993; MSc in traffic management from Jilin University in 2012.
Scientific interests: traffic and transportation planning, transportation safety management, etc.

Zhang Qingnian, born in 1957, Xi'an, China.
Current position, grades: professor at the School of Transportation, Wuhan University of Technology.
University studies: PhD in machinery design and theories from Wuhan University of Technology, China, in 2002.
Scientific interests: traffic and transportation planning, optimization and decision making of transportation systems, transportation safety management.

Chen Zhao, born in 1969, Shaanxi Province, China.
Current position, grades: PhD candidate in logistics management at Wuhan University of Technology.
University studies: BSc in Chinese language and literature from Chinese People's Police University, Beijing, China, in 1992; MSc in transportation management engineering from Wuhan University of Technology, China, in 2002.
Scientific interests: parallel computing on the Hadoop platform, optimization decision making.