An Innovative Dynamic Load Balancing Algorithm Based on Task Classification

Hong-bin Wang (a), Zhi-yi Fang (b), Guan-nan Qu (*,c), Xiao-dan Ren (d)
College of Computer Science and Technology, Jilin University, Changchun, China
School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
(a) whbin007@126.com  (b) fangzy@jlu.edu.cn  (*,c) gnqu@jlu.edu.cn  (d) rxd006@163.com

Abstract

In a distributed system, the key factor affecting performance is the dynamic allocation and scheduling of tasks among the nodes, that is, dynamic load balancing, so load balancing technology has high value both in theoretical study and in practical applications. This paper considers that users' task requests differ in type; according to the type of resource requirement and the degree of real-time demand, user tasks are divided into an I/O consuming task queue and a CPU consuming task queue, taking real-time urgency into account. By improving the least connection scheduling algorithm, the paper proposes an innovative dynamic load balancing algorithm based on task classification, named BTC (Based on Task Classification). BTC can effectively address the system's low throughput and the problem that node resources are not used adequately. This paper proves the validity of the algorithm and verifies its feasibility through simulation experiments. Compared with the least connection scheduling algorithm, BTC performs better: it makes full use of node resources and improves the system's response time.

Keywords: Cluster System, Dynamic Load Balancing, Weighted Least Connection Scheduling, Least Connection Scheduling, Task Classification

1. Introduction

In recent years, with the rapid development of computer hardware and high-performance computer network technology, the way people use computers has changed greatly. Users increasingly demand that machine resources be used as fully as possible, and the demands for data transparency and for ever more computing power are also gradually increasing, which has stimulated people's interest in distributed computing and driven an unprecedented development of distributed systems [][7]. In a distributed system, a group of computer nodes that are independent of each other acts as a whole and is presented to users as a unified view; from the user's point of view, a single system provides the service. The system consists of many functionally independent physical and logical modules that together ensure dynamic task allocation; the physical and logical modules scattered over the nodes communicate through computer networks, exchange information, and accomplish the clients' task requests together [][8][9]. A server cluster is a typical distributed system, which includes several nodes [3][10]. Multiple server nodes are connected at high speed through a LAN, and all nodes work in collaboration as a single system that provides services to external users through a unified interface. Meanwhile, load balancing technology distributes a large number of user requests reasonably and evenly over the server nodes, making full use of node resources and greatly enhancing the system's load capacity, parallel processing capacity, flexibility, efficiency and scalability [4][].

For decades, research on load balancing technology for distributed systems has made great progress. In a cluster system, user task requests are distributed to the nodes in the server pool according to the load information collected from each node; the more accurate the load information, the higher the load balancing
performance. The task allocation and scheduling algorithm for a load balancing cluster system has therefore been a research focus. Researchers have devised a number of effective load balancing strategies, each using different methods to deal with issues including load balancing node localization, task allocation and task migration [5][9][6]. However, most existing algorithms have shortcomings. For example, the weighted least connection scheduling algorithm [6][3][4][8], a classical load balancing algorithm, has been widely used. It uses the current number of active task connections on each server to estimate the server's actual load and, meanwhile,
assigns each server a value according to its processing capacity; the value denotes the server's processing capacity, and the user's request is sent to the server that has the minimum weighted number of connections. But there are still the following inadequacies [7][]: First, different types of user request tasks utilize machine resources differently, so using the current number of task connections to represent each server node's load condition is not accurate. Second, a fixed value indicating processing power is assigned to each server according to its software and hardware configuration; the system administrator sets this value based on experience, so it cannot accurately reflect the node's real processing capability and cannot be changed dynamically as the server's processing capability varies during scheduling. There has been no good way to resolve these two issues. In order to solve these problems and improve the processing capacity of distributed systems, this paper studies static and dynamic load balancing strategies for distributed systems, analyzes the weighted least connection scheduling algorithm, analyzes the load that different types of user request tasks place on machines in order to improve the weighted least connection algorithm, and applies the new algorithm to the cluster system's load distribution.

The rest of this paper is organized as follows. Section 2 presents an innovative dynamic load balancing algorithm based on task classification. Section 3 proves the effectiveness of the dynamic load balancing algorithm based on task classification. Section 4 presents the system implementation and experimental evaluations. Section 5 concludes the paper.

2. Dynamic load balancing algorithm based on task classification

In distributed parallel computing, dynamic load balancing scheduling adjusts the partition of tasks based on the current load status of the system. Task scheduling in parallel computing is still a key issue to be resolved, so we first study the dynamic load balancing mechanism in depth. We then divide tasks according to the type of the user's task request, since different types of client task make different demands on resources. A computation task mainly consumes CPU resources and has little need for I/O resources, while an I/O-based task mainly occupies I/O channels to complete input and output and has little need for CPU operations. When a machine has two such tasks, one task can execute CPU operations while the other simultaneously completes disk read and write operations, and vice versa. Therefore, client tasks are divided into the CPU consuming type and the I/O consuming type.

2.1. I/O consuming tasks and CPU consuming tasks

For I/O consuming and CPU consuming tasks, we define three parameters according to the program's behavior and use them to distinguish whether a task is of I/O type or CPU type. The definitions are as follows:

Definition 1. We use the proportion of reading and writing instructions in the total number of instructions to measure how large a share of the instructions are I/O instructions. We define this proportion as P.

Definition 2. We use the I/O instruction access frequency to denote, on average, how many CPU instructions are mixed with one I/O instruction. We define this access frequency as f.

Definition 3. We set T as the average cycle at which I/O instructions occur, that is, f = 1/T; T is the average number of CPU instructions in the interval between two I/O instructions.
Definition 4. We set N1 as the total number of reading and writing instructions.

Definition 5. We set N2 as the total number of CPU instructions.

In order to focus the analysis on the algorithm's basic performance and to make the analysis easier, we make the following assumptions:

Basic assumption 1. Because each CPU type instruction takes roughly the same time to execute, this time can be treated as a constant; we set the execution time of each CPU type instruction to C1.

Basic assumption 2. Because each I/O type instruction also takes roughly the same time to execute, this time can also be treated as a constant; we set the time required to execute each reading/writing instruction to C2.
Basic assumption 3. A program has N1 reading and writing instructions and N2 CPU instructions, and the total time spent executing the reading and writing instructions equals the total time spent executing the CPU type instructions.

From basic assumption 3, we can deduce the following:

C2·N1 = C1·N2,  N1/N2 = C1/C2,  P = N1/(N1+N2) = C1/(C1+C2)

where P is the proportion of reading and writing instructions in the total instructions. If a program's value of P is greater than C1/(C1+C2), the total time the program spends on reading and writing operations is greater than the total time it spends on CPU computing, and the program is an I/O consuming type task. If the value of P is less than C1/(C1+C2), the program spends less time on I/O than on CPU computing, and it is a CPU consuming type task. After the above analysis, we have the following definitions:

Definition 6. A program has N1 reading and writing instructions and N2 CPU computing instructions, and P is the proportion of reading and writing instructions in the total number of instructions; if P > C1/(C1+C2), then the program is an I/O consuming type task.

Definition 7. A program has N1 reading and writing instructions and N2 CPU computing instructions, and P is the proportion of reading and writing instructions in the total number of instructions; if P < C1/(C1+C2), then the program is a CPU consuming type task.

C1 and C2 are constants. If we assume C1 = C2, the threshold becomes P = 1/2: as long as the value of P is greater than 1/2 we regard the task as an I/O consuming type task, and if P is less than 1/2 it is a CPU consuming type task.

2.2. Task queue's internal prioritization partition

We set up a priority queue for I/O consuming type tasks and one for CPU consuming type tasks, determine the priority of each task according to its parameters, and then insert the tasks into the queue in order of priority from high to low. For I/O type tasks, we order the tasks by I/O wait time in descending order. We choose the number of I/O instructions and the frequency as the parameters that measure I/O wait time; according to the program analysis, the total number of I/O instructions is the key factor for measuring the size of an I/O consuming task, and the frequency is a secondary condition.

For example, consider two programs, as shown in Figure 1. Let P1, P2 be the I/O proportions of programs 1 and 2 respectively, T1, T2 their I/O cycles, and f1, f2 their frequencies.

Figure 1. I/O consuming tasks with the same frequency
Figure 2. I/O consuming tasks with varying frequency
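Before walking through the examples in Figures 1-3, the classification rule of Definitions 6 and 7 can be summarized in a minimal Python sketch. This is only an illustration of the rule; the function and parameter names (classify_task, n_rw, n_cpu, c_cpu, c_io) are our own and are not part of the original system.

```python
def classify_task(n_rw, n_cpu, c_cpu=1.0, c_io=1.0):
    """Classify a task as I/O consuming or CPU consuming.

    n_rw  : total number of reading/writing (I/O) instructions (N1)
    n_cpu : total number of CPU instructions (N2)
    c_cpu : execution time of one CPU instruction (C1, assumed constant)
    c_io  : execution time of one I/O instruction (C2, assumed constant)
    """
    p = n_rw / (n_rw + n_cpu)           # proportion P of I/O instructions
    threshold = c_cpu / (c_cpu + c_io)  # C1 / (C1 + C2); equals 1/2 when C1 = C2
    return "I/O consuming" if p > threshold else "CPU consuming"

# Example: 40 I/O instructions mixed with 20 CPU instructions gives P = 2/3 > 1/2.
print(classify_task(40, 20))   # I/O consuming
print(classify_task(10, 90))   # CPU consuming
```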
In Figure 1, comparing programs 1 and 2, the analysis is as follows: P1 = 2/3 > 1/2 and P2 = 2/3 > 1/2, so both programs are I/O consuming type tasks. For programs 1 and 2, T1 = (5+5)/2 = 5 and T2 = (5+5+5+5)/4 = 5, so obviously f1 = f2 = 1/T1 = 1/T2 = 0.2. Program 1 has 20 I/O instructions and program 2 has 40 I/O instructions. Obviously, program 2's I/O wait time is longer, that is, it occupies I/O resources for a longer time. So when two tasks have equal frequency, the number of I/O instructions is the decisive factor, and the task with more I/O instructions has the higher priority. Therefore, from the comparison of programs 1 and 2, we come to the first conclusion:

Conclusion 1. When two I/O consuming type tasks have equal frequency, the number of I/O instructions is the decisive factor: the task with the larger number of I/O instructions has the higher priority.

When two I/O consuming type tasks have different frequencies, the number of I/O instructions is still the decisive factor. For example, consider two procedures, as shown in Figure 2. Let P1, P2 be the I/O proportions of procedures 1 and 2 respectively, T1, T2 their I/O cycles, and f1, f2 their frequencies.

In Figure 2, comparing procedures 1 and 2, the analysis is as follows: P1 = 2/3 > 1/2 and P2 = 2/3 > 1/2, so both procedures are I/O consuming type tasks. For procedures 1 and 2, T1 = (5+5+5+5)/4 = 5, T2 = (10+5)/2 = 7.5, f1 = 1/T1 = 0.2 and f2 = 1/T2 ≈ 0.13. Procedure 1 has 40 I/O instructions and procedure 2 has 30 I/O instructions. Obviously, procedure 1 has more I/O instructions, waits longer for I/O, and therefore occupies I/O resources for a longer time. So when the frequencies of two tasks are not equal, the number of I/O instructions is still the decisive factor, and the task with more I/O instructions has the higher priority. Therefore, from the comparison of procedures 1 and 2, we come to the second conclusion:

Conclusion 2. When two I/O consuming type tasks have different frequencies, the number of I/O instructions is still the decisive factor: the task with the larger number of I/O instructions has the higher priority.

When two I/O consuming type tasks have the same number of I/O instructions, the frequency is the key factor for measuring the priority of the two tasks. For example, consider two procedures, as shown in Figure 3. Let P1, P2 be the I/O proportions of procedures 1 and 2 respectively, T1, T2 their I/O cycles, f1, f2 their frequencies, and N1, N2 their numbers of I/O instructions.

Figure 3. I/O consuming tasks with the same number of I/O instructions

In Figure 3, comparing procedures 1 and 2, the analysis is as follows: P1 = 4/7 > 1/2 and P2 = 2/3 > 1/2, so both procedures are I/O consuming type tasks. Procedure 1 has 20 I/O instructions and procedure 2 has 20 I/O instructions. For procedures 1 and 2,
T1 = (5+5+5)/3 = 5, T2 = (10+10)/2 = 10, f1 = 1/T1 = 0.2 and f2 = 1/T2 = 0.1. So when two tasks have the same number of I/O instructions, the frequency is the decisive factor for the task priority level. The procedure with the smaller frequency does not need to switch frequently between CPU and I/O instructions and spends most of its time performing I/O operations, while the procedure with the larger frequency switches often between I/O and CPU instructions and occupies the I/O device only for short periods, which gives other I/O consuming type tasks the chance to occupy I/O resources and thereby increases the utilization ratio of the I/O device. Therefore, after comparing procedures 1 and 2, we come to the third conclusion:

Conclusion 3. When two I/O consuming type tasks have the same number of I/O instructions, the task with the smaller frequency has the higher priority.

After the above analysis, the priority inside the I/O consuming type task queue should be determined by first comparing the number of I/O instructions: the greater the number of instructions, the higher the priority. When the numbers of I/O instructions are the same, the frequencies are compared: the smaller the frequency, the higher the priority. Through this comparative analysis, the I/O consuming type task queue is finally arranged as a queue in descending order of the tasks' I/O wait time. Based on the above three conclusions, we obtain Conclusion 4:

Conclusion 4. To compare the priorities of two I/O consuming tasks, first compare their numbers of I/O instructions: the task with more instructions has the higher priority. When the two tasks have the same number of instructions, compare their frequencies: the task with the smaller frequency has the higher priority.

The priority criterion inside the CPU consuming type task queue is just the opposite of the standard for I/O consuming type tasks. The fewer the I/O instructions, the more the CPU instructions; such a task consumes fewer disk resources and more CPU resources. So when two CPU consuming type tasks are compared, the number of I/O instructions is compared first: the more I/O instructions, the lower the priority. When the numbers of I/O instructions are the same, the frequencies are compared: the greater the frequency, the higher the priority. The CPU consuming task queue is finally ordered by the tasks' I/O wait time from small to large, that is, arranged as a queue in descending order of CPU utilization ratio.

2.3. Real-time task urgency analysis

A real-time task is more stringent about time; its urgency is the significance of dealing with the task, or the strength of the real-time requirement for dealing with it. In a task queue, not all tasks have the same urgency; the urgency is set according to the analysis of the conditions of each specific task. The time limit is the time constraint on processing a task; a task should be processed within its time limit in order to meet the needs of users. Urgency and time limit are not necessarily linked: a task with a shorter processing time limit does not necessarily have a higher urgency, and tasks with the same time limit can have different urgencies. We assume that the urgency and the time limit of a real-time task are known; considering the various factors involved in handling the task, we can calculate the priority of a real-time type task. For a task Ri we define the following parameters: CP(Ri) is the comprehensive parameter of processing task Ri; uti is the urgency of processing Ri; dti is the time limit of processing Ri; tsi is Ri's start time; CPUti is the CPU time consumed by Ri; IOti is the I/O time consumed by Ri; Mti is the longest time Ri is allowed to wait before it reaches its time limit; t is the current time; Vk are the coefficients. CP(Ri)
's computational formula is as follows:

CP(Ri) = uti + V1·(t − tsi) + V2·dti + V3·CPUti + V4·IOti + V5·Mti
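As an illustration of how a dispatcher could apply the formula above together with the queue-ordering rules of Section 2.2, the following Python sketch is a minimal reconstruction under stated assumptions: the coefficient values V1-V5, the Task fields and all function names are our own placeholders, not values or interfaces prescribed by the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    n_io: int              # number of I/O (read/write) instructions
    freq: float            # I/O access frequency f = 1/T
    urgency: float = 0.0   # ut_i (real-time tasks only)
    deadline: float = 0.0  # dt_i, time limit
    start: float = 0.0     # ts_i, start time
    cpu_time: float = 0.0  # CPUt_i
    io_time: float = 0.0   # IOt_i
    max_wait: float = 0.0  # Mt_i, longest allowed wait before the time limit

# Illustrative coefficients V1..V5; the paper does not give concrete values.
V1, V2, V3, V4, V5 = 1.0, 1.0, 1.0, 1.0, 1.0

def cp(task: Task, now: float) -> float:
    """Comprehensive parameter CP(Ri), following the reconstructed formula above."""
    return (task.urgency
            + V1 * (now - task.start)
            + V2 * task.deadline
            + V3 * task.cpu_time
            + V4 * task.io_time
            + V5 * task.max_wait)

def order_io_queue(tasks):
    """Conclusion 4: more I/O instructions first; ties broken by smaller frequency."""
    return sorted(tasks, key=lambda t: (-t.n_io, t.freq))

def order_cpu_queue(tasks):
    """CPU queue (Section 2.2): fewer I/O instructions first; ties broken by larger frequency."""
    return sorted(tasks, key=lambda t: (t.n_io, -t.freq))

def order_realtime_queue(tasks, now):
    """Real-time queue: tasks with a larger comprehensive parameter CP(Ri) are served first."""
    return sorted(tasks, key=lambda t: cp(t, now), reverse=True)
```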
Thus, through urgency computation a real-time task with higher priority obtains service earlier, which shortens the waiting time for users.

3. Proof of the algorithm's effectiveness

The I/O consuming type tasks are assigned to the machines in descending order of average I/O waiting time; when each machine's I/O wait time is roughly the same, the machines' load is balanced. The CPU consuming type tasks are assigned to the machines in descending order of CPU utilization ratio; when each machine's CPU utilization ratio is roughly the same, the machines' load is balanced. Since a machine's CPU consuming tasks and I/O consuming tasks can execute concurrently without disturbing each other, assigning CPU consuming and I/O consuming tasks to the machines in an interleaved way also keeps the load balanced. The proof is as follows.

Proof. Assume there are N tasks T1, T2, ..., Tn and M machines C1, C2, ..., Cm, and assume that among the N tasks, I tasks are of I/O type and the remaining N−I tasks are of CPU type. (t1) The I I/O tasks are ordered by I/O wait time in descending order: T1, T2, ..., Ti. (t2) The N−I CPU tasks are ordered by CPU occupancy ratio in descending order: Ti+1, ..., Tn. (t3) The machines C1, C2, ..., Cm are ordered by CPU occupancy ratio from small to large. (t4) The machines C1, C2, ..., Cm are ordered by I/O wait time from small to large.

When N = M: the I I/O consuming type tasks are assigned to the first I machines of ordering (t4), and the N−I CPU consuming type tasks are assigned to the first N−I machines of ordering (t3). At this time each machine has at least one task, each machine's I/O wait time is roughly the same and each machine's CPU occupancy ratio is roughly the same, namely the load balanced state is reached.

When N > M: If I < M, the I I/O consuming type tasks are assigned to the first I machines of ordering (t4), and CPU consuming type tasks are assigned to the first M−I machines of ordering (t3). At this time each machine has at least one task, each machine's I/O wait time is roughly the same and each machine's CPU occupancy ratio is roughly the same, namely the load balanced state is reached. For the remaining N−M CPU type tasks, the M machines' CPU utilization ratios and average I/O wait times are updated and the machines are re-queued according to the algorithm as follows: (t5) the machines C1, C2, ..., Cm are re-ordered by CPU occupancy ratio from small to large; (t6) the machines C1, C2, ..., Cm are re-ordered by I/O wait time from small to large. The remaining N−M CPU consuming type tasks are then assigned to the first N−M machines of ordering (t5).

If I ≥ M, the first M I/O consuming type tasks of (t1) are assigned to the M machines of ordering (t4). At this time each machine has received one more task, each machine's average I/O wait time is roughly the same and each machine's CPU occupancy ratio is roughly the same, so the system as a whole reaches the load balanced state. The M machines' CPU utilization ratios and average I/O wait times are then updated and the machines are re-queued as above: (t5) by CPU occupancy ratio from small to large; (t6) by I/O wait time from small to large. The remaining I−M I/O consuming type tasks are assigned to the first I−M machines of ordering (t6). At this time all I/O consuming type tasks have received a response and the entire system is in the load balanced state. If I is much larger than M, divide I by M to obtain quotient T and remainder V; T denotes the number of rounds in which the I/O consuming type tasks are cyclically assigned M at a time, and V denotes the number of tasks remaining in the last round.
Because every time M I/O consuming type tasks are assigned to the M machines the machines' I/O wait times remain balanced, and they are still balanced after the machines' average I/O wait times are updated and the next round of tasks is allocated, assigning the last remaining V tasks to V machines also keeps the entire system load balanced. The case of I CPU consuming type tasks and N−I I/O consuming type tasks is symmetric. Therefore the algorithm ensures that the entire system stays load balanced.

4. Experiment

We use simulation experiments to verify BTC's availability and feasibility, and then analyze the algorithm's efficiency. The cluster uses the master/slave mode: the entire cluster system has one master node and a number of server nodes. We divided the server node resources into two categories; six homogeneous nodes form one cluster and seven heterogeneous nodes form another cluster. The simulation experiments show the algorithm's performance on the two types of cluster. The master node first analyzes the user's connection request and then, according to the characteristics of the task, provides the four parameters P, f, N1, N2; based on these parameters the load balancing algorithm distinguishes the task type and then allocates the task. The master node receives the real-time status that each server node sends back; after a task is executed on a node, the node updates its own resource occupancy ratio and sends its state to the master node. The process is shown in Figure 4.

Figure 4. The running process of the master control node

4.1. Homogeneous cluster experiment results

We assume that the client issues 30 task requests. Using the least connection scheduling algorithm (LCS), each node is allocated 5 tasks; after running the 30 tasks in the experiment, each server node's CPU utilization ratio is shown in Figure 5. Running the BTC algorithm, the CPU occupancy ratios and I/O wait times of the 30 tasks are listed in Table 1, Table 2 and Table 3, and the node machine label of each task is shown in Table 4. In order to compare the results of the two load balancing algorithms conveniently, Figures 5 and 6 compare each server node's CPU utilization ratio and I/O wait time in the cluster system under both algorithms.

Table 1. The tasks' CPU occupancy ratio and I/O wait time
Task number:              1    2    3    4    5    6    7    8    9   10
CPU occupancy ratio (%):  0   00   55   40   55    0   30   50    5   60
I/O wait time (s):       00   50  500   00  600  400  450  050  500   50
Table 2. The tasks' CPU occupancy ratio and I/O wait time
Task number:             11   12   13   14   15   16   17   18   19   20
CPU occupancy ratio (%): 45    0   40   60   55   50   38    0   40   90
I/O wait time (s):       00  450   00  400   00  800  000  350  300  550

Table 3. The tasks' CPU occupancy ratio and I/O wait time
Task number:             21   22   23   24   25   26   27   28   29   30
CPU occupancy ratio (%): 35   80    7   45    0   00   30    0   00    5
I/O wait time (s):      800   00   00   00   00   00  500   00  600  350

Table 4. Task node machine labels in the homogeneous cluster
Server node: 1 2 3 4 5 6
Task labels: 7 8 9 8 6 7 3 0 4 5 6 4 9 5 0 3 5 3 7 30 6 8 4 9

Figure 5. Homogeneous cluster server nodes' CPU utilization ratio comparison chart

Comparing the data in Figure 5, we can see that in the cluster using the least connection scheduling algorithm each node has five tasks, but the CPU utilization ratios of the nodes differ considerably. Compared with the least connection scheduling algorithm, the new algorithm improves the efficiency of the whole cluster system and provides services in a more balanced way.
Figure 6. Homogeneous cluster server nodes' I/O wait time comparison chart

Comparing the data in Figure 6, we can see that in the cluster using the least connection scheduling algorithm the I/O wait times of the nodes differ considerably, while the algorithm proposed in this paper greatly improves the utilization of CPU and I/O resources. As the number of tasks increases, the system can provide more efficient service. The experimental results show that, in the homogeneous cluster, the innovative dynamic load balancing algorithm based on task classification is more efficient and more widely applicable.

4.2. Heterogeneous cluster experiment results

We assume that the client issues 30 task requests; the 30 tasks are those listed in Table 1, Table 2 and Table 3. The weighted least connection scheduling algorithm is used for comparison; it still measures the current load condition by the number of connections to each machine. The CPU utilization ratio is shown in Figure 7.

Figure 7. Heterogeneous cluster server nodes' CPU utilization ratio comparison chart

Compared with the weighted least connection algorithm, BTC greatly improves the efficiency of the whole cluster system and makes the cluster system provide services in a more balanced way. As the task size increases, the new algorithm provides higher efficiency than the weighted least connection algorithm. The comparison of I/O wait times is shown in Figure 8.
Figure 8. Heterogeneous cluster server nodes' I/O wait time comparison chart

Comparing the data in Figure 8, we can see that in the cluster using the weighted least connection scheduling algorithm the I/O wait times of the nodes differ considerably, whereas with the new algorithm each node's I/O wait time is roughly the same, that is, each node has the same load condition and can take full advantage of the I/O resources. The experimental results show that in the heterogeneous cluster the new algorithm is more efficient than the weighted least connection algorithm; it makes better use of each node's CPU and I/O resources, and the system's overall performance increases greatly.

5. Conclusion

The core of load balancing technology is task scheduling: distributing multiple task requests evenly to different processing nodes for parallel processing provides a solution for serving a large number of concurrent users, achieves parallel processing, lets every node reach its maximum effective utilization ratio, and thereby enhances the system's throughput and guarantees the overall system efficiency. This paper analyzed the shortcomings of the least connection scheduling algorithm and the weighted least connection scheduling algorithm and proposed an innovative dynamic load balancing algorithm based on task classification. Based on reasonable assumptions about actual network conditions, we gave the algorithm description and proved its validity in theory, and then demonstrated its feasibility through simulation experiments. Compared with the weighted least connection scheduling algorithm and the least connection scheduling algorithm, the experimental data show that the new algorithm makes full use of machine resources; over a long run it distributes tasks more evenly across the server nodes of the whole cluster pool. Although the proposed algorithm makes full use of machine resources, reduces users' waiting time and truly balances the tasks distributed to the server nodes, each task allocation needs to re-gather the machine state parameters of every node in the server pool, so in practical applications the algorithm increases the communication overhead. Follow-up work will study how to reduce the algorithm's communication overhead and how to adapt the new algorithm to real system environments.

6. References

[1] M. Colajanni, P. S. Yu, and D. M. Dias, "Analysis of task assignment policies in scalable distributed web-server systems", IEEE Trans. Parallel Distrib. Syst., vol. 9, no. 6, pp. 585-600, 1998.
[2] Andrew S. Tanenbaum, Distributed Operating Systems, Prentice-Hall Inc., 1995.
[3] T. Schroeder, S. Goddard and B. Ramamurthy, "Scalable Web Server Clustering Technologies", IEEE Network, pp. 38-45, May/June 2000.
[4] V. Cardellini, M. Colajanni and P. S. Yu, "Dynamic Load Balancing on Web-Server Systems", IEEE Internet Computing, pp. 28-39, May-June 1999.
[5] Y. M. Teo and R. Ayani, "Comparison of Load Balancing Strategies on Cluster-based Web Servers", Transactions of the Society for Modeling and Simulation, 2001.
[6] Yongjun Luo, Xiaole Li, Ruxiang Sun, "Load Balancing Algorithms Overview", Sci-Tech Information Development and Economy, vol. 8, no. 3, pp. 34-35, 2008.
[7] L. Rilling, S. Sivasubramanian, G. Pierre, "High availability and scalability support for web applications", Proceedings of the 2007 International Symposium on Applications and the Internet, Ottawa: NRC Research Press, pp. 55-533, 2007.
[8] Keith W. Ross, David D. Yao, "Optimal load balancing and scheduling in a distributed computer system", Journal of the ACM, vol. 38, no. 3, pp. 676-690, July 1991.
[9] H. Bryhni, E. Klovning and O. Kure, "A Comparison of Load Balancing Techniques for Scalable Web Servers", IEEE Network, pp. 58-63, July/August 2000.
[10] T. Chen, "Load Balancing Strategies on Cluster-based Web Servers", Project Report, Department of Computing Science, National University of Singapore, http://www.comp.nus.edu.sg/~teoym/ref7.pdf, 2000.
[11] A. Iyengar, A. MacNair and E. Nguyen, "An Analysis of Web Server Performance", IEEE GLOBECOM, vol. 3, pp. 943-947, 1997.
[12] R. J. Schemers, "A Load Balancing Name Server in Perl", In Proc. of the 9th Systems Administration Conference, 1995.
[13] W. Zhang, "Linux Server Clusters for Scalable Network Services", Free Software Symposium, China, 2000.
[14] W. Zhang and W. Zhang, "Linux Virtual Server Clusters", Linux Magazine, November 2003.
[15] M. Colajanni, P. Yu and D. M. Dias, "Analysis of task assignment policies in scalable distributed Web-server systems", IEEE Trans. on Parallel and Distributed Systems, June 1998.
[16] Xin Ye, Xinghua Bing, Liming Zhu, "A Deadlock Detection Method for Inter-organizational Business Process Based on Role Network Model", AISS, vol. 3, no. 10, pp. 486-496, 2011.
[17] Hamid Turab Mirza, Ling Chen, Gencai Chen, "Practicability of Dataspace Systems", JDCTA, vol. 4, no. 3, pp. 33-43, 2010.
[18] Hongbin Wang, Zhiyi Fang, Guannan Qu, Xi Zhang, Yunchun Zhang, "A Novel Weight Distribution Scheduling Algorithm", Journal of Information & Computational Science, vol. 8, no. 8, pp. 35-43, 2011.
[19] Hongbin Wang, Zhiyi Fang, Shuang Cui, "Dynamic Adaptive Feedback of Load Balancing Strategy", vol. 8, no. 10, pp. 90-908, 2011.