WLBS: A Weight-based Metadata Server Cluster Load Balancing Strategy

Ji-Lin Zhang, Wen Qian, Xiang-Hua Xu*, Jian Wan, Yun-Yu Yin, Yong-Jian Ren
School of Computer Science and Technology, Hangzhou Dianzi University, China
*Corresponding author: xhxu@hdu.edu.cn

Abstract

The metadata server (MDS) is a critical component of an object-based storage system: it manages the file system name space, directs user access, and maps files and directories to physical storage devices. Existing metadata server cluster load balancing strategies fail to consider heterogeneous metadata server clusters, creating a potential performance bottleneck. In this paper we introduce WLBS, a dynamic load balancing strategy for heterogeneous metadata server clusters. For each metadata server in the cluster, WLBS precisely evaluates its inherent metadata processing capability with a performance model based on IO latency, and it uses this model to recommend subtree migrations between metadata servers that balance load and improve overall performance. Each metadata server is assigned workload in proportion to its weight. Extensive experiments demonstrate that WLBS keeps the metadata server cluster load balanced in heterogeneous environments.

Keywords: Object-Based Storage, Metadata Server Cluster, Load Balancing, Weight-Based.

1. Introduction

With the development of information technology, the global amount of stored information is growing explosively. Petabyte-scale data-intensive applications such as cloud computing [1] generate massive numbers of requests to backend storage systems. An object-based storage system [2] physically separates the responsibilities of metadata and file content, capturing both the fast access of a SAN [3] and the safe data sharing of NAS [4]. Object-based storage is widely regarded as the future of massive data storage technology.

The MDS (MetaData Server) is a critical component of an object-based storage system: it manages the distributed file system name space, directs user access, and maps files and directories to physical storage devices. Although an individual piece of metadata is relatively small, metadata is accessed very frequently and requires high consistency. Leung et al.'s work [5] shows that metadata transactions account for more than 60 percent of all operations in distributed file systems, which indicates that the IO efficiency of an object-based storage system largely depends on metadata processing performance. Centralized metadata processing schemes such as Hadoop [6] and Lustre [7,8] exist, but their performance and availability are limited by the single server. Since massive data processing requires high performance and availability, distributed schemes that employ multiple servers to form an MDS cluster have attracted more and more attention. To maximize the metadata processing performance of the MDS cluster, and to keep the MDS from becoming a system bottleneck under limited resources, load balancing across the MDS cluster is critical.

The study of MDS cluster load balancing focuses mainly on data partitioning and MDS popularity evaluation. First, data partitioning includes directory subtree partitioning, as in StorageTank [9], NFS [10], and Ceph [11], and Hash distribution, as in zFS [12], Lustre, and Lazy Hybrid [13]. Directory tree partitioning schemes can be divided into static and dynamic subtree partitioning; both exploit the locality of metadata operations to accelerate directory traversal. In static partitioning the load distribution is designated by the administrator and cannot adjust promptly when the load changes; dynamic partitioning has a complex design, and its performance is efficient only for small and medium-sized storage systems. Hash distribution uses the file's absolute path or file content as the Hash function input to determine the storage position of the metadata. This strategy is simple and efficient, and the client can locate files directly.
Furthermore, because files are distributed thoroughly by the Hash function, the concentrated access that locality would otherwise cause on a single node is avoided.
However, once the server cluster size changes, Hash distribution may lead to large amounts of data migration. Second, the estimation of server load is based on the length of the request queue, the access rate, the response time [14], and the CPU load rate, combined with certain coefficients into a weighted value [15] so as to assess the server load comprehensively.

All of the above schemes are designed on the assumption that every MDS in the cluster has the same processing capability. In practice, however, MDS clusters with heterogeneous processing capacity are very common. It is well known that servers with different processing capabilities behave differently when handling the same workload. An MDS with low processing capability easily becomes the performance bottleneck of the system if it is allocated the average workload, and the resulting unbalanced workload distribution further lowers the overall system processing performance.

To address these problems, we propose WLBS, a weight-based metadata server cluster load balancing strategy. The main idea of WLBS is as follows. First, collect the mapping between the number of parallel accessing IOs and the metadata operation latency, and use least squares to compute a linear fit; the slope of the fitted line indicates the performance capability of the MDS, from which its weight (denoted W) is derived. Second, use Equation 2 to count the popularity of each MDS (denoted work_load). Third, use the parameter L = work_load / W to represent the load level of each MDS and compute the average load level of the MDS cluster. Finally, to make each MDS's workload proportional to its performance capability, overloaded MDSs take the initiative to migrate workload by transferring parts of their subtrees to underloaded nodes in the cluster.

The rest of this paper is organized as follows. Section 2 presents the technology related to our research. In Section 3 we describe the load balancing strategy and the MDS performance evaluation model. Section 4 presents the experiments and analyzes the results. We conclude on the effectiveness of the strategy in Section 5.

2. Related technology

2.1. Architecture of the Ceph file system [16]

Ceph is an object-based distributed file system developed by the University of California. It has been integrated into the Linux kernel (from version 2.6.34 onwards). As shown in Figure 1, the architecture of Ceph can be divided into four parts: Client, MDS Cluster, OSD Cluster, and Monitor. The MDS Cluster caches and synchronizes distributed metadata, manages the name space, and directs client access; the OSD Cluster stores metadata as objects; the Monitors observe the status of the other nodes in the distributed file system.

Figure 1. Architecture of the Ceph File System
2.2. Dynamic subtree partitioning

Ceph employs a dynamic subtree partitioning load balancing strategy for the MDS cluster. The MDS cluster manages and distributes the name space of the file system (see Figure 2), and the servers share their workload intensity periodically. When the workload of one MDS rises above a predefined threshold, it invokes subtree migration according to the excess popularity: it selects an appropriate directory subtree and migrates it to an underloaded node to balance the system workload.

Figure 2. The distribution of the file system name space in the MDS cluster

2.3. MDS load standard

The load of a node is a weighted combination of the directory-subtree metadata load, the request rate, and the request queue length:

my_load = 0.8*auth_meta_load + 0.2*all_meta_load + req_rate + 10.0*queue_len    (1)

In Equation 1, auth_meta_load represents the load on metadata for which the node is authoritative, all_meta_load represents the load on all metadata cached by the node, req_rate represents the rate of requests sent by clients, and queue_len represents the request queue length.

2.4. Metadata popularity

Each directory and file has a corresponding popularity counter that measures the popularity of that file or directory. Whenever a client sends a metadata operation request, such as open, close, or rename, the corresponding counter increases the popularity by a certain value V. The update is defined as follows:

meta_load = meta_load * f(t) + V    (2)

To account for the timing of a metadata operation's impact on popularity, the counter value decays exponentially as time passes. The decay function is defined as:

f(t) = exp(t * ln(0.5) / 5)    (3)

where t is the time elapsed since the last update, so the counter has a half-life of 5 time units. (A code sketch of Equations 1-3 appears at the end of this section.)

2.5. Deficiency

The dynamic subtree partitioning strategy lets the MDS cluster maintain both scalability and high performance, but only on the condition that every MDS has the same performance capability. Owing to changing business or funding conditions, devices purchased at different times are often deployed simultaneously in practice, and because of the rapid development of hardware technology these devices perform differently. If two MDSs with greatly different performance capabilities process the same workload, the weaker MDS easily becomes the system bottleneck, which may make the overall performance of the two MDSs even lower than that of a single one. The resulting load imbalance in the MDS cluster decreases IO efficiency. This is a real limitation.
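To make the load standard and the popularity decay of Sections 2.3 and 2.4 concrete, the following minimal Python sketch shows how they might be maintained. It is our own rendering, not code from Ceph; the attribute names and the interpretation of t as the time elapsed since the last request are assumptions.

import math

HALF_LIFE = 5.0  # half-life of the popularity counter (Equation 3)

def mds_load(auth_meta_load, all_meta_load, req_rate, queue_len):
    # Load standard of Equation 1.
    return (0.8 * auth_meta_load + 0.2 * all_meta_load
            + req_rate + 10.0 * queue_len)

class PopularityCounter:
    # Decayed popularity counter of a file or directory (Equations 2 and 3).
    def __init__(self):
        self.value = 0.0
        self.last_update = 0.0  # timestamp of the last request

    def hit(self, now, v=1.0):
        # Decay the old value by f(t) = exp(t * ln(0.5) / HALF_LIFE), where t
        # is the time elapsed since the last request, then add V (Equation 2).
        t = now - self.last_update
        self.value = self.value * math.exp(t * math.log(0.5) / HALF_LIFE) + v
        self.last_update = now

Every open, close, or rename on an item would call hit(), so the popularity of rarely accessed subtrees fades toward zero while hot subtrees keep a high counter value.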
3. Load balancing strategy of the metadata server cluster

As outlined in the introduction, WLBS works in four steps. First, each MDS collects the mapping between the number of parallel accessing IOs and the metadata operation latency, and a least-squares linear fit is computed; the slope of the fitted line indicates the performance capability of the MDS and gives its weight W. Second, Equation 2 is used to count the popularity of each MDS, giving its work_load. Third, the parameter L = work_load / W represents the load level of each MDS, and the average load level of the cluster is computed. Finally, to make each MDS's workload proportional to its performance capability, overloaded MDSs take the initiative to migrate workload by transferring parts of their subtrees to underloaded nodes in the cluster.

3.1. MDS performance evaluation model

The main target of cluster load balancing is to distribute the workload in proportion to the performance capability of each server, minimizing the execution time of the application [17]. A very important prerequisite for achieving such a strategy in a heterogeneous environment is therefore to evaluate precisely the inherent metadata processing capability of each MDS.

In general, the processing capability of an MDS is mainly related to its CPU, memory, network bandwidth, disk performance, operating system, and so on [18]. Yu et al.'s work [19] assigns each factor a weight according to its influence and combines the factors with certain coefficients into a weighted value that evaluates the capability of the server comprehensively. But such models face two difficulties. First, different workload types usually have different characteristics and place different demands on the server. Second, there may be inter-constraints between factors; for example, one weakly configured component may become a performance bottleneck that prevents the other components of the MDS from running at full capacity. Such a model therefore finds it hard to quantify the influence of each factor accurately, and the choice of factor weights itself affects the evaluation.

An MDS with powerful performance processes each request quickly, and the same length of request queue is processed in a shorter time, which corresponds to a smaller slope of the curve through the <clients, latency> data pairs. Moreover, Ajay Gulati et al.'s work [20] shows that as the number of parallel accessing IOs increases, the average operation latency increases linearly, because request latency grows linearly with queueing delay. Request latency is an important property of QoS [21], and to minimize its growth we should give priority in workload distribution to the MDS with the minimum <clients, latency> line slope. The <clients, latency> line can therefore be used to evaluate the metadata processing capability of an MDS. In this paper, the weight is defined as W = 1/slope.

Given the above, we collect data points of the form <clients, latency> for a period of time and then use least squares to compute a linear fit [22]. We assume the relationship between the operation latency Y and the number of clients X is

Y = a + bX + ε    (4)

where ε is the sample error vector, assumed to satisfy ε ~ N(0, σ²). The samples are X = [x1, x2, ..., xn]ᵀ and Y = [y1, y2, ..., yn]ᵀ. The least-squares estimate of the slope is

slope = b̂ = Σ(x_i - x̄)(y_i - ȳ) / Σ(x_i - x̄)²    (5)

mdtest [23] is an MPI-coordinated metadata benchmark that performs open/stat/close operations on files and directories; we use it as the metadata workload generator. The slope of the fitted line indicates the metadata processing capability of the MDS.
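As a concrete sketch of this calibration step (our own illustration; the sample values are invented), the slope of Equation 5 and the weight W = 1/slope can be computed from the collected <clients, latency> pairs as follows:

def mds_weight(samples):
    # samples: list of (clients, latency) pairs collected over a period of time.
    # Returns W = 1/slope, where slope is the least-squares estimate b^ of
    # Equation 5; a flatter line (a faster MDS) yields a larger weight.
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    den = sum((x - mean_x) ** 2 for x, _ in samples)
    return den / num  # 1 / (num / den)

# Example: latency grows by roughly 2 ms per additional client on this MDS,
# so its weight comes out near 0.5.
w = mds_weight([(8, 30.1), (10, 34.0), (12, 38.2), (14, 41.8), (16, 46.0)])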
Figures 3 and 4 show that the average metadata operation latency increases linearly as the number of clients grows. When the number of clients is small we may see non-linear behavior, because the system has not yet reached its throughput limit; within the reasonable range (8-16), however, latency and the number of clients are linearly correlated. Owing to fluctuations in system performance and to sampling error, the sample data follow the linear trend without falling exactly on a straight line.

We performed two experiments to further demonstrate this conclusion, testing the influence of the network bandwidth and of the CPU frequency on the performance of the MDS.
Figure 3 shows the <clients, latency> correlation lines collected under three groups of different network bandwidths. The experiment indicates that as the network bandwidth increases from 50 Mbit/s to 200 Mbit/s, the slope of the <clients, latency> line decreases monotonically. Figure 4 shows the <clients, latency> correlation lines collected on MDSs with two different CPU frequencies; the correlation between the slope of the line and the CPU frequency of the MDS is likewise negative. Both experiments show that as the MDS hardware configuration improves, the corresponding slope of the line decreases monotonically. The model described above therefore captures the internal details of an MDS, avoids the difficulty of quantifying influence factors in a "white-box" performance model, and evaluates the metadata processing capacity of an MDS precisely.

Figure 3. Variation of average operation latency with the number of clients under different network bandwidths

Figure 4. Variation of average operation latency with the number of clients under different CPU frequencies

3.2. Weight-based load balancing strategy

The weight W obtained from the performance model above measures the metadata processing capability of an MDS. Weight and load level information are shared periodically between the MDSs in the form of heartbeats. During migration, parts of the directory subtree are transferred to appropriate underloaded nodes, so that each MDS is allocated a proportion of the workload corresponding to its weight. This prevents a weak node from becoming the system bottleneck under heavy workload, ensures that strong nodes are used sufficiently, and thus maximizes the metadata processing capability of the system.

Equation 6 measures the imbalance degree of an MDS:

IM_i = my_load_i - target_load_i    (i ∈ M(n))    (6)

M(n) represents the set of MDSs in the cluster, and the number of MDSs in the cluster is |M(n)| = N. IM_i, the imbalance degree of MDS_i, is the signed difference between the current load and the target load of the MDS. IM_i > 0 means that the load of node i is heavy and directory subtrees should be exported; IM_i < 0 means that the load is light and more load can be accepted. The target load of an MDS is determined by its weight:

target_load_i = (W_i / W_total) * total_load    (7)

The target load is the expected load of the MDS at the current stage, numerically proportional to its performance capability. The total load of the MDS cluster is defined as:

total_load = Σ my_load_i, summed over all i ∈ M(n)    (8)
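Before the full procedure is given, the following minimal Python sketch (our own illustration; loads and weights are assumed to be per-MDS dictionaries filled from the heartbeat messages) shows how Equations 6-8 combine:

def imbalance_degrees(loads, weights):
    # loads:   {mds_id: my_load}, the current load of each node (Equation 1)
    # weights: {mds_id: W}, the weight of each node from the fitted slope
    total_load = sum(loads.values())             # Equation 8
    total_weight = sum(weights.values())
    # IM_i = my_load_i - target_load_i (Equation 6, target from Equation 7)
    return {i: loads[i] - (weights[i] / total_weight) * total_load
            for i in loads}

A node whose imbalance degree exceeds the threshold is a candidate exporter; one whose degree falls below the negative threshold is a candidate importer.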
IM_threshold denotes the imbalance threshold. The following load balancing procedure is launched whenever some MDS in the cluster has IM_i > IM_threshold.

(1) Initialization
    Importer_set = Φ
    Exporter_set = Φ
    for i = 1 to N
        /* Target load is proportional to the weight of the MDS */
        target_load_i = (W_i / W_total) * total_load
        if my_load_i - target_load_i >= IM_threshold
            /* Put the overloaded node into the export set */
            Exporter_set = Exporter_set ∪ {MDS_i}
        else if target_load_i - my_load_i >= IM_threshold
            /* Put the underloaded node into the import set */
            Importer_set = Importer_set ∪ {MDS_i}

(2) Determine the migration mapping between nodes
    for each i ∈ Exporter_set
        if IM_i <= 0.0
            continue
        for each j ∈ Importer_set
            if IM_j <= 0.0
                continue
            MATCH(MDS_i, MDS_j)
            if IM_i <= 0.0
                break

(3) Match the migration popularity between nodes
    MATCH(MDS_i, MDS_j)
        P_mig = MIN(IM_i, IM_j)
        IM_i = IM_i - P_mig
        IM_j = IM_j - P_mig
        /* Migrate the popularity in the form of directory subtrees */
        MIGRATE(MDS_i, MDS_j, P_mig)
        return P_mig

(For a node in Importer_set, IM_j here denotes its spare capacity target_load_j - my_load_j, which is positive, so P_mig is always positive.)

The migration is carried out only when the current load of an MDS has exceeded its target load for more than 2 consecutive heartbeat cycles, which avoids the unnecessary overhead of frequent migrations between MDSs. Exporter_set stores the overloaded nodes and Importer_set stores the underloaded nodes, and each overloaded node MDS_i has an imbalance degree IM_i. In most cases the load migration procedure needs the coordination of several nodes, since the exported load does not usually match the imported load one to one; for example, an MDS may need to export a large amount of load and has to migrate its excess to more than one underloaded node. The migration procedure searches for directory subtrees whose popularity sums to P_mig and migrates them to the underloaded node. Each directory subtree represents a certain access popularity, so migrating a directory subtree means migrating workload. In this way each MDS node obtains workload proportional to its processing capability. In the end, the MDS cluster satisfies Equation 9 when the load is balanced:

my_load_i / W_i = my_load_j / W_j    (i, j ∈ M(n), and i ≠ j)    (9)
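The three phases can be put together in a self-contained way as follows. This is our own Python rendering of the pseudocode above, with an abstract migrate callback standing in for the subtree migration and an illustrative threshold value:

IM_THRESHOLD = 0.1  # illustrative imbalance threshold

def balance(loads, weights, migrate):
    # One WLBS balancing round. loads and weights are {mds_id: value};
    # migrate(src, dst, p) is expected to move directory subtrees whose
    # popularity sums to roughly p from src to dst.
    total_load = sum(loads.values())
    total_weight = sum(weights.values())

    exporters = {}  # mds_id -> IM_i, the excess load (phase 1)
    importers = {}  # mds_id -> spare capacity target_load_j - my_load_j
    for i in loads:
        target = (weights[i] / total_weight) * total_load
        if loads[i] - target >= IM_THRESHOLD:
            exporters[i] = loads[i] - target
        elif target - loads[i] >= IM_THRESHOLD:
            importers[i] = target - loads[i]

    for i, excess in exporters.items():          # phase (2)
        for j in importers:
            if importers[j] <= 0.0:
                continue
            p_mig = min(excess, importers[j])    # phase (3): MATCH
            excess -= p_mig
            importers[j] -= p_mig
            migrate(i, j, p_mig)
            if excess <= 0.0:
                break

When the round finishes, every node's load lies within the threshold of its weighted target, which is exactly the balanced state described by Equation 9.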
3.3. Cost analysis

Compared with the load balancing algorithm of Ceph, the additional overhead of WLBS falls mainly on the network and the CPU. Each MDS must send its weight (denoted W) and its target load (denoted my_target_load) to the other nodes in the cluster once per heartbeat cycle. The amount of data sent is small, so the network overhead is very limited. With respect to CPU overhead, let T1 be the preparation overhead before each data send and T2 the CPU overhead of the extra data transfer, so that the CPU overhead of each data send is T = 2*T1 + T2. For a single MDS, let N be the number of MDSs in the cluster; the normalized increment of CPU overhead per heartbeat cycle, P, is then:

P = (N-1)*T = 2(N-1)*T1 + (N-1)*T2    (10)

The additional data are transferred by encapsulating them in the system heartbeat packets, so the preparation overhead T1 is negligible. To quantify the extra overhead we conducted two sets of experiments, and the results met our expectations: Table 1 shows that the overhead of our strategy is negligible and acceptable.

Table 1. Performance overhead comparison

Strategy | MDS Throughput (make file) | MDS Throughput (make dir) | Network IO per Heartbeat Cycle | CPU Occupancy Rate
Ceph | 767.9/s | 473.93/s | 525 Byte | 4.0%
WLBS | 685.49/s | 4385.39/s | 565 Byte | 3.9%

4. Experiments and results analysis

We applied WLBS to Ceph and verified its efficiency by comparing it with the performance of the original load balancing strategy. The size of the MDS cluster is N = 3, each node has different performance, and we use the standard deviation SD(x) to represent the degree of capability difference within the MDS cluster. The experiments are conducted on a LAN, and clients continually send requests to the MDSs to simulate metadata access in an actual application. To measure the performance of the MDS cluster, each client continually creates, traverses, and removes directory trees whose depth is 4, branching factor is 4, and item number is 5. Table 2 shows the experimental environment.

Table 2. Experimental environment

Role | CPU | Memory | Bandwidth | OS
Clients | Intel Xeon E5507 2.27 GHz *4 | 6 GB | 100 MB/s | Linux
Monitor | Intel Xeon E5507 2.27 GHz *4 | 6 GB | 100 MB/s | Linux
OSD | Intel Xeon E5507 2.27 GHz *4 | 6 GB | 100 MB/s | Linux
MDS1 | Intel Xeon E5507 2.27 GHz *4 | 6 GB | 20 MB/s | Linux
MDS2 | Intel Xeon E5507 2.27 GHz *4 | 6 GB | 20 MB/s | Linux
MDS3 | Intel Xeon E5507 2.27 GHz *4 | 6 GB | 20 MB/s | Linux
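For reference, a directory-tree workload of this shape can be generated with mdtest [23] along the following lines; the number of MPI processes and the mount point are our assumptions, not taken from the paper:

mpirun -np 16 mdtest -z 4 -b 4 -I 5 -d /mnt/ceph/bench

Here -z sets the tree depth, -b the branching factor, -I the number of items per tree node, and -d the directory in the mounted file system under which the trees are created, traversed, and removed.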
Since the original MDS cluster load balancing strategy assumes that each MDS in the cluster has the same capability, it distributes the workload evenly across the MDSs. When the performance capabilities within the cluster are greatly diverse, however, some high-capability MDSs fail to run at full load while some low-capability MDSs become overloaded, which results in a performance decline for the whole cluster. In the experiments we limit the network bandwidth to change the performance of each MDS, simulating a heterogeneous cluster; to isolate the effect of heterogeneity on MDS cluster performance, the total performance of the MDS cluster is kept constant throughout the experiment.

Figure 5 shows that as the degree of capability difference SD(x) increases, the performance of the original balancing strategy declines monotonically, which further demonstrates its limitation on heterogeneous MDS clusters. WLBS, in contrast, keeps the MDS cluster performance approximately constant, decreasing by less than 3.37%. In particular, when SD(x) > 50 the performance of the original algorithm drops rapidly. Figure 6 shows that the average metadata operation latency behaves similarly as SD(x) grows: when SD(x) > 50, the average operation latency of the original strategy rises rapidly. Both experimental results indicate that WLBS can enhance the metadata processing performance of an object-based storage system on a heterogeneous cluster. The effectiveness of WLBS is thus verified.

Figure 5. MDS cluster throughput versus the degree of MDS capability difference SD(x)

Figure 6. Average operation latency versus the degree of MDS capability difference SD(x)

To measure the additional overhead caused by WLBS, two further sets of experiments tested the network and CPU overhead of the metadata load balancing strategy of Ceph and of WLBS, respectively. Three MDSs with identical hardware configurations were used, in order to exclude the effect of subtree migration. The results are shown in Table 1. The aggregate throughput loss caused by WLBS is lower than 0.07%, the CPU occupancy rate increases by only 0.1%, and the network overhead per heartbeat cycle increases by 0.08%. Considering the great performance improvement due to load balancing, the overhead on both network and CPU is negligible and acceptable. The feasibility of WLBS is thus verified.

5. Conclusions and future work

In this paper, to address the workload imbalance caused by performance-heterogeneous MDS clusters, we introduced WLBS, a dynamic load balancing strategy for heterogeneous MDS clusters. For each MDS in the cluster, WLBS precisely evaluates its inherent metadata processing capability with a performance model based on IO latency, and it uses this model to recommend subtree migrations between metadata servers that balance load and improve overall performance; each metadata server is assigned workload in proportion to its weight. The main features of WLBS are as follows: the IO-latency-based performance evaluation model uses the slope of the fitted <clients, latency> correlation line to indicate the inherent metadata processing capability of an MDS, and workload is assigned to each MDS in proportion to its weight, keeping the MDS cluster load balanced and achieving high overall throughput in heterogeneous environments. Experimental results show that the weight-based load balancing strategy improves the metadata processing performance of an MDS cluster in a heterogeneous environment. In the future we will test the stability and effectiveness of our strategy in industrial environments; we also plan to improve the metadata management to specifically optimize the IO efficiency of massive numbers of small files.

Acknowledgements

This paper is supported by the State Key Development Program of Basic Research of China under grant No. 2007CB30900, the Natural Science Foundation of China under grants No. 600043, 60093, 6003077, 60873023, and 60973029, the Zhejiang Provincial Natural Science Foundation under grants No. Y004, Y0092, and Y090940, and the priming scientific research foundation of Hangzhou Dianzi University under grants No. KYS05560004 and KYS05560809.
6. References

[1] Chengwei Yang, Shijun Liu, Lei Wu, Chenglei Yang, Xiangxu Meng, "The Application of Cloud Computing in Textile-order Service", JDCTA: International Journal of Digital Content Technology and its Applications, Vol. 5, No. 8, pp. 222-233, 2011.
[2] Mike Mesnier, Gregory R. Ganger, Erik Riedel, "Object-Based Storage", IEEE Communications Magazine, Vol. 41, No. 8, pp. 84-90, 2003.
[3] Garth A. Gibson, Rodney Van Meter, "Network attached storage architecture", Communications of the ACM, Vol. 43, No. 11, pp. 37-45, 2000.
[4] J. Tate, F. Lucchese, R. Moore, "Introduction to Storage Area Networks (Fourth Edition)", Redbook, International Business Machines Corporation, USA, 2006.
[5] Andrew W. Leung, Shankar Pasupathy, Garth Goodson, Ethan L. Miller, "Measurement and Analysis of Large-Scale Network File System Workloads", In Proceedings of the USENIX 2008 Annual Technical Conference, pp. 213-226, 2008.
[6] "HDFS Architecture", http://hadoop.apache.org/common/docs/r0.20.0/hdfs_design.pdf.
[7] Peter J. Braam, "The Lustre Storage Architecture", Technical report, Cluster File Systems, Inc., 2004.
[8] P. Schwan, "Lustre: Building a file system for 1000-node clusters", In Proceedings of the 2003 Linux Symposium, pp. 380-386, 2003.
[9] J. Menon, D. A. Pease, R. Rees, L. Duyanovich, B. Hillsberg, "IBM Storage Tank: a heterogeneous scalable SAN file system", IBM Systems Journal, Vol. 42, No. 2, pp. 250-267, 2003.
[10] Brian Pawlowski, Chet Juszczak, Peter Staubach, Carl Smith, Diane Lebel, David Hitz, "NFS Version 3 Design and Implementation", In Proceedings of the Summer 1994 USENIX Technical Conference, pp. 137-151, 1994.
[11] Sage A. Weil, Kristal T. Pollack, Scott A. Brandt, Ethan L. Miller, "Dynamic Metadata Management for Petabyte-scale File Systems", In SC '04: Proceedings of the 2004 ACM/IEEE Conference on Supercomputing, p. 4, 2004.
[12] Ohad Rodeh, Avi Teperman, "zFS: A Scalable Distributed File System Using Object Disks", In Proceedings of Mass Storage Systems and Technologies, pp. 207-218, 2003.
[13] Scott A. Brandt, Lan Xue, Ethan L. Miller, Darrell D. E. Long, "Efficient metadata management in large distributed file systems", In Proceedings of the 20th IEEE/11th NASA Goddard Conference on Mass Storage Systems and Technologies, pp. 290-298, 2003.
[14] Wang Juan, Feng Dan, Wang Fang, Liao Zhensong, "Load Balancing Algorithm in Metadata Server Cluster", Journal of Chinese Computer Systems, Vol. 30, No. 4, pp. 757-760, 2009.
[15] Shan Zhiguang, Dai Qionghai, Lin Chuang, Yang Yang, "Integrated Schemes of Web Request Dispatching and Selecting and Their Performance Analysis", Journal of Software, Vol. 12, No. 3, pp. 355-366, 2001.
[16] Sage A. Weil, Scott A. Brandt, Ethan L. Miller, Darrell D. E. Long, Carlos Maltzahn, "Ceph: A Scalable, High-Performance Distributed File System", In OSDI '06: Proceedings of the 7th Symposium on Operating Systems Design and Implementation, pp. 307-320, 2006.
[17] Rajkumar Buyya, "High Performance Cluster Computing: Architectures and Systems", Prentice Hall, USA, 2000.
[18] Guo Chengcheng, Yan Pulu, "A Dynamic Load Balancing Algorithm for Heterogeneous Web Server Cluster", Chinese Journal of Computers, Vol. 28, No. 2, pp. 79-84, 2005.
[19] Yu Lei, Li Zongkai, Guo Yuchan, Li Shuoxun, "Load balancing and fault tolerant services in multi-server systems", Journal of System Simulation, Vol. 13, No. 3, pp. 325-328, 2001.
[20] Ajay Gulati, Chethan Kumar, Irfan Ahmad, Karan Kumar, "BASIL: Automated IO Load Balancing Across Storage Devices", In FAST '10: Proceedings of the 8th USENIX Conference on File and Storage Technologies, pp. 13-13, 2010.
[21] Daniel A. Menascé, "QoS issues in Web services", IEEE Internet Computing, Vol. 6, No. 6, pp. 72-75, 2002.
[22] Peiyin Zhu, Qiang Zhong, Weili Xiong, Baoguo Xu, "A Robust Least Squares for Information Estimation Fusion", JDCTA: International Journal of Digital Content Technology and its Applications, Vol. 5, No. 4, pp. 65-73, 2011.
[23] http://mdtest.sourceforge.net/