A New Paradigm for Load Balancing in Wireless Mesh Networks

Abstract: Obtaining maximum throughput across a network or a mesh through optimal load balancing is known to be an NP-hard problem. Designing efficient load balancing algorithms for networks in the wireless domain becomes an especially challenging task due to the limited bandwidth available. In this paper we present heuristic algorithms for load balancing and maximum throughput scheduling in Wireless Mesh Networks with stationary nodes. The goals are to (a) improve the network throughput through admissibly optimal distribution of the network traffic across the wireless links, (b) ensure that the scheme is secure, and (c) ensure fairness to all nodes in the network for bandwidth allocation. The main consideration is the routing of non-local traffic between the nodes and the destination via multiple Internet gateways. Our schemes split an individual node's traffic to the Internet across multiple gateways that are accessible from it. Simulation results show that this approach results in a marked increase in average network throughput in moderate to heavy traffic scenarios. We also prove that under our algorithm it is computationally hard for an adversary to block a fraction of a node's available paths, making it extremely difficult to compromise all traffic from a node. Simulation results also show that our scheme is admissibly fair in bandwidth allocation, even to nodes with the longest paths to the gateway nodes.

Key Words: Fairness, Load Balancing, Resiliency, Traffic Splitting, Throughput, Wireless Mesh Networks

1. Introduction

Optimal load balancing across a mesh or a network is a known hard problem. Reference [1] describes load balancing as an optimization problem. References [2], [3], [4], [5] prove the NP-completeness of various load-balancing problems and provide approximation algorithms. Efficient load balancing in wireless networks becomes an even more challenging problem due to the limitations on available bandwidth and the unreliability of wireless links. In this paper we consider load balancing in wireless mesh networks with stationary nodes. These include Wireless Mesh [6], [7] and Community networks [8], such as the Self-Organizing Neighborhood Wireless Mesh Networks [9]. We provide heuristic algorithms for load balancing in wireless mesh networks with the following goals: (a) maximize network throughput through admissibly optimal distribution of the network traffic across the wireless links, (b) ensure the scheme is secure, and (c) ensure fairness to all nodes in the network for bandwidth allocation. Below we first motivate singling out these networks, over other conventional wireless networks, for the design of load balancing heuristics.

1.1. Motivation

The stationary nature of the nodes in wireless mesh networks warrants the design of robust and efficient traffic management protocols for them. Existing single-path and multipath traffic protocols for wireless networks are designed for single source-destination pairs (see Sec. 2). Mesh and community network nodes, by contrast, can have access to multiple Internet gateways connecting them to the Internet backbone, and each node can individually select the best gateway for its non-local traffic (traffic to the Internet). Since the bulk of node traffic in such networks would be non-local, this will lead to performance and fairness issues for the network. Load balancing in wireless mesh networks is an interesting and unique problem, different from conventional wireless networks, for several reasons:

Nodes are stationary. Existing wireless ad-hoc network protocols are designed with node mobility considerations. Sensor network protocols have power and computation constraints.
Thus, neither ad-hoc nor sensor network protocols would be suitable for mesh networks. Better network performance is possible through protocols designed specifically for mesh networks with stationary wireless nodes.

Multiple gateways are available. Load balancing can be improved by splitting network traffic across the accessible gateways and reassembling this traffic on the Distribution System (the wired backbone network). This may be a better approach in case some wireless links get loaded to capacity while others remain underutilized. It is akin to IP [10] routing in wired networks, where different packets between a source and destination may be routed along different paths.
Clearly, the above conditions are different from ad-hoc and sensor networks and require designing a traffic distribution protocol specifically for this scenario.

1.2. Summary of Contributions

This paper proposes a new traffic distribution and load balancing protocol for stationary mesh networks. The focus is on traffic between the nodes and the backbone Internet. We show that optimal load balancing and maximum throughput scheduling for this scenario is NP-complete, through a simple reduction from the known NP-complete Knapsack [11] problem. We present a heuristic algorithm for load balancing which aims at maximizing network throughput through efficient wireless-link utilization. The algorithm splits a node's non-local traffic across the multiple gateways connected to it. The rationale is that under heavy traffic load, if each node is able to send part of its traffic on its best available route, link utilization will be uniform throughout the network and average network performance will improve. We demonstrate the efficiency of our algorithm through simulations, evaluating it against other popular approaches. In addition, single-path node failures can be better dealt with, and network robustness and resiliency against attacks can be enhanced through our scheme. We demonstrate that the problem of an adversary compromising a subset of paths from a node (by compromising some intermediate nodes on those paths, with each node having an associated cost to compromise it), such that the cost is less than some value K, is NP-complete. Finally, through simulations, we demonstrate that our scheme is admissibly fair in bandwidth allocation even to nodes with the longest paths to the gateway nodes. Thus, our scheme achieves three goals: (a) efficient load balancing, (b) security, and (c) fairness. To the best of our knowledge, this is the first scheme to consider load balancing by simultaneously splitting a node's traffic across all available destination gateways.

1.3. Paper Organization

This paper is organized as follows. Section 2 describes related work and gives an overview of our model. Section 3 presents a proof of NP-completeness of maximum throughput scheduling for mesh networks and details our heuristic load balancing algorithms. Section 4 discusses the security of our scheme. Section 5 demonstrates the fairness of our scheme and presents an applicability analysis. Simulation results are presented in Section 6. Finally, in Section 7 we conclude the paper with a discussion of its limitations and proposed future work.

2. Related Work and Model Overview

2.1. Related Work

Conventional multi-hop wireless network traffic protocols are either single path [12], [13] or multipath [14]. In [15] the authors present a multipath load-balancing protocol for mobile ad-hoc networks with directional antennas, aiming for maximally zone-disjoint routes. Ganjali et al. [16] compare load balancing in ad-hoc networks for single-path and multipath routing. Multipath protocols maintain multiple paths but use only one path at a given time. Only [17] studies simultaneous activation of multiple paths, but in their study a packet randomly chooses one of several available paths. In addition, they consider multiple paths between single source-destination pairs. Our protocol has a well-defined (non-random) algorithm for splitting a node's traffic, and we consider the simultaneous invocation of multiple paths between a node and multiple Internet gateways. The Mobile Mesh protocol [18], [19] describes schemes for link discovery, routing and border discovery in wireless mesh networks, but does not consider load balancing. In [20] the authors describe choosing a high-throughput path between a source and a destination in community wireless networks. Raniwala et al.
[21] discuss load balancing in wireless mesh networks with nodes having multi-channel radios, but these radios would require multiple cards and antennas per node and would be expensive to deploy. Hespanha et al. [22] formulate secure load-balanced routing in networks as a zero-sum game between the designer of the routing algorithm and an adversary that attempts to intercept packets. They show that for some versions of the game, the optimal routing policies also maximize the throughput between the source and the destination node. There is extensive literature on optimization problems for dynamic and static load balancing across meshes [23]. Optimal load balancing across meshes is known to be a hard problem. Akyildiz et al. [6] exhaustively survey the research issues associated with wireless mesh networks and discuss the need to explore multipath routing for load balancing in these networks. However, maximum throughput scheduling and load balancing in wireless mesh networks remains an unexplored problem.
In this paper we present, for the first time, a load balancing scheme for wireless mesh networks that systematically splits each node's non-local traffic across the available destination gateways, and we evaluate its performance.

2.2. Assumptions and Model Overview

We consider a wireless mesh network model with stationary nodes. An example is a community network where buildings with wireless antennas mounted on them are the network nodes (Fig. 1) [9]. Nodes connect to the backbone Internet via gateways located in the community. Traffic is forwarded to the gateways by the nodes in a hop-by-hop fashion. Thus, each node acts as both transmitter and router. Traffic across the various gateways in a region can be effectively reassembled at the Distribution System. Since the network is stationary, route changes are infrequent and occur only in case of node failures or faults. As such, the network controller or a similar entity on the Distribution System maintains complete network information, including the network topology, and can perform static route computation from any node to the gateways. Since this is an administered network, the network controller has an estimate of the upper limit on the bandwidth required by each node for its self-traffic. Nodes are assumed to be non-malicious and to cooperate in routing others' traffic.

Fig. 1. Community Network

3. Maximum Throughput Scheduling

The primary objective of a load balancing scheme is to achieve maximal network throughput through uniform link utilization. The other goals include fairness to the nodes and robustness against attacks and node failures. Thus maximal throughput scheduling forms a subset of the larger load balancing problem. In this section we first show that the decision version of the maximum throughput scheduling optimization problem is NP-complete. We then present two heuristic algorithms aiming at maximum network throughput scheduling for load balancing. We call them Scheduling Schemes 1 and 2.

3.1. The Maximum Throughput Scheduling Problem

The maximum throughput scheduling optimization problem can be defined as follows. Suppose that in a graph every node has some bit-rate traffic to send along some specified paths (possibly multiple paths to the same destination), and every link has an upper bound on capacity. What kind of scheduling (i.e., the order in which bandwidth is reserved for the nodes) can achieve maximum throughput? The corresponding decision problem is: is there a schedule such that the overall throughput of the network is greater than K?

Theorem 1. The decision version of the maximum throughput scheduling problem is NP-complete.

Proof: As a special case, consider a star network with center C. All nodes except for one destination node need to send traffic to the destination node via the center C. Now C needs to schedule all its received traffic for forwarding to the destination. This is an instance of the Knapsack problem, where the capacity of the link between C and the destination plays the role of the capacity of the knapsack. The Knapsack problem is NP-complete [11], so the maximum throughput scheduling problem is NP-complete.

3.2. Scheduling Scheme 1

This scheduling scheme computes the volume of traffic each node can send along its routes to connected gateways. The scheme requires the invocation of the Traffic Distribution Algorithm (TDA) shown in Fig. 2 (explained in Sec. 3.2.2). This algorithm computes the traffic distribution for each node ahead of the actual routing, based on link weights and node priorities. Any change in the network topology (due to node failures or new nodes being added) requires a rerun of the algorithm. Each round of TDA requires partial re-computation of the current shortest path from each node to its connected gateways.
Instead of computing the shortest path from each node to all of its connected gateways, a minimum spanning tree is constructed, rooted at each gateway. This gives the average shortest path from each gateway to the accessible nodes. This is an admissible approximation which simplifies the computation and improves the running time.
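The per-gateway tree construction can be sketched as follows; this is a minimal illustration, not the authors' implementation. One tree is grown from each gateway and a node's branch of that tree serves as its path to that gateway. A Dijkstra shortest-path tree is used here as one concrete reading of the rooted-tree approximation; the toy graph, link costs and node names are assumptions.

    # A minimal sketch (not the paper's code) of the per-gateway tree approximation:
    # grow one tree per gateway; a node's branch is its path to that gateway.
    import heapq

    def gateway_tree(adj, gateway):
        """Return parent pointers of a shortest-path tree rooted at `gateway`.
        adj: {node: {neighbor: link_cost}} for an undirected mesh."""
        dist = {gateway: 0.0}
        parent = {gateway: None}
        pq = [(0.0, gateway)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, w in adj[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    parent[v] = u
                    heapq.heappush(pq, (d + w, v))
        return parent

    def path_to_gateway(parent, node):
        """Read a node's branch of the tree as its route to the gateway."""
        path = [node]
        while parent.get(path[-1]) is not None:
            path.append(parent[path[-1]])
        return path                           # node ... gateway

    # Toy mesh: nodes a-d, gateway g; costs play the role of TDA link weights.
    adj = {"g": {"a": 1, "b": 1}, "a": {"g": 1, "c": 1}, "b": {"g": 1, "c": 2},
           "c": {"a": 1, "b": 2, "d": 1}, "d": {"c": 1}}
    parent = gateway_tree(adj, "g")
    print(path_to_gateway(parent, "d"))       # ['d', 'c', 'a', 'g']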
Below we first present the notation used in TDA and then describe the algorithm in detail.

3.2.1. Notation

N : Set of network nodes.
t_x : Traffic sent by node x on each link of an assigned route, i.e., node x's per-route traffic share.
T_x : Total traffic to be sent by node x, in kbps.
r_x : Number of routes from node x to its connected gateways, i.e., the number of gateways node x is connected to.
P_n : Priority of node n. Equivalent to the traffic routed by n for other nodes, in kbps. For example, if node n routes 30 kbps of traffic for another node, its priority becomes 30.
RT_n : Remaining traffic to be sent by node n.
C_j : Cost of link j. Equivalent to the traffic routed on link j, in kbps. For example, if link j is reserved for routing 30 kbps of traffic, its cost becomes 30.
D : max(r_x) over all x in N. This is the maximum number of gateways that any node in the network has paths to. It fixes the number of iterations of the algorithm.
Z : The array containing the RT_n + P_n values for all nodes n belonging to N.
S_x : The current shortest path from node x to a gateway node.

3.2.2. TDA Description

The TDA works by assigning costs to links and priorities to nodes. The ratio of the total traffic a node has to send to the number of gateway nodes it is connected to, T_x / r_x, is the metric for distributing the node's traffic across its connected gateways. We assume Constant Bit Rate (CBR) traffic. The algorithm maintains k-shortest paths [24] from each node to the gateway nodes. It runs iteratively to ensure fairness to all the nodes. In each iteration, the node n with the next highest RT_n + P_n value has its current shortest path S_n assigned to it. If any link on this shortest path S_n would exceed the link capacity c, that shortest path is not used and the next-shortest path is computed. Node n then routes T_n / r_n of its traffic along path S_n. After n has been assigned this route, the cost of all the links along the route is incremented by the amount of traffic they will be routing for n, i.e., t_n. All the nodes which have not yet been assigned their shortest paths in this iteration must re-compute their shortest paths, as the cost of links along their original shortest paths may have increased. The cost of a path is the sum of the costs of the links along that path. In order to be fair, and to reward an intermediate-hop node (i.e., a node that is neither the source nor the destination in this session) for routing the traffic of other nodes, we introduce the priority metric P.

    // N: all nodes, L: links, S: shortest paths
    procedure TrafficDistribute(N, L, S) {
      for each node x: t_x = T_x / r_x;
      for each n in N { P_n = 0; RT_n = T_n; }
      for each l in L { C_l = 1; }
      done_count = 0;
      for iterations = 1 to D {
        Sort(Z);                      // sort array Z in decreasing order of RT_n + P_n
        new_count = |N| - done_count;
        for x = 0 to new_count {
          Choose node n_x corresponding to Z[x];
          check = 0;
          while (check == 0) {
            Compute S_x;              // current shortest path for n_x
            if (C_l + t_x > c for some link l on S_x) {   // c: link capacity
              check = 0;
              Remove the present path from the list of k-shortest paths for node n_x;
            } else {
              check = 1;
            }
          }
          Assign n_x its shortest path S_x;
          for each link l on S_x { C_l = C_l + t_x; }
          RT_x = RT_x - t_x;
          if (RT_x == 0) done_count = done_count + 1;
          for each node n_k on the path S_x              // intermediate-hop nodes
            P_k = P_k + t_x;
          for each node n_q not yet assigned a path in this iteration
            Recompute S_q;
        }
      }
    }

    Fig. 2. Traffic Distribution Algorithm
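To make the control flow of Fig. 2 concrete, the following is a compact executable sketch of the TDA rounds under simplifying assumptions: fixed candidate-path lists stand in for the k-shortest-path computation, link costs start at zero rather than one, and gateway endpoints also accumulate priority. The function name and toy topology are illustrative, not from the paper.

    # Illustrative sketch of the TDA rounds: D rounds; in each round nodes are
    # served in decreasing RT + P order, each reserves T/r traffic on its cheapest
    # feasible candidate path, then link costs and hop priorities are updated.
    def tda_rounds(demand, paths, capacity):
        """demand: {node: T_x kbps}; paths: {node: [path, ...]}, each path a tuple
        of links ending at a gateway; capacity: uniform link capacity (kbps)."""
        r = {x: len(paths[x]) for x in demand}            # routes per node
        t = {x: demand[x] / r[x] for x in demand}         # per-route share T_x / r_x
        P = {x: 0.0 for x in demand}                      # priority (traffic routed for others)
        RT = dict(demand)                                 # remaining traffic
        load = {}                                         # link -> reserved kbps (cost C_l)
        plan = {x: [] for x in demand}                    # assigned (path, kbps) pairs
        D = max(r.values())
        for _ in range(D):
            for x in sorted(demand, key=lambda n: RT[n] + P[n], reverse=True):
                if RT[x] <= 0:
                    continue
                feasible = [p for p in paths[x]
                            if all(load.get(l, 0.0) + t[x] <= capacity for l in p)]
                if not feasible:
                    continue                              # node blocked this round
                # cheapest path under current link costs (sum of reserved traffic)
                best = min(feasible, key=lambda p: sum(load.get(l, 0.0) for l in p))
                for l in best:
                    load[l] = load.get(l, 0.0) + t[x]
                for hop in {u for l in best for u in l} - {x}:
                    P[hop] = P.get(hop, 0.0) + t[x]       # reward intermediate hops
                RT[x] -= t[x]
                plan[x].append((best, t[x]))
        return plan, load

    # Toy example: links are sorted (u, v) tuples; g1 and g2 are gateways.
    demand = {"a": 60.0, "b": 40.0}
    paths = {"a": [(("a", "g1"),), (("a", "b"), ("b", "g2"))],
             "b": [(("b", "g2"),), (("a", "b"), ("a", "g1"))]}
    plan, load = tda_rounds(demand, paths, capacity=50.0)
    print(plan)   # each node ends up splitting its traffic across both gateways
    print(load)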
The priority of all intermediate-hop nodes on a path S_n is incremented for the next iteration by the amount of traffic they will be routing for the source node n. At the beginning of each iteration, the array Z is sorted to determine the ordering of the nodes according to their current RT + P values. The number of iterations is determined by the value D. The algorithm splits each node's traffic according to the number of gateway nodes it is connected to, and favors the nodes with the highest RT + P values when assigning the shortest paths in each iteration.

3.3. Scheduling Scheme 2

In this section we present an alternate, simpler traffic scheduling scheme involving less computation. This is a greedy scheduling scheme and performs coarser-grained scheduling and load balancing than the TDA. Consider a node that has paths to n gateway nodes. Let h_1, h_2, ..., h_n be the number of hops along the routes to gateway nodes 1 to n. The traffic from a source node is distributed in inverse proportion to the number of hops along its routes: the highest volume of traffic is sent along the route with the fewest hops. The fraction of traffic sent along route i, T_i, is computed as

T_i = [ (h_1 h_2 ... h_n) / (h_2 h_3 ... h_n + h_1 h_3 ... h_n + ... + h_1 h_2 ... h_{n-1}) ] * (1 / h_i),

which is equivalent to T_i = (1/h_i) / (1/h_1 + 1/h_2 + ... + 1/h_n). For example, consider a node which is connected to three gateway nodes and whose distances from these gateways are 2, 3 and 4 hops, respectively. It sends a 12/26 fraction of its traffic to the closest gateway node, an 8/26 fraction to the next gateway node and a 6/26 fraction to the farthest gateway node.
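The split can be computed directly from the hop counts. The following small sketch (the helper name and use of exact fractions are illustrative choices, not from the paper) reproduces the 12/26, 8/26, 6/26 example above.

    # Minimal sketch of Scheduling Scheme 2's split: each route's share is
    # inversely proportional to its hop count.
    from fractions import Fraction

    def inverse_hop_split(hops):
        """hops: hop counts h_1..h_n to the reachable gateways.
        Returns the fraction of the node's traffic sent on each route."""
        weights = [Fraction(1, h) for h in hops]          # 1/h_i
        total = sum(weights)
        return [w / total for w in weights]               # (1/h_i) / sum_j (1/h_j)

    print(inverse_hop_split([2, 3, 4]))   # [6/13, 4/13, 3/13] = 12/26, 8/26, 6/26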
4. Security and Resiliency

In this section we show that our heuristic algorithms for maximum throughput scheduling achieve robustness and resiliency against attacks and node failures. The robustness and security of a traffic scheme can be measured by how effectively it can be compromised or attacked by an adversary. An adversary can disrupt the functioning of the network by blocking traffic to the gateways via compromising en-route nodes or localized jamming. This is one of the chief reasons for maintaining disjoint or braided multipaths [25]. Intuitively, a scheme using multiple paths simultaneously should be more robust against such disruptions than multipath schemes (which maintain multiple paths but use only one at a time), making it harder for an adversary to block all active paths from a node. This is proved below. We consider a more stringent threat model which assumes that traffic from a node is compromised if a subset of the node's active paths is blocked or compromised. This is based on the principles of threshold cryptography [26], where a secret is compromised if more than some percentage of its shares is compromised. In other words, a secret is correctly received by the recipient only if more than a certain percentage of the shares are accurately received. The localized link-jamming adversary scenario is similar to using link cuts [27] to attack Internet routing [28]. For path blocking through en-route node compromise we present an optimization problem called the Minimum Cost Blocking (MCB) problem. The MCB problem pertains to a set of nodes in a network and a set of paths between the nodes, with an associated cost to compromise each node. It seeks the minimum cost for an adversary to compromise a subset of the nodes in a network such that a certain percentage of the network paths are blocked. Here we present two instances of the MCB problem: a special case and the general case.

4.1. The MCB Problem: Special Case

This is a special case of the MCB problem. The optimization problem for the adversary can be defined as follows. Suppose that in graph G(V, E), |V| = n, and every node v_i in V has a cost c_i to be compromised. Suppose there are m paths (P_1, P_2, ..., P_m) from some sources to some destinations. Some paths may have the same source and destination as other paths (i.e., multiple paths exist for that source-destination pair). What is the minimum cost to compromise a subset of the nodes such that a certain percentage of the paths are compromised? The corresponding decision problem is: given graph G = (V, E), where every node v_i in V has a cost c_i to be compromised, m paths (P_1, P_2, ..., P_m), and integers K and R, is there a subset V' of V such that V' blocks R out of the m paths and the total cost of the nodes in V' is no greater than K?

Theorem 2. The special case of the MCB problem is NP-complete.

Proof: If we consider every path (P_1, P_2, ..., P_m) as an element, then every node in the graph G can be considered to cover a subset of all the paths; that is, all the paths on which the node is located constitute that node's subset. For example, a node v_i in V has the subset {P_j, P_k, P_l} if v_i lies on the paths P_j, P_k and P_l. The problem of finding the minimum-weight subset of nodes that blocks a given number of the paths is then equivalent to the weighted partial set cover problem, which is NP-complete [29]. So the special case of the MCB problem is NP-complete.
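As a concrete illustration of the decision problem (not an efficient algorithm, since the problem is NP-complete), the following sketch brute-forces small instances: nodes carry compromise costs, each path is represented by the set of nodes on it, and we ask whether some node subset of total cost at most K blocks at least R paths. The function name and toy instance are illustrative assumptions.

    # Brute-force checker for small instances of the MCB special case
    # (illustrative only; it scales exponentially in the number of nodes).
    from itertools import combinations

    def mcb_blockable(costs, paths, K, R):
        """costs: {node: compromise cost}; paths: list of node sets; K: budget;
        R: number of paths that must be blocked. Returns a witness subset or None."""
        nodes = list(costs)
        for size in range(len(nodes) + 1):
            for subset in combinations(nodes, size):
                if sum(costs[v] for v in subset) > K:
                    continue
                blocked = sum(1 for p in paths if p & set(subset))  # path hits subset
                if blocked >= R:
                    return set(subset)
        return None

    # Toy instance: three paths from node s to two gateways, sharing relay node b.
    costs = {"a": 3, "b": 2, "c": 4}
    paths = [{"s", "a", "g1"}, {"s", "b", "g1"}, {"s", "b", "c", "g2"}]
    print(mcb_blockable(costs, paths, K=2, R=2))   # {'b'}: blocks two paths at cost 2
    print(mcb_blockable(costs, paths, K=2, R=3))   # None: blocking all three exceeds K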
4.2. The MCB Problem: General Case

Here we present the general case of the MCB problem for the adversary. Suppose that in graph G(V, E), |V| = k, and every node v_i in V has a cost c_i to be compromised. Suppose we have m = n_1 + n_2 + ... + n_k paths (P_11, P_12, ..., P_1n_1, P_21, P_22, ..., P_2n_2, ..., P_k1, P_k2, ..., P_kn_k), where (P_i1, P_i2, ..., P_in_i) are the paths originating from node i (i = 1, 2, ..., k). What is the minimum cost to compromise a subset of the nodes such that a certain percentage of the paths originating from each node are compromised (if a path originates from a node, we say that the path belongs to that node)? That is, for every node i (i = 1, 2, ..., k), there is a number R_i, 0 <= R_i <= n_i, and at least R_i of the paths belonging to this node (paths P_i1, P_i2, ..., P_in_i) must be compromised. This is a typical optimization problem. The corresponding decision problem is: given graph G = (V, E), the cost of every node, the set of m = n_1 + n_2 + ... + n_k paths (P_11, ..., P_1n_1, P_21, ..., P_2n_2, ..., P_k1, ..., P_kn_k), and integers C and R_i with 0 <= R_i <= n_i, is there a subset V' of V such that V' blocks at least R_i of the paths P_i1, P_i2, ..., P_in_i for every i (i = 1, 2, ..., k), and the total cost of the nodes in V' is no greater than C?

Theorem 3. The general case of the MCB problem is NP-complete.

Proof: The problem is a generalization of the partial set cover problem [29], which is NP-complete. So the general case of the MCB problem is NP-complete.

Since the special and general cases of the MCB problem are NP-complete, it will not be easy for an adversary to block a certain percentage of a node's paths (or some percentage of any paths) in the network, making our scheme resilient against attacks and failures.

5. Node Fairness and Applicability

This section (a) discusses the fairness of Scheduling Scheme 1 in letting all nodes reserve bandwidth for their self-traffic, (b) discusses fairness and performance issues of Scheduling Scheme 2, and (c) compares the running time and applicability of our schemes in qualitative terms. It complements the simulation results in Sec. 6 for a better understanding of the performance of our schemes.

5.1. Fairness to Nodes: Scheduling Scheme 1

In Scheduling Scheme 1, during each iteration of TDA, every node is allowed to reserve bandwidth for a fraction of its traffic. This has two notable consequences: (a) no node starves for bandwidth, and (b) the schedule does not waste bandwidth on nodes with little self-traffic. Thus, Scheduling Scheme 1 results in maximal fairness to the nodes in terms of the opportunity to reserve bandwidth. Due to the priority assignments for routing non-self traffic, nodes closer to the gateways (for example, nodes one hop away from a gateway, called G-1 nodes in the rest of this paper) may eventually end up with higher priorities in the later rounds of TDA. This allows such nodes to reserve their shortest paths before other nodes in later rounds. Consequently, these nodes can achieve higher throughputs than, say, a node situated farther away from all gateways. This is further demonstrated by the simulation results in Sec. 6. Nevertheless, this difference in throughput is not an indication of a lack of fairness.
It is natural for nodes in different parts of the network to achieve non-uniform throughputs due to the difference in path lengths. A node situated in the center of a network and uniformly distant from all available gateways will have lower throughput than nodes closer to the gateways, regardless of the scheduling or route reservation scheme used. The fairness of Scheduling Scheme 1 is demonstrated by the simulation results in Sec. 6, where it outperforms other well-known schemes in terms of throughput for such a node. The fairness is also demonstrated by the simulation graphs showing uniform link utilization in Sec. 6. Uniform throughput
for all network nodes, if desired, is achievable by a slight modification to TDA: the G-1 nodes and other nodes close to the gateways need to be initialized to priority values lower than 0, as opposed to the other nodes.

5.2. Fairness and Performance: Scheduling Scheme 2

Scheduling Scheme 2 is fair to nodes in terms of bandwidth scheduling in that it splits every node's traffic across its available paths, not favoring any nodes over others for traffic scheduling. This may result in nodes having to compete for bandwidth, but the scheme does not provide any inherent advantage to some nodes over others. Scheme 2 selectively loads shorter paths with more traffic, causing links along shorter paths to be loaded to capacity while other links in the network may be lightly loaded. It results in lower throughput for the network nodes when compared with Scheme 1, though Scheme 2 still outperforms other well-known schemes (as shown by the simulation results). The performance vs. simplicity tradeoff for Scheme 2 is discussed in Sec. 5.3. The reduction in throughput for Scheme 2 is partly attributable to, and more pronounced for, G-1 nodes. For G-1 and other nodes in the vicinity of gateways, a small proportion of their traffic may be routed on very long routes even though the nodes themselves are close to the gateways. Though only a very small volume of traffic may be affected by this factor, it helps keep the design of the scheme simple (see Sec. 5.3). A variant of the scheme could refrain from assigning traffic to routes that are longer than some threshold.

5.3. Running Time and Applicability Analysis

Scheduling Scheme 2 has a much lower computational complexity and running time than Scheme 1. TDA has a computation time of O(N^3), where N is the number of nodes in the network: the algorithm performs O(N^2) computations in each iteration and has to perform O(N) iterations in the worst case. Scheduling Scheme 2 has a worst-case computation time of O(N^2). The higher computation time of Scheme 1 is inconsequential for scenarios like community networks because all the computations are performed offline and beforehand by the network controller or a similar entity on the distribution system. There would be very little change in topology for such networks, so the load balancing computations would be very infrequent. However, if the network is dynamic due to node mobility, node sleep-time, or frequent node failures, then Scheme 2 would be beneficial. Scheme 2 would be especially helpful in such scenarios where frequent route re-computations are required. An inherent assumption for both schemes is that the cost of reassembling the nodes' traffic at the gateways (or the distribution system) is offset by the savings in network bandwidth utilization and the increase in network throughput. This is a valid assumption since the wireless medium is a shared resource, whereas an arbitrarily powerful machine can be employed for reassembling node traffic on the distribution system.

6. Simulation Results

We conduct simulations to investigate the performance of our schemes for each of the following three objectives: (a) Maximum Throughput Scheduling, (b) Minimum Cost Blocking, and (c) Fairness to nodes. We perform a self-contained evaluation of the algorithms, independent of underlying network-level issues other than the capacity of the wireless links. We developed two baseline schemes, the Single Shortest Path (SSP) algorithm and the Fixed Shortest Path (FSP) algorithm, so that we can evaluate TDA and Scheduling Scheme 2 against some basic benchmarks. In the SSP algorithm, each node determines one shortest path to a gateway and sends all self-traffic on this shortest path.
In the FSP algorithm, each node splits its traffic equally across all its paths to connected gateways. This is similar to Scheme 2, except that a node's traffic is split equally along its available routes. Further, there are two variants each for Scheme 2 and the FSP algorithm: large-load-node first (favoring nodes with a high volume of self-traffic) and small-load-node first (favoring nodes with less self-traffic). In the large-load-first variant, nodes with the largest loads (heaviest traffic) schedule traffic along all their paths before nodes with smaller loads. In small-load-first, the order of scheduling is reversed, from nodes with the least traffic to nodes with the heaviest traffic. If any links along a node's path reach capacity, then no more traffic can be scheduled across those links, and the node has to use other paths.

Our simulation topology is 100 nodes evenly distributed over a rectangular area of 1000 meters by 1000 meters. There are 4 gateways, one at each corner of the rectangle. There are 15 G-1 nodes. We used a pseudo-random function for placing the nodes in the rectangular area and generating the links between the nodes. Once generated, the topology was fixed. All data points are averages of 100 runs.
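As an illustration of such a setup (not the authors' simulator), the following sketch places 100 nodes pseudo-randomly in a 1000 m by 1000 m area with gateways at the corners and creates links between nodes within an assumed radio range; the range value and fixed seed are illustrative assumptions.

    # Illustrative topology generator: 100 pseudo-randomly placed nodes, 4 corner
    # gateways, links between points within an assumed radio range. Seed and range
    # are arbitrary choices, not values from the paper.
    import random
    from itertools import combinations

    def make_topology(n_nodes=100, side=1000.0, radio_range=160.0, seed=1):
        rng = random.Random(seed)                      # fixed seed: topology stays fixed
        gateways = [(0.0, 0.0), (side, 0.0), (0.0, side), (side, side)]
        nodes = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n_nodes)]
        points = gateways + nodes
        links = [(i, j) for i, j in combinations(range(len(points)), 2)
                 if (points[i][0] - points[j][0]) ** 2
                  + (points[i][1] - points[j][1]) ** 2 <= radio_range ** 2]
        return points, links, list(range(4))           # gateway indices 0..3

    points, links, gw = make_topology()
    print(len(points), "points (incl. gateways),", len(links), "links")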
In the simulation graphs, we refer to Scheduling Scheme 2 as Algorithm 2 to conserve space.

6.1. Throughput Comparison

We first compare the throughput of TDA and Algorithm 2 with SSP and FSP. In the simulation graphs, the X-axis represents the average of all nodes' traffic in kbps. The Y-axis represents the percentage throughput (1 corresponds to 100% throughput). Figures 3, 4 and 5 present the total network throughput for network link capacities of 50, 100 and 150 kbps, respectively. It is evident that TDA outperforms all other schemes in terms of network throughput under conditions of heavy traffic (high load, low link capacity), moderate traffic, and low traffic (low load, high link capacity), establishing that it is far superior to the other schemes for network throughput. This is because TDA dynamically adjusts shortest paths and node priorities. No other algorithm clearly dominates the others in all scenarios; however, the performance of the SSP algorithm is consistently poor. The performance of Algorithm 2 with large-load-first is only marginally better, as scheduling nodes with large loads first causes the shortest paths to be loaded with a few nodes' traffic initially. Subsequently, a large number of nodes are forced to route most of their traffic along generally longer routes because many links are loaded to capacity early on. However, Algorithm 2 with small-load-first is generally better than the FSP algorithm under constrained link-bandwidth conditions, as seen in Fig. 3. With the small-load-first schedule, more nodes are able to schedule traffic on their first-choice paths (links are loaded with less traffic per node initially) and shorter paths are able to carry traffic for a higher number of nodes. Only a few nodes with heavy traffic may find the links along their shortest paths loaded to capacity later in the schedule. Thus the number of nodes affected by link congestion is smaller. With FSP, the nodes split their traffic equally, so the advantage of small-load-node first becomes less prominent.

Figures 6, 7 and 8 show the throughput of a G-1 node for link capacities of 50, 100 and 150 kbps, respectively. As discussed in Sec. 5, the performance of TDA is exceptionally higher than the other schemes because G-1 nodes attain high priorities for routing other nodes' traffic. The performance of Algorithm 2 (small-load-first) is also consistently better than the FSP and SSP algorithms. Figures 9, 10 and 11 present the throughput of a node located roughly in the center of the network for link bandwidths of 50, 100 and 150 kbps, respectively. Nodes situated within a square of 300 meters by 300 meters in the center of the simulation topology were considered candidates. Again, the performance of TDA is better than the other schemes. The performance of FSP (smallest-load-first), Algorithm 2 (smallest-load-first) and SSP are comparable: the difference in path lengths between shorter and longer paths is less significant for a node almost uniformly distant from all gateways. This offsets the advantage that Algorithm 2 and FSP have over SSP.

6.2. Robustness

As discussed in Sec. 4, threshold cryptography schemes assume that a secret is correctly received if a certain percentage of the shares are delivered accurately. Here we test our schemes for two different cases: a secret is correctly received if (a) 2 out of 4 and (b) 3 out of 4 shares from a node are received at the gateways.
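As an illustration of this success criterion (helper names are hypothetical, not from the paper), a node that splits its traffic into n shares over n gateway paths counts as successful when at least t of those shares arrive:

    # Illustrative t-of-n success check used when reading Figs. 12-14.
    def node_success(delivered_flags, t):
        """delivered_flags: e.g. [True, False, True, True] for 4 path-shares."""
        return sum(delivered_flags) >= t

    def success_fraction(per_node_flags, t):
        """Fraction of nodes meeting the t-of-n threshold."""
        return sum(node_success(f, t) for f in per_node_flags) / len(per_node_flags)

    flags = [[True, True, False, False], [True, True, True, False], [True, False, False, False]]
    print(success_fraction(flags, 2))   # 2 of 3 nodes deliver at least 2 of 4 shares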
Figures 12, 13 and 14 show the percentage of nodes that successfully sent t out of n (here 2 out of 4 and 3 out of 4) threshold traffic for link bandwidth upper bounds of 50, 100 and 150 kbps, respectively. The X-axis in the plots represents the percentage of nodes and the Y-axis represents the average node load. Again, it is evident from the figures that TDA is the more robust scheme in both cases.

6.3. Fairness

Figures 15, 16 and 17 show the average link utilization for link capacities of 50, 100 and 150 kbps, respectively. The average link utilization is substantially higher for TDA in all three scenarios. This indicates that TDA is able to load the links more efficiently and uniformly than the other schemes, which is representative of homogeneous traffic distribution across the network and enhanced node fairness. Since all the paths are chosen dynamically, any unused link bandwidth may be utilized after other links get congested.
Fig. 3. Network Throughput (50 kbps)
Fig. 4. Network Throughput (100 kbps)
Fig. 5. Network Throughput (150 kbps)
Fig. 6. G-1 Node Throughput (50 kbps)
Fig. 7. G-1 Node Throughput (100 kbps)
Fig. 8. G-1 Node Throughput (150 kbps)
Fig. 9. Central Node Throughput (50 kbps)
Fig. 10. Central Node Throughput (100 kbps)
Fig. 11. Central Node Throughput (150 kbps)
Fig. 12. Percentage Shares Received (50 kbps)
Fig. 13. Percentage Shares Received (100 kbps)
Fig. 14. Percentage Shares Received (150 kbps)
Fig. 15. Average Link Utilization (50 kbps)
Fig. 16. Average Link Utilization (100 kbps)
Fig. 17. Average Link Utilization (150 kbps)

7. Conclusion

This paper presents a new paradigm for load balancing in wireless mesh networks with static nodes. It is one of the first papers to explore load balancing in wireless mesh networks as a maximum throughput scheduling problem. The primary contribution of this paper is a new scheme which simultaneously achieves three goals: improved network throughput, security and resiliency, and fairness to nodes. These are important and desirable characteristics of any networking protocol. Our scheme demonstrates that by splitting nodes' traffic across multiple gateways through intelligent traffic scheduling and node priority assignments, it is possible to substantially improve network performance in terms of throughput, resiliency and fairness. In this paper, we proved some basic theoretical results, proposed several heuristics and verified the performance of our schemes through extensive simulations. The motivation for this research derives from the importance of designing efficient load balancing algorithms for improving the performance of wireless networks, and the inherent difficulty in designing such schemes. The task becomes especially challenging due to the limited availability of bandwidth in the wireless domain and the fact that obtaining maximum throughput across a network or a mesh through optimal load balancing is a known NP-hard problem.

IP routing allows packets between a source and destination to traverse independent paths; it is only natural to extend this traffic-splitting idea to the wireless mesh domain. Our scheme would be beneficial for real-life community wireless networks, which have promising growth potential in commercial wireless technology. The direction of our continuing research is to integrate our scheme with existing wireless protocols. This will enable us to study the effects of other wireless network parameters such as link unreliability and transmission delays. Further improvement in the dynamism of the scheme under changing network conditions is also planned. For example, stationary wireless networks with frequent node failures, or networks with strictly synchronized node sleep and wakeup cycles (due to power conservation concerns), can utilize a variant of our scheme with more dynamic route re-computations.
Our continuing research also focuses on designing a similar traffic splitting scheme for wireless mesh networks with mobile nodes. Incorporating mobility would require decisions to be made locally at the nodes. Part of our research involves theoretical analysis of the difficulty for a mobile node of establishing simultaneous multiple paths versus the benefits of establishing these paths. A scheme involving simultaneous multiple paths for a mobile node could be beneficial for node throughput if some old paths survive when the node moves.

References

[1] http://www.netlib.org/utk/lsi/pcwLSI/text/node248.html
[2] O. Beaumont, V. Boudet, F. Rastello, Y. Robert, "Partitioning a square into rectangles: NP-completeness and approximation algorithms", Algorithmica, 34, 2002, pp. 217-239.
[3] O. Beaumont, V. Boudet, F. Rastello, Y. Robert, "Matrix multiplication on heterogeneous platforms", IEEE Trans. Parallel and Distributed Systems, 12(10), 2001, pp. 1033-1051.
[4] O. Beaumont, A. Legrand, F. Rastello, Y. Robert, "Dense linear algebra kernels on heterogeneous platforms: Redistribution issues", Parallel Computing, 28, 2001, pp. 155-185.
[5] O. Beaumont, A. Legrand, F. Rastello, Y. Robert, "Static LU decomposition on heterogeneous platforms", Int. Journal of High Performance Computing Applications, 15(3), 2001, pp. 310-323.
[6] I. Akyildiz, X. Wang, W. Wang, "Wireless Mesh Networks: A Survey", Computer Networks Journal, 47 (Elsevier), March 2005, pp. 445-487.
[7] http://wire.less.dk/wiki/index.php/MeshLinks
[8] http://www.communitywireless.org/
[9] http://research.microsoft.com/mesh/
[10] http://www.ietf.org/rfc/rfc0791.txt
[11] M. Garey, D. Johnson, "Computers and Intractability: A Guide to the Theory of NP-Completeness", W. Freeman, ISBN 0716710455, A6: MP9, 1979, p. 247.
[12] P. Pham, S. Perreau, "Performance Analysis of Reactive Shortest Path and Multi-path Routing Mechanism with Load Balance", IEEE INFOCOM '03, San Francisco, CA, 2003.
[13] H. Hassanein, A. Zhou, "Routing with Load Balancing in Wireless Ad hoc Networks", 4th ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Rome, Italy, 2001, pp. 89-96.
[14] S. Lee, M. Gerla, "AODV-BR: Backup Routing in Ad hoc Networks", IEEE WCNC '00, pp. 1311-1316.
[15] S. Roy, D. Saha, S. Bandyopadhyay, "A Network-Aware MAC and Routing Protocol for Effective Load Balancing in Ad Hoc Wireless Networks with Directional Antenna", MobiHoc '03, Annapolis, MD.
[16] Y. Ganjali, A. Keshavarzian, "Load Balancing in Ad Hoc Networks: Single-path Routing vs. Multi-path Routing", IEEE INFOCOM, March 2003, Hong Kong.
[17] http://pinehurst.usc.edu/~junsoo/smr/
[18] http://www.oreillynet.com/pub/a/wireless/2004/01/22/wirelessmesh.html
[19] www.mitre.org/work/tech_transfer/mobilemesh/
[20] R. Draves, J. Padhye, B. Zill, "Routing in Multi-Radio, Multi-Hop Wireless Mesh Networks", MobiCom '04, Sept. 2004, Philadelphia, PA.
[21] A. Raniwala, T. Chiueh, "Architecture and Algorithms for an IEEE 802.11-Based Multi-Channel Wireless Mesh Network", IEEE INFOCOM, March 2005, Miami, FL.
[22] J. Hespanha, S. Bohacek, "Preliminary Results in Routing Games", American Control Conference, Arlington, VA, June 2001, vol. 3, pp. 1904-1909.
[23] G. Horton, "A multi-level diffusion method for dynamic load balancing", Parallel Computing, 19, 1993, pp. 209-229.
[24] D. Eppstein, "Finding the k shortest paths", 35th IEEE Symposium on Foundations of Computer Science, Santa Fe, 1994, pp. 154-165.
[25] D. Ganesan, R. Govindan, S. Shenker, D. Estrin, "Highly-Resilient, Energy-Efficient Multipath Routing in Wireless Sensor Networks", Mobile Computing and Communications Review, Vol. 1, No. 2, 2002.
[26] Y. Desmedt, Y.
Frankel, "Threshold Cryptosystems", Proceedings on Advances in Cryptology, Santa Barbara, CA, 1989, ISBN 0-387-97317-6, pp. 307-315.
[27] C. Chekuri, A. Goldberg, D. Karger, M. Levine, C. Stein, "Experimental Study of Minimum Cut Algorithms", Tech. Rep., NEC Research Institute, Inc., 1996, pp. 96-132.
[28] S. Bellovin, E. Gansner, "Using Link Cuts to Attack Internet Routing", Tech. Rep., AT&T Research, 2004; work in progress, 2003 USENIX.
[29] R. Gandhi, S. Khuller, A. Srinivasan, "Approximation Algorithms for Partial Covering Problems", Lecture Notes in Computer Science, Springer-Verlag, ISSN 0302-9743, Vol. 2076, pp. 225-236.