IEEE Communications Magazine, November 1984 - Vol. 22, No. 11

Introduction

A packet-switched network may be thought of as a distributed pool of productive resources (channels, buffers, and switching processors) whose capacity must be shared dynamically by a community of competing users (or, more generally, processes) wishing to communicate with each other. Dynamic resource sharing is what distinguishes packet switching from the more traditional circuit-switching approach, in which network resources are dedicated to each user for an entire session. The key advantages of dynamic sharing are greater speed and flexibility in setting up user connections across the network, and more efficient use of network resources after the connection is established. These advantages do not come without a certain danger, however: unless careful control is exercised on user demands, users may seriously abuse the network. In fact, if demands are allowed to exceed the system capacity, highly unpleasant congestion effects occur which rapidly neutralize the delay and efficiency advantages of a packet network. The type of congestion that occurs in an overloaded packet network is not unlike that observed in a highway network. During peak hours, the traffic often exceeds the highway capacity, creating large backlogs. Furthermore, the interference between transit traffic on the highway and on-ramp and off-ramp traffic reduces the effective throughput of the highway, thus causing an even more rapid increase in the backlog. If this positive feedback situation persists, traffic on the highway may come to a standstill. The typical relationship between effective throughput and offered load in a highway system (and, more generally, in many uncontrolled, distributed dynamic sharing systems) is shown in Fig. 1. By properly monitoring and controlling the offered load, many of these congestion problems may be eliminated. In a highway system, it is common to control the input by using access ramp traffic lights.
The objective is to keep interference between transit traffic and incoming traffic within acceptable limits, and to prevent the incoming traffic rate from exceeding the highway capacity. Similar types of controls are used in packet-switched networks, and are called flow control procedures. As in the highway system, the basic principle is to keep the excess load out of the network. The techniques, however, are much more sophisticated here, since the elements of the network (that is, the switching processors) are intelligent, can communicate with each other, and can therefore coordinate their actions in a distributed control strategy. More precisely, the goal of flow control is to control admissions to the network so that resources are efficiently utilized and, at the same time, user performance requirements are satisfied. In this paper, flow control is intended as a synonym for congestion control. Some authors extend the definition of flow control to include higher-level functions, such as error protection, duplicate detection, and so on [1]. This is not surprising since, as we will see later, congestion control is the result of the cooperation of several levels of protocols, starting at the link level and going all the way up to the end-to-end level. Each one of these protocols contributes to flow control while supporting other functions, such as link error protection, packet sequencing, and so on. Here, however, we attempt to isolate the congestion control features from the other features, and reserve the term flow control for those

functions (of the protocols) that promote and protect the efficient use of shared resources. Once user data has been accepted into the network, it is the task of the routing protocol to route that data on the best path to its destination. More precisely, the goal of the routing protocol is to provide the best collection of paths between source and destination, given the traffic requirements and the network configuration. The definition of best path may vary depending on the nature of the traffic (for example, interactive, batch, or digitized voice) and the existence of constraints not necessarily related to performance (such as political, legal, policy, and security issues). In this paper, however, the best path will generally be regarded as the path of minimum delay. Since traffic requirements and network configuration change in time (the latter, for example, because of link and node failures), the routing protocol must be adaptive; that is, it must be able to adjust the routes to time-varying network conditions. So far, we have introduced the routing and flow-control protocols as separate functions: flow control determines when to block (or drop) data traffic either at the entrance or within the network, while routing selects the best path to the destination. Strong interaction between the two protocols actually exists in some types of packet subnets. In virtual-circuit (VC) subnets, for example, where a path is preestablished (and resources along the path are preallocated) at connection set-up time, the routing protocol is responsible for finding the best path with sufficient resources to support the connection. If no such path is found, the connection is refused. Furthermore, if congestion builds up at some point in an already established path, a form of flow control known as backpressure quickly propagates upstream along the path all the way to the source, stopping further inputs until the congestion is cleared.
Thus, routing and flow control work in concert in VC networks. In datagram networks, on the other hand, these functions are to a large degree independent. The purpose of this paper is to identify the basic building blocks of the routing and flow control schemes currently used in packet networks, and to review and compare some representative implementations.

The Routing Protocol

In the ISO Open System Interconnection classification of data communications protocols, the routing protocol is generally placed at level three, the network level. The function of the network level is to establish cost-effective and reliable network paths for the delivery of packets from source to destination. Level three is supported below by level two, the link level, which provides reliable transfer of data on a channel. Level three, in turn, provides services to level four, the transport level, which establishes and maintains reliable connections between end users. One of the main functions of the network layer protocol is routing, that is, the ability to establish cost-effective paths. Another key function is flow control, which guarantees good performance on the paths found by the routing protocol. First, we state the requirements of a routing implementation. These are:

- rapid delivery of packets, that is, the ability to detect low-delay paths;
- adaptivity to link/node failures, that is, finding alternate paths when the current path becomes unavailable; and
- adaptivity to load fluctuations, that is, the capability of detecting an alternate path when the current path becomes congested.

In order to achieve the above goals, a routing policy must perform several functions (such as status acquisition, route computation, and information exchange), each of which can be implemented in several different ways.
The number of possible combinations is very large; thus, a preliminary classification is helpful before engaging in a discussion of the basic functions and the survey of existing and proposed policies. Numerous classifications have been proposed [2,3,4,5]. In this study, we choose to classify adaptive policies on the basis of the location(s) where the routing computation is performed and the type of network status information required. Four classes are identified:

- Isolated policies: the routing computation is performed by each node independently, based on local information. No exchange of status or routing information among the nodes is provided.
- Distributed policies: the routing computation is performed in parallel by all the nodes, in a cooperative manner, based on partial status information exchanged between the nodes.
- Centralized policies: the Network Control Center (NCC) assembles the global state of the network, computes minimum-delay routes, and distributes routing tables (or routing commands) to all the nodes.
- Mixed policies: these policies derive and combine features from some or all of the previous classes. A typical example is the integration of centralized and isolated routing computation.

In spite of the substantial differences that clearly exist between the various types of routing policies, it is possible to identify some basic functions that may be regarded as the building blocks of the most general policy. These functions are listed below.

Monitoring Network Status

The network status variables of interest in the routing computation are the topological connectivity, the traffic pattern, and the delay. We distinguish among local, global,

and partial status. In order to acquire local connectivity, each node monitors the status of adjacent lines and/or nodes by measuring line traffic or (in the absence of traffic) by periodically interrogating the neighbors. Local traffic measures are average (or instantaneous) queue lengths, average flow on outgoing links, and incoming traffic from external sources. Global status consists of the full network connectivity and traffic pattern information, and may be acquired by collecting and carefully correlating the local status information from all the nodes in the network. Partial status (relative to a given node) is at an intermediate level between local and global status. It is the status of the network as seen by the node, and it includes all the elements necessary for the routing decision at that node. An example of partial status is the list of reachable nodes, and the estimated delay (or hop distance) to each of these nodes. We will see later that there is a close correlation between network status information techniques and the routing classes as defined above. Briefly, the isolated routing policy is based on local status information; the distributed policy is based on partial status information; and the centralized policy requires local status acquisition at all nodes and global status information at the NCC.

Network Status Reporting

Network status measurements may be processed locally or reported to a collection center for further processing. In the centralized routing case, local measurements are periodically reported to the NCC and are integrated there to produce the global status. In the distributed case, the local measurements are combined with the information received from neighbors to update the partial status of the network (as viewed by that node); then the partial status is communicated to the neighbors. In the isolated case, measurements are processed and used locally; no exchange of status information occurs between nodes.
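As a concrete illustration, the centralized reporting path described above can be sketched in a few lines of Python. All names and record layouts here are invented for illustration; none of the networks discussed uses these exact structures.

```python
# Sketch of centralized status reporting: each node measures its local
# status, and the NCC merges the reports into a global view.

def local_status(node, links):
    """What one node can measure by itself: its own links and queues."""
    return {"node": node,
            "neighbors": [l["to"] for l in links],
            "delay": {(node, l["to"]): l["queue"] + l["tx_time"] for l in links}}

def merge_reports(reports):
    """NCC side: integrate all local reports into the global status."""
    topology, delay = {}, {}
    for r in reports:
        topology[r["node"]] = r["neighbors"]
        delay.update(r["delay"])
    return {"topology": topology, "delay": delay}

reports = [
    local_status("A", [{"to": "B", "queue": 2, "tx_time": 1}]),
    local_status("B", [{"to": "A", "queue": 0, "tx_time": 1}]),
]
g = merge_reports(reports)
print(g["delay"][("A", "B")])   # 3
```

In the distributed case the merge step would instead run at every node over its neighbors' reports; in the isolated case only `local_status` would exist.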
Routing Computation and Implementation

As a general rule, route selection follows the criterion of minimum delay to destination, based on the available network status information. In an isolated routing policy, each node stores a precomputed list of preferred output links (ranked in priority order) for a given destination. The list is computed off line, and down-line loaded into each node when the node is configured. In some schemes, like SNA, entire paths are precomputed. In a distributed policy, it is the responsibility of each node to compute (periodically), from its status information, the minimum-delay route to each destination. This route is then used for future traffic to that particular destination. In a centralized policy, the NCC maintains global network status and periodically constructs the best routes between all node pairs; no routing computation is required at the nodes. There are several possibilities for storing the routing information in the nodes. The most general solution consists of storing at node i (for i = 1, ..., N) an N x Ai matrix (where N = total number of nodes and Ai = number of neighbors of node i) whose entries are the fractions of the total traffic to a given destination that must be distributed among the various neighbors (multipath routing). This solution reduces to single-path routing if only one output link per destination is used. The routing tables are either computed locally (distributed routing) or are provided by the NCC (centralized routing).

Packet Forwarding

Packet forwarding is the process that takes place at each intermediate node on the path, and involves the inspection of the header of each incoming packet, the consultation of the proper tables, and the placement of the packet on the proper output queue. We distinguish between virtual-circuit and datagram forwarding mechanisms. In the virtual-circuit mode, routes are selected on a connection basis.
A call request packet sets up the path and loads routing maps for this connection at each intermediate node. The map provides a mapping between the connection ID number and the route to be used. Each data packet carries in its header the ID of the connection to which it belongs. At the end of the user session, a call clear packet erases the connection. In the datagram mode, each packet is routed independently of the other packets in the same session. Thus, the responsibility of tracing the path (based on destination address and routing tables) rests on every packet in a datagram network, while it is delegated to the call request packet in a virtual-circuit network. The subsequent packets in the connection follow the leader. In most networks (either datagram or VC), the route computation is executed in the background, independently of the packet-forwarding operation, which is carried out for each data (or call request) transmission. In other words, the route computation generates routing tables, which are then used for packet forwarding. In some networks, however, some (or all) of the routing computation is done at packet-forwarding time. For example, in an isolated routing policy with several (precomputed) priority routes, the route is selected on a packet-by-packet basis. More specifically, the highest-priority route is chosen until its queue exceeds a given threshold. Then, the second-best route is chosen, and so on. Thus, queues are checked and a simple routing computation must be made at each packet-forwarding instant. As another example, TYMNET, a VC network implementing a centralized routing scheme, executes a shortest-route computation for each connection request. In general, recomputing the route for each packet-forwarding action provides more efficient paths (that is, lower delay). It does, however, introduce more processor overhead.
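The two forwarding mechanisms can be contrasted with a minimal sketch. The table contents, header fields, and link names are hypothetical, chosen only to make the difference visible.

```python
# Virtual-circuit mode: the call-request packet installed a per-connection
# map at this node; data packets carry only the connection ID.
vc_map = {7: "link-2", 12: "link-1"}            # connection ID -> outgoing link

def forward_vc(packet):
    return vc_map[packet["vc_id"]]

# Datagram mode: every packet carries the destination address and is
# matched against the routing table at each hop.
routing_table = {"E": "link-2", "D": "link-1"}  # destination -> outgoing link

def forward_datagram(packet):
    return routing_table[packet["dest"]]

print(forward_vc({"vc_id": 7}))          # route was pinned to link-2 at setup
print(forward_datagram({"dest": "D"}))   # looked up independently per packet
```

Note that only the setup (call request) phase of the VC network consults the routing tables; afterwards the per-packet work is a single ID lookup, which is why subsequent packets simply "follow the leader."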
In the following sections, various representative examples of routing policies are presented.

Isolated Routing Policies

In an isolated implementation, a node can derive useful routing information only from the length of its local queues. A simple policy aimed at minimizing delay consists of choosing the output link with the shortest queue, regardless of the final destination of the packet. This is the essence of Fultz's shortest queue and zero bias algorithm and Baran's hot potato algorithm [2]. An even more simple-minded isolated policy is the flooding policy [2]. Here, the node disregards even the locally available delay information and routes one copy of each transit packet to all neighbors (except for the neighbor from which the packet was just received). A hop count is incremented in the packet header after each transmission. Packets are absorbed by the intended destination or are discarded after a maximum hop number is reached.
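The flooding policy just described can be sketched as follows. The topology and hop limit are made up, and real implementations work on per-node queues rather than the single work list used here, but the copy-to-all-neighbors rule and the hop-count cutoff are as in the text.

```python
def flood(topology, src, dest, max_hops):
    """Return True if a copy of the packet reaches dest.
    topology maps node -> list of neighbors."""
    delivered = False
    # Each entry: (current node, node the packet arrived from, hops so far)
    work = [(src, None, 0)]
    while work:
        node, came_from, hops = work.pop()
        if node == dest:
            delivered = True
            continue                      # destination absorbs its copy
        if hops >= max_hops:
            continue                      # copy discarded at the hop limit
        for neighbor in topology[node]:
            if neighbor != came_from:     # never echo straight back
                work.append((neighbor, node, hops + 1))
    return delivered

net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(flood(net, "A", "D", max_hops=3))   # True
```

The sketch also makes the drawbacks visible: many redundant copies are generated, and only the hop limit keeps them from circulating indefinitely.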

The foregoing schemes have some adaptivity to load fluctuations and failures. The flooding algorithm, in particular, is extremely robust to failures, in that it will always deliver a copy of the packet to a reachable destination. Major drawbacks, however, are poor line efficiency (especially in flooding) and the presence of loops. To improve line efficiency and reduce looping, one may combine the local adaptive policy with a predefined priority ordering of the outgoing links that may be used to reach a given destination. A common scheme consists of defining primary and secondary routes, where the primary is the shortest path and the secondary is the second shortest path, not including the first leg of the former. There are numerous examples of isolated policies in existing networks. In the Amdahl CSD network, up to seven priority routes for each destination are stored at each node. The selection of the best route is based on queue length and the number of VCs on each trunk. Telenet also implements an isolated, locally adaptive routing scheme with prioritized options [6]. In the IBM SNA architecture, several paths (for each source-destination pair) are precomputed and mapped into the network via routing maps at intermediate nodes. These paths are called explicit routes. Each node chooses one of the available explicit routes (at virtual route setup time) based on load balancing considerations [7]. Unlike other isolated schemes, the entire path, instead of just the next leg on the path, is chosen. Isolated policies have been the subject of several analytic and simulation studies [8]. The results show that the adaptivity to load fluctuations is very good. The adaptivity to failures, on the other hand, is rather poor, owing to the fact that the underlying priority path list is static. During failures, the major problem is looping. A simple case of looping is illustrated by the example shown in Fig. 2.
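The priority-ordered selection with queue thresholds described above might look as follows in outline. The threshold value and link names are illustrative, not taken from any of the cited networks.

```python
def pick_link(priority_links, queue_len, came_from, threshold=8):
    """Isolated next-hop choice: try the precomputed options best-first,
    skipping congested links and the link the packet arrived on
    (to avoid ping-pong loops).

    priority_links: outgoing links ranked best-first for this destination.
    queue_len: current queue length per outgoing link.
    came_from: link the packet arrived on, or None for locally generated traffic.
    """
    for link in priority_links:
        if link == came_from:
            continue                            # never send straight back
        if queue_len.get(link, 0) <= threshold:
            return link                         # first uncongested option wins
    return None                                 # no usable option: hold or drop

# Node C's options toward destination E: (C,E) first, then (C,D), then (C,A)
options = ["C-E", "C-D", "C-A"]
print(pick_link(options, {"C-E": 20, "C-D": 3}, came_from=None))   # C-D
```

Because the priority list itself is static, this logic reproduces the failure behavior discussed in the text: when a downstream link dies, the node keeps feeding its fixed alternates and loops can form.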
The link labels in the 5-node network represent the link priorities to destination node E. A packet, for example, residing in node C and directed to node E will be, whenever possible, transmitted on link (C, E). If the queue on that link exceeds a given threshold, link (C, D) is used. (This link would be the first leg on the alternate path (C, D, E).) If link (C, D) is also congested, the third option, link (C, A), is used. (This is the first leg on path (C, A, B, D, E).) Let us now assume that link (D, E) has failed, and that a heavy traffic requirement arises from C to E. Because of the heavy load, the queue on (C, E) will rapidly overflow, thus forcing link (C, D) to be used. When packets arrive at node D, they find that they cannot use link (D, E) (since it has failed). They cannot use link (D, C) either, since this would create a ping-pong loop. (In general, isolated policies prevent a packet from being sent back to the same node it just arrived from.) Thus, the only option is to send the packets to node B. From B, packets will move on to A and then to C. Thus, a loop has been generated! These loops could, in part, be avoided by modifying (from the NCC) the priorities of the outgoing links to reflect the changes in topology. In the above example, options 2 and 3 out of node C should be dropped upon learning about the failure of link (D, E). Many manufacturers are, in fact, considering the possibility of dynamically controlling the priority options from the NCC. Telenet, for example, has already incorporated this feature in its network. We will say more about the interaction of NCC controls and local adaptive policies in the section on Mixed Routing Policies.

Distributed Routing Policies

Distributed policies assume distributed routing computation and routing information exchange among neighbor nodes. The basic approach consists of computing shortest paths from each node to each destination, using the information periodically received from other nodes.
Several schemes can be used. The best-known scheme is the old ARPANET routing scheme (so called because it was replaced by a different version in the late 1970s). It is presented here since it has inspired several other developments. The old ARPANET algorithm can be defined as a shortest queue + bias algorithm [2]. First, we describe the data structures. Each node i (i = 1, ..., N) maintains a delay table DT(i), whose entry DT(i)(k,l) is the estimated delay from node i to destination k if l is chosen as the next node on the route to k (see Fig. 3). From DT(i), a vector MDV(i), the minimum-delay vector, is obtained as follows:

    MDV(i)(k) = min over l in Ai of DT(i)(k,l)

where Ai is the set of nodes adjacent to node i. MDV(i)(k) represents the minimum estimated delay from i to k. The adaptive policy attempts to send packets on minimum-delay routes; therefore, packets directed to k are sent to the neighbor l* such that

    DT(i)(k,l*) = min over l in Ai of DT(i)(k,l)

Each node periodically updates its own delay table using the delays measured on its output lines and the delay information received from neighbor nodes. In fact, every fraction of a second, each node transmits the vector MDV(i) asynchronously to its neighbors. Upon reception of the vectors MDV(l), l in Ai, node i updates DT(i) as follows:

    DT(i)(k,l) = d(i,l) + MDV(l)(k) + Dp

where d(i,l) is the measured delay (queuing + transmission) on channel (i,l) and Dp is a bias term properly adjusted to reduce oscillatory effects [2]. Initially:

    MDV(i)(k) = 0 for k = i, and MDV(i)(k) = infinity for k ≠ i
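One update round of the scheme defined by these equations can be sketched as follows. The delays are made up, and the bias Dp is shown as a simple constant; the real algorithm tuned it empirically.

```python
INF = float("inf")
BIAS = 1.0   # plays the role of the Dp term that damps oscillations

def update_delay_table(neighbors, dests, d, mdv_from):
    """One delay-table update at node i, per the equations above.
    d[l]: measured queuing + transmission delay on link (i, l).
    mdv_from[l][k]: minimum-delay vector last received from neighbor l.
    Returns the delay table, the new MDV, and the next hop per destination."""
    DT = {k: {} for k in dests}
    for k in dests:
        for l in neighbors:
            DT[k][l] = d[l] + mdv_from[l][k] + BIAS   # DT(i)(k,l) = d + MDV + Dp
    mdv = {k: min(DT[k].values()) for k in dests}      # MDV(i)(k)
    next_hop = {k: min(DT[k], key=DT[k].get) for k in dests}
    return DT, mdv, next_hop

# Node A with neighbors B and C, destinations D and E (made-up numbers):
d = {"B": 2.0, "C": 5.0}
mdv_from = {"B": {"D": 4.0, "E": 1.0}, "C": {"D": 0.5, "E": 2.0}}
DT, mdv, nh = update_delay_table(["B", "C"], ["D", "E"], d, mdv_from)
print(nh)   # {'D': 'C', 'E': 'B'}
```

Iterating this exchange at every node is a distance-vector (Bellman-Ford style) computation: each node converges to minimum-delay next hops without ever holding the full topology.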

It can be shown that this algorithm converges very rapidly to the shortest-path solution, where shortest paths are based on the link lengths d(i,l) + Dp. After the shortest path has been computed, each packet is then routed along that path. Recall that ARPANET is a datagram network; therefore, the route is selected on a packet-by-packet basis. Unfortunately, under sustained loads, this solution is not stable. In fact, queues on the minimum path will build up, causing delays to increase. Consequently, at the next update iteration, another path will be found to be the minimum path and all the traffic will be deviated to it. So, the routing solution may show an oscillating behavior, where several routes are used cyclically, even when the external traffic pattern is stationary (this is in contrast with the optimal routing solution, where multiple routes are used simultaneously).

We can now easily summarize the pros and cons of the old ARPANET algorithm. On the positive side, the algorithm computes minimum-delay paths, is robust to network failures, and, to some extent, distributes flows on multiple paths. On the negative side, it tends to introduce oscillations (because of the rapid fluctuation of the link costs d(i,l)). It also introduces a fairly high line overhead, due to the frequent exchange of MDV vectors.

Several steps can be taken to stabilize the algorithm. An effective remedy used by several networks, including DECNET and DATAPAC, consists of making the link costs d(i,l) insensitive to queue fluctuations. This automatically eliminates oscillations; it does, of course, also eliminate the load-leveling feature. A different remedy was used in the new ARPANET routing algorithm. In this algorithm, each node periodically (say, every 10 seconds) broadcasts to all the other nodes in the network its local connectivity and its own queuing delays (averaged over the previous update interval). These broadcast messages are delivered via flooding. Because of this broadcast scheme, each node can construct and maintain an up-to-date data base of network topology and network delays, and can therefore compute the minimum-delay path from itself to each destination. The first leg of the path is then entered in the routing table. The latter has the same structure and the same function as the routing table in the old ARPANET algorithm. The new ARPANET algorithm is still a distributed algorithm, since each node is fully responsible for its routing table computation. It requires more memory than the old algorithm, since it must store the global topology and delay data base. However, it introduces less line overhead, since routing messages are fewer and shorter. It also reduces the problem of oscillation, since averages, instead of instantaneous queue lengths, are used in the delay estimation.

Centralized Routing Policies

The heart of a centralized routing policy is the NCC, the center responsible for global network status collection, route computation, and route implementation. The NCC may be a nodal processor equipped with specialized software, but more commonly it is a host computer with adequate storage and processing capability, as required to store the network topology and traffic information and to compute the optimal routes. Different versions of centralized policy exist, depending on the type of network information stored in the NCC, the route computation algorithm, and the route implementation technique. In particular, two alternatives are possible for route implementation:

- Periodic dissemination of routing tables from the NCC to all the nodes.
- Implementation of individual paths on a call-by-call basis (VC networks only).

In the following, we describe the centralized policy implemented in TYMNET, a VC network supporting the communications requirements of TYMSHARE [9].
In TYMNET, for each connection request between a user and a remote host, a control packet carrying source, destination, and password information is first sent by the origin node to the NCC (the supervisor). After password verification, the supervisor computes the minimum-cost path for the VC. Each link in the network has a cost associated with it, which is a function of link bandwidth and load condition. The link is said to be in normal or overload condition, depending on whether the number of VCs carried on it is below or above a specified threshold. Once the path has been computed, the supervisor allocates buffers and sets up mappings (permuter tables) at each intermediate node on the path to create the VC. User blocks flowing on a given virtual circuit are identified by a

connection ID number carried in the header, and are routed according to the permuter table information. After receiving the permuter table entries, each node sends an acknowledgment to the supervisor. Once all the acknowledgments are in, the supervisor informs the end users that the connection has been established and data transfer can start. To compute minimum-cost routes, the supervisor must have knowledge of the network topology and link loads. While link load information is incrementally updated by the supervisor after each circuit allocation/deallocation, the network topology must be initially acquired with a special procedure called network takeover. In the takeover phase, the supervisor first sends a takeover command to its own node, and learns that node's capacity, the capacity of its links, every permuter table entry in that node, and the neighbors of that node. After this first step is completed, the supervisor sends takeover commands to the neighbors of the node just taken over, and the procedure continues until no more nodes are discovered. After takeover, if link failures occur, they are reported (from the adjacent nodes) to the supervisor, and the topological map is updated. From the map, nodal failures and isolations can be detected by the supervisor and taken into account in future connection establishments. A centralized policy enjoys some distinct advantages over distributed and isolated policies: it eliminates routing computation requirements at the nodes; it permits a more accurate optimization of the routes, eliminating loops and oscillations that may occur when the network state is not completely known; and it allows some form of flow control on the incoming traffic. The last property is especially attractive: in TYMNET, for example, the supervisor constantly monitors loading in the network and can therefore reject calls when a load threshold is exceeded.
This flow-control capability cannot be easily implemented in networks under distributed control. On the negative side, there are two problems that limit the applicability of centralized policies to operational networks. One is the increased traffic overhead in the proximity of the NCC, due to the periodic collection of status reports by the NCC from all the nodes and the distribution of routing commands from the NCC to the nodes. The other (more serious) problem is reliability. If the NCC fails, the entire network goes out of control. This problem is partially corrected by using a hierarchy of NCCs (four in TYMNET) which continuously monitor each other; when the senior NCC fails, the next in line takes over. This solution is costly, however, and not always satisfactory. In a military environment, for example, the NCCs may become very vulnerable targets. To compensate for the deficiencies (and combine the advantages) of the various routing schemes, one may wish to use more than one policy simultaneously. This approach is possible and is discussed in the next section.

Mixed Routing Policies

In principle, it is possible to combine any type and number of routing strategies in the same network, as long as precise rules are defined at each node for the selection of one of the policies, depending on the type of traffic, the load, and the connectivity conditions. In practice, line and processor overhead considerations restrict the number of policies that can be combined in a mixed strategy to two: namely, a centralized policy and an isolated policy. The centralized policy is used to find the overall best paths at steady state. These paths are dispatched by the NCC to all the nodes, generally in the form of routing tables. The isolated policy is used to provide rapid response to local congestion and failure problems. The remedy is temporary, and need not be accurate (for instance, loops may be tolerated).
Eventually, the centralized policy will learn about the change in traffic and topology conditions, and will revise the routing tables. These principles are similar to those proposed by Rudin in his delta-routing algorithm [3]. An example of a combined centralized and local routing scheme is offered by TRANSPAC [10]. TRANSPAC is the French public packet network, operational since the late 1970s. It is a VC network based on X.25. The centralized component of the routing algorithm is supported by the NCC. Periodically, each node submits to the NCC the average residual capacity measured on each of its outgoing links. (This measurement is based on the average link utilization and on the number of VCs routed on the link.) Clearly, average residual capacity is related to performance: the lower the residual capacity, the higher the delay. The NCC maintains up-to-date information on residual capacities throughout the network and periodically computes the paths of maximum residual capacity between all node pairs. It then ships, to each node (say node i), a residual capacity matrix C(j,k), where j is any neighbor of i, and k is any destination. C(j,k) is the residual capacity on the best path from j to destination k (excluding node i). The actual best path from i to k is then computed by node i using the isolated component of the routing algorithm. Namely, node i examines all of its neighbors and chooses as the next leg to destination k the neighbor j which maximizes the expression

    min[ C(i,j), C(j,k) ]

where C(i,j) is the residual capacity from i to j, and C(j,k) is the residual capacity from j to destination k. C(i,j) is measured locally, while C(j,k) is obtained from the NCC. The advantage of this scheme is that it makes very efficient use of the information available locally, and it relies on the NCC for information beyond the horizon of the local node.
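The next-leg rule above can be sketched directly. The capacities are illustrative, and `C_ncc` stands in for the matrix shipped by the NCC; node i picks the neighbor whose path offers the largest bottleneck residual capacity.

```python
def best_neighbor(neighbors, C_local, C_ncc, dest):
    """TRANSPAC-style next-leg choice at node i: maximize min(C(i,j), C(j,dest)).
    C_local[j]: residual capacity of link (i, j), measured locally.
    C_ncc[j][dest]: residual capacity of the best path j -> dest, from the NCC."""
    return max(neighbors,
               key=lambda j: min(C_local[j], C_ncc[j][dest]))

neighbors = ["B", "C"]
C_local = {"B": 10.0, "C": 40.0}                 # made-up local measurements
C_ncc = {"B": {"K": 30.0}, "C": {"K": 5.0}}      # made-up NCC matrix
print(best_neighbor(neighbors, C_local, C_ncc, "K"))   # B
```

Here neighbor B wins with a bottleneck of min(10, 30) = 10, beating C's min(40, 5) = 5: the wide local link to C is useless if the path beyond C is nearly saturated, which is exactly the information only the NCC can supply.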
Flow Control

The overall goal of flow control is to implement mechanisms for the efficient dynamic sharing of the pool of resources (channels, buffers, and switching processors) in a packet network. More specifically, the main functions of flow control may be summarized as follows: 1) prevention of throughput degradation and loss of efficiency due to overload; 2) deadlock avoidance; and 3) fair allocation of resources among competing users. Throughput degradation and deadlocks occur because the traffic that has already been accepted into the network (that is, traffic that has already been allocated network resources) exceeds the nominal network capacity. To prevent overallocation of resources, the flow-control procedure includes a set of constraints (for example, on the buffers that can be allocated, on outstanding packets, or on transmission rates) which can

effectively limit the access of traffic into the network or, more precisely, to selected sections of the network. These constraints may be fixed, or may be dynamically adjusted based on traffic conditions. The efficiency and congestion-prevention benefits of flow control do not come without cost. In fact, flow control (like any other form of control in a distributed network) may require some exchange of information between nodes to select the control strategy and, possibly, some exchange of commands and parameter information to implement that strategy. This exchange translates into channel, processor, and storage overhead. Furthermore, flow control may require the dedication of resources (for example, buffers or bandwidth) to individual users or classes of users, thus reducing the statistical benefits of complete resource sharing. Clearly, the trade-off between gain in efficiency (due to controls) and loss in efficiency (due to limited sharing and overhead) must be carefully considered in designing flow-control strategies. This trade-off is illustrated by the curves in Fig. 4, showing the effective throughput as a function of offered load. The ideal throughput curve corresponds to perfect control as it could be implemented by an ideal observer with complete and instantaneous network status information: ideal throughput follows the input and increases linearly until it reaches a horizontal asymptote corresponding to the maximum theoretical network throughput. The controlled throughput curve is a typical curve that can be obtained with an actual control procedure. Throughput values are lower than with the ideal curve because of imperfect control and control overhead. The uncontrolled curve follows the ideal curve for low offered load; for higher load, it collapses to a very low value of throughput and, possibly, to a deadlock. Clearly, controls buy safety at high offered loads at the expense of somewhat reduced efficiency.
The reduction in efficiency is measured in terms of higher delays (for light load) and lower throughput (at saturation). Furthermore, experience shows that flow-control procedures are quite difficult to design and, ironically, can themselves be the source of deadlocks and degradations. Apart from the requirement of throughput efficiency, network resources must be fairly distributed among users. Unfortunately, efficiency and fairness objectives do not always coincide. For example, referring back to our highway traffic situation, the effective throughput of the Long Island Expressway could be maximized by opening all the lanes to traffic from the Island to New York City during the morning rush hour, and in the opposite direction during the evening rush hour. This solution, however, would also maximize the discontent of the reverse commuters (and we all know how dangerous it is to anger a New Yorker!). In packet networks, unfairness conditions can also arise (as we will show in the following sections), but they tend to be more subtle and less obvious than in highway networks because of the complexity of the communications protocols. One of the functions of flow control, therefore, is to prevent unfairness by placing selective restrictions on the amount of resources that each user (or user group) may acquire, in spite of the negative effect that these restrictions may have on dynamic resource sharing and, therefore, on overall throughput efficiency. The above functions are generally implemented in a packet network through a set of mechanisms which operate independently at different levels. In this respect, flow control is much more difficult to characterize than the routing protocol, which is clearly assigned to the network layer in the ISO architecture. There are, of course, many different ways of classifying the various levels of flow control existing in a packet network. In this paper, we will use the following classification (see Fig.
5) [11]: 1) Hop Level-This level of flow control attempts to maintain a smooth flow of traffic between two neighboring nodes in a computer network, avoiding local buffer congestion and deadlocks. We may further distinguish this level into Link Hop Level and Virtual Circuit Hop Level, depending on whether the entire flow between two nodes is controlled, or the flow on each VC is selectively controlled. 2) Entry-to-Exit Level-This level of flow control is generally implemented as a protocol between the source and destination switch, and has the purpose of preventing buffer congestion at the exit switch. 3) Network Access Level-The objective of this level is to throttle external inputs based on measurements of internal (as opposed to destination) network congestion. 4) Transport Level-This is the level of flow control associated with the transport protocol, that is, the protocol which provides for the reliable delivery of packets on the virtual connection between two remote processes. Its main purpose is to prevent congestion of user buffers at the process level (that is, outside of the network). If we wish to establish a correspondence between the above levels and the levels of the ISO architecture, we may associate the Link Hop Level and Network Access Level with the ISO Link Level; the Entry-to-Exit Level and Virtual-Circuit Hop Level with the ISO Packet Level; and, finally, the Transport Level with the ISO Transport Level. At this point, we must caution the reader that the true system behavior is far more complex than our models and classifications attempt (or can afford) to portray. Therefore, actual networks may not always mechanize all of the above four levels of flow control with distinct procedures. It is quite possible, for example, for a single flow-control mechanism to combine two or more levels of flow control. On the other hand, it is possible that one or more levels of flow control may be missing in the network implementation.

In the following sections, we will describe some representative examples for each of the above levels.

Hop-Level Flow Control

The objective of hop-level flow control is to prevent store-and-forward buffer congestion and its consequences, namely, throughput degradation and deadlocks. Hop-level flow control operates in a local, myopic way, in that it monitors local queues and buffer occupancies at each node and rejects store-and-forward (S/F) traffic arriving at the node when some predefined thresholds (for example, maximum queue limits) are exceeded. The function of checking buffer thresholds and discarding (and later retransmitting) packets on a network link is often carried out by the data-link control protocol. This locality of the control does not preclude, however, possible end-to-end repercussions of hop-level flow control, due to the backpressure effect (that is, the propagation of buffer threshold conditions from the congested node upstream to the traffic source(s)). In fact, the backpressure property is efficiently exploited in several network implementations (as will soon be described). Essentially, the hop-level flow-control scheme plays the role of arbitrator between various classes of traffic competing for a common buffer pool in each node. A fundamental distinction between different flow-control schemes is based on the way the traffic entering a node is subdivided into classes. One family of hop flow-control schemes distinguishes incoming packets based on the output links on which they must be transmitted. Thus, the number of classes is equal to the number of output links. This is what we call Link Hop Level flow control. The flow-control scheme supervises the allocation of store-and-forward buffers to the corresponding output queues. Some limit (fixed or dynamically adjustable) is defined for each queue; packets beyond this limit are discarded. Hence, the name channel queue limit (CQL) schemes is generally given to such mechanisms.
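A channel-queue-limit check of this kind can be sketched as follows. This is an illustrative toy, not any particular implementation; the class name, link names, and limit are ours.

```python
# Minimal CQL admission check: a packet destined to an output link is
# discarded once that link's queue has reached its limit. (In a real
# network the discarded packet would be retransmitted by the link layer.)
from collections import deque

class CQLNode:
    def __init__(self, output_links, queue_limit):
        self.limit = queue_limit
        self.queues = {link: deque() for link in output_links}

    def accept(self, packet, out_link):
        """Return True if the packet is queued, False if discarded."""
        q = self.queues[out_link]
        if len(q) >= self.limit:
            return False          # over the channel queue limit: discard
        q.append(packet)
        return True

node = CQLNode(output_links=["to_B"], queue_limit=2)
print([node.accept(p, "to_B") for p in ("p1", "p2", "p3")])  # [True, True, False]
```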
Another family distinguishes packets based on the virtual circuit (that is, end-to-end session) they belong to. This type of scheme requires, of course, a virtual-circuit network architecture; it assumes that each node can distinguish incoming packets based on the VC they belong to, and that it can keep track of a number of classes equal to the number of VCs that currently traverse it. Note that the number of classes varies here with time (since virtual circuits are dynamically created and released), as distinct from the previously mentioned schemes, where the number of classes is merely a function of the topology. Upon creation, a VC is allocated a set of buffers (fixed or variable) at each node. When this set is used up, no further traffic is accepted from that VC. We will refer to this family of schemes as virtual-circuit hop-level (VC-HL) schemes. The most basic CQL scheme consists of setting a fixed limit on the maximum size of an output queue. When the maximum is exceeded, incoming packets directed to that queue are discarded (and are later retransmitted by the link-layer protocol). The CQL scheme protects the network from an insidious form of deadlock known as the store-and-forward deadlock. To illustrate this deadlock, consider a network (not equipped with CQL) consisting of two switches, A and B, connected by a trunk carrying heavy traffic in both directions (see Fig. 6). Under the heavy-traffic assumption, node A rapidly fills up with packets directed to B and, vice versa, B fills up with packets directed to A. If we assume that dropped packets are retransmitted, then each node must hold a copy of each packet (and therefore a buffer) until the packet is accepted by the other node. This may result in an endless wait, in which a node holds all of its buffers to store packets being transmitted to the other node, and keeps retransmitting those packets while waiting for buffers to be freed there. Consequently, no useful data are transferred on the trunk.
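The deadlock can be made concrete with a toy simulation; the buffer count and names are ours.

```python
# Toy illustration of the store-and-forward deadlock of Fig. 6: without a
# queue limit, both nodes fill every buffer with packets bound for the other,
# so neither node can ever accept a packet and no data move on the trunk.

BUFFERS = 4
node_a = ["to_B"] * BUFFERS    # every buffer in A holds a packet for B
node_b = ["to_A"] * BUFFERS    # and vice versa

def try_transfer(sender, receiver):
    """A packet moves only if the receiver has a free buffer."""
    if len(receiver) < BUFFERS:
        receiver.append(sender.pop())
        return True
    return False               # receiver full: sender keeps retransmitting

moved = sum(try_transfer(node_a, node_b) or try_transfer(node_b, node_a)
            for _ in range(100))
print(moved)  # 0 -- an endless wait; no useful data cross the trunk
```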

If CQL flow control is implemented, the deadlock condition depicted in Fig. 6 cannot occur, since the buffers in node A cannot be taken over completely by the channel (A, B) queue. Therefore, some buffers in A will always be available to receive packets from node B. Some form or another of CQL flow control is found in every network implementation. The ARPANET Interface Message Processor (IMP) has a shared buffer pool with a minimum allocation and a maximum limit for each queue, as shown in Fig. 7 [12]. Of the total buffer pool (typically, 40 buffers), 2 buffers for input and 1 buffer for output are permanently allocated to each internode channel. Similarly, 10 buffers are permanently dedicated to the reassembly of messages directed to the hosts. The remaining buffers are shared among the output queues and the reassembly function, with the following restrictions: reassembly buffers ≤ 20, each output queue ≤ 8, and total store-and-forward buffers ≤ 20. In a VC network, we can also implement, in addition to CQL, a VC-HL scheme. We recall that in VC networks, a physical network path is set up for each user session and is released when the session is terminated. Packets follow the preestablished path in sequence. Sequencing and error control are provided at each step along the path. The basic principle of operation of the VC-HL scheme consists of setting a limit M on the maximum number of packets from each VC stream that can be in transit at each intermediate node. The limit M may be fixed at VC set-up time, or may be dynamically adjusted based on load fluctuations. The buffer limit M is enforced at each hop by the VC-HL protocol, which regulates the issue of transmission "permits" and discards packets based on buffer occupancy. The advantage of VC-HL (over CQL) is to provide a more efficient and prompt recovery from congestion by selectively slowing down the VCs directly feeding into the congested area.
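The per-VC limit M can be sketched as a simple admission and departure counter. The names are illustrative and correspond to no particular network's code.

```python
# Sketch of a VC hop-level (VC-HL) check: each virtual circuit may hold at
# most M packets in transit at a node; traffic beyond M on that VC is refused,
# while other VCs are left undisturbed.

class VCHopControl:
    def __init__(self, M):
        self.M = M
        self.in_transit = {}            # VC id -> packets currently buffered

    def accept(self, vc):
        if self.in_transit.get(vc, 0) >= self.M:
            return False                # this VC has used up its allocation
        self.in_transit[vc] = self.in_transit.get(vc, 0) + 1
        return True

    def depart(self, vc):
        self.in_transit[vc] -= 1        # a buffer is freed as the packet moves on

ctl = VCHopControl(M=2)
print([ctl.accept("vc1") for _ in range(3)])   # [True, True, False]
print(ctl.accept("vc2"))                       # True: other VCs are unaffected
```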
By virtue of backpressure, the control then propagates to all the sources that are contributing to the congestion, and reduces (or stops) their inputs, leaving the other traffic sources undisturbed. Without VC-HL flow control, the congestion would spread gradually to a large portion of the network, blocking traffic sources that were not directly responsible for the original congestion, and causing unnecessary throughput degradation and unfairness. Various buffer-sharing policies can be proposed for a VC-HL scheme. At one extreme, M buffers can be dedicated to each VC at set-up time; at the other extreme, buffers may be allocated, on demand, from a common pool (complete sharing). It is easily seen that buffer dedication can lead to extraordinary storage overhead, since there is, generally, no practical upper bound on the number of VCs that can simultaneously exist in a network; furthermore, the traffic on each VC is generally bursty, leading to low utilization of the reserved buffers. For these reasons, most implementations employ dynamic buffer sharing. The shared vs. dedicated buffer policy also has an impact on the deadlock-prevention properties of the VC-HL scheme. With buffer dedication, the VC-HL scheme becomes deadlock free. If, on the other hand, no buffer reservations are made and buffers are allocated strictly on demand, deadlocks may occur unless additional protection is implemented. An example of VC-HL flow control (with fixed buffer allocation) is offered by TYMNET. In TYMNET, a throughput limit is computed for each VC at set-up time according to terminal speed, and is enforced all along the network path. Throughput control is obtained by assigning a maximum buffer limit (per VC) at each intermediate node and by controlling the issue of transmission permits from node to node based on the current buffer allocation. Periodically (every half second), each node sends a backpressure vector to its neighbors, containing one bit for each VC that traverses it.
If the number of currently buffered characters for a given VC exceeds the maximum allocation (for example, for low-speed terminals of 10 to 30 cps, the allocation is 32 characters), the backpressure bit is set to zero; otherwise, the bit is set to one. On the transmitting side, each VC is associated with a counter which is initialized to the maximum buffer limit and is decremented by one for each character transmitted. Transmission stops on a particular VC when the corresponding counter is reduced to zero. Upon reception of a backpressure bit = 1, the counter is reset to its initial value and transmission can resume. Another example of a VC-HL scheme (with dynamic sharing) is offered by TRANSPAC [10]. One of the distinguishing features of TRANSPAC is the use of the throughput class concept in X.25 for internal flow and congestion control. Each VC call request carries a throughput class declaration which corresponds to the maximum (instantaneous) data rate that the user will ever attempt to present to that VC. Each node keeps track of the aggregate declared throughput (which represents the worst-case situation), and at the same time monitors actual throughput (typically, much lower than the declared throughput) and average buffer utilization. Based on the ratio of actual to declared throughput, the node may decide to oversell capacity; that is, it will attempt to carry a declared throughput volume higher than trunk capacity. Clearly, overselling implies that input rates may temporarily exceed trunk capacities, so that the network must be prepared to exercise flow control. Packet buffers are dynamically allocated to VCs based on demand (complete sharing), but thresholds are set on individual VC allocations as well as on overall buffer pool utilization. Flow control is exercised on the preceding node using the X.25 credit window. When no buffers are available, the window is reduced to zero.

Entry-to-Exit [ETE] Flow Control

The main objective of entry-to-exit flow control is to prevent buffer congestion at the exit node. The cause of this bottleneck could be either the overload of the local lines connecting the exit node to the hosts, or the slow acceptance rate of the hosts. The problem of congestion prevention at the exit node becomes complex when this node must also reassemble packets into messages, and/or resequence messages before delivery to the host. In fact, reassembly and resequence deadlocks may occur, thus requiring special prevention measures. In order to understand how reassembly deadlocks can be generated, let us consider the network path shown in Fig.
8, where three store-and-forward nodes (node 1, node 2, and node 3, respectively) relay traffic directed to host 1. In the situation depicted in Fig. 8, three multipacket messages (A, B, and C) are in transit towards host 1. Without loss of generality, we assume that the message size is ≤ 4 packets and that 4 buffers are dedicated to messages being assembled at a node; furthermore, a channel queue limit QL = 4 is set on each trunk queue, for hop-level flow control. We note from Fig. 8 that message A (which has seized all four reassembly buffers at node 3) cannot be delivered to the host, since packet A2 is missing. Packet A2, on the other hand, cannot be forwarded to node 2 since the queue at node 2 is full. The node 2 queue, in turn, cannot advance until reassembly space becomes available in node 3 for the B or C messages. Deadlock! In ARPANET, ETE flow control is exercised on a host-pair basis [12]. Specifically, all messages traveling from the same source host to the same destination host are carried on the same logical pipe. Each pipe is individually flow controlled by a window mechanism. An independent message-number sequence is maintained for each pipe. Numbers are sequentially assigned to messages flowing on the pipe, and are checked at the destination for sequencing and duplicate-detection purposes. Both the source and the destination keep a small window w (presently, w = 8) of currently valid message numbers. Messages arriving at the destination with out-of-range numbers are discarded. Messages arriving out of order are discarded, since storing them (while waiting for the missing message) may lead to potential resequence deadlocks. Correctly received messages are acknowledged with short control messages, called RFNMs (ready for next message). Upon receipt of an RFNM, the sending end of the pipe advances its transmission window accordingly.
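The host-pair window and RFNM mechanism can be sketched as follows. This is a simplified model; the class name and structure are ours.

```python
# Sketch of an ARPANET-style host-pair pipe: a window of w currently valid
# message numbers, advanced by one for each RFNM received.

class Pipe:
    def __init__(self, w=8):
        self.w = w
        self.base = 0            # lowest unacknowledged message number
        self.next_num = 0        # next message number to assign

    def can_send(self):
        return self.next_num < self.base + self.w

    def send(self):
        assert self.can_send()
        self.next_num += 1

    def rfnm(self):
        self.base += 1           # destination acknowledged one message

pipe = Pipe(w=8)
while pipe.can_send():
    pipe.send()                  # fill the window: 8 messages outstanding
print(pipe.can_send())           # False -- must wait for an RFNM
pipe.rfnm()
print(pipe.can_send())           # True -- the window has advanced by one
```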
The window and message-numbering mechanisms described so far support ETE flow control, sequencing, and error-control functions in the ARPANET. A separate mechanism, known as reassembly buffer allocation [12], is used to prevent reassembly deadlocks. Each multipacket message must secure a reassembly buffer allocation at the destination node before transmission. This is accomplished by sending a reservation message called a REQALL (request for allocation) to the destination and waiting for an ALL (allocation) message from the destination before attempting transmission. To reduce the delay (and, therefore, increase the throughput) of a steady multipacket message flow between the same source-destination pair, ALL messages are automatically piggybacked on RFNMs, thus eliminating the reservation delay for all messages after the first one. If a pending allocation at the source node is not claimed within a given time-out (250 ms), it is returned to the destination with a giveback message. Single-packet messages are transmitted to their destinations without buffer reservation. However, if upon arrival at the destination all the reassembly buffers are full, the single-packet message is discarded and a copy is retransmitted from the source IMP after an explicit buffer reservation has been obtained.

Network Access Flow Control

The objective of network access (NA) flow control is to throttle external inputs based on measurements of internal network congestion. Congestion measures may be local (such as buffer occupancy in the entry node), global (the total number of buffers available in the entire network), or selective (congestion of the path(s) leading to a given destination). The congestion condition is determined at (or

reported to) the network access points and is used to regulate the access of external traffic into the network. NA flow control differs from HL and ETE flow control in that it throttles external traffic to prevent overall internal buffer congestion, while HL flow control limits access to a specific store-and-forward node to prevent local congestion and store-and-forward deadlocks, and ETE flow control limits the flow between a specific source-destination pair to prevent congestion and reassembly-buffer deadlocks at the destination. As we mentioned earlier, however, both HL and ETE schemes indirectly provide some form of NA flow control by reporting an internal network congestion condition back to the access point, either via the backpressure mechanism (HL scheme) or via the credit slowdown caused by large internal delays (ETE scheme). Since the primary cause of network congestion is the excessive number of packets stored in the network, an intuitively sound congestion-prevention principle consists of setting a limit on the total number of packets that can circulate in the network at any one time. An implementation of this principle is offered by the isarithmic scheme proposed for the National Physical Laboratory network [13]. The isarithmic scheme is based on the concept of a permit, that is, a ticket that permits a packet to travel from the entry point to the desired destination. Under this concept, the network is initially provided with a number of permits, several held in store at each node. As traffic is offered by a host to the network, each packet must secure a permit before admission to the high-level node is allowed. Each accepted packet causes a reduction of one in the store of permits available at the accepting node. The accepted data packet is able to traverse the network, under the control of node and link protocols, until its destination node is reached.
When the packet is handed over to the destination subscriber, the permit which has accompanied it during its journey becomes free, and an attempt is made to add it to the permit store of the node in which it now finds itself. In order to achieve a viable system in which permits do not accumulate in certain parts of the network at the expense of other parts, it is necessary to place a limit on the number of permits that can be held in store by each node. If, then, because of this limit, a newly freed permit cannot be accommodated at a node (an overflow permit), it must be sent elsewhere. The normal method of carrying the permit in these circumstances is to piggyback it on other traffic, be this data or control. Only in the absence of other traffic does a special permit-carrying packet need to be generated. Critical parameters in the isarithmic scheme design are the total number of permits P in the network and the maximum number of permits L that can be accumulated at each node (the permit queue). Experimental results show that optimal performance is achieved for P = 3N, where N is the total number of nodes, and L = 3. An excessive number of permits in the network would lead to congestion. An excessive value of L would lead to unfairness, accumulation of permits at a few nodes, and throughput starvation at the others. Another type of NA scheme is the input buffer limit (IBL) scheme. This scheme differentiates between input traffic (that is, traffic from external sources) and transit traffic, and throttles the input traffic based on buffer occupancy at the entry node. IBL is a local network-access method, since it monitors local congestion at the entry node, rather than global congestion as does the isarithmic scheme. Entry-node congestion, on the other hand, is often a good indicator of global congestion, because the well-known backpressure effect will have propagated internal congestion conditions back to the entry nodes.
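Returning to the isarithmic scheme, its permit accounting can be sketched as a toy model. The node count and names are ours; the choices P = 3N and L = 3 follow the experimental results quoted above.

```python
# Sketch of isarithmic permit accounting: a packet enters the network only
# if its entry node can spend a permit; the permit is freed at the
# destination, which keeps at most L permits and forwards any overflow.

N, L = 5, 3
permits = {node: 3 for node in range(N)}     # P = 3N permits, spread evenly

def admit(entry):
    """Admit a packet only if the entry node holds a permit."""
    if permits[entry] == 0:
        return False
    permits[entry] -= 1
    return True

def deliver(dest):
    """Free the permit at the destination, respecting the store limit L."""
    if permits[dest] < L:
        permits[dest] += 1
    # else: overflow permit -- piggyback it on other traffic toward another node

print(admit(0), admit(0), admit(0), admit(0))  # True True True False
deliver(0)
print(admit(0))                                # True again: a permit returned
```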
The function of IBL controls is to block input traffic when certain buffer-utilization thresholds are reached in the entry node. This flow-control approach clearly favors transit traffic over input traffic. Intuitively, this is a desirable property, since a number of network resources have already been invested in transit traffic. This intuitive argument is supported by a number of analytical and simulation experiments proving the effectiveness of the IBL scheme. Many versions of IBL control can be proposed. Here, we describe the version proposed by Lam [14] and analytically evaluated in an elegant model. Only two classes of traffic, input and transit, are considered in this proposal. Letting NT be the total number of buffers in the node and NI the input buffer limit (where NI ≤ NT), the following constraints are imposed at each node: number of input packets ≤ NI, and number of transit packets ≤ NT. The analytical results indicate that there is an optimal ratio NI/NT which maximizes throughput for heavy offered load, as shown in Fig. 9. A good heuristic choice for NI/NT is the ratio between input message throughput and total message throughput at a node. As shown in the figure, throughput performance does not change significantly even for relatively large variations of the ratio NI/NT around the optimal value, thus implying that the IBL scheme is robust to external perturbations such as traffic fluctuations and topology changes.

Transport Level Flow Control

A transport protocol is a set of rules that govern the transfer of control and data between user processes across

the network. The main functions of this protocol are the efficient and reliable transmission of messages within each user session (including packetization, reassembly, resequencing, recovery from loss, and elimination of duplicates) and the efficient sharing of common network resources by several user sessions (obtained by multiplexing many user connections on the same physical path and by maintaining priorities between different sessions to reflect their relative urgency). For efficient and reliable reassembly of messages at the destination host (or, most generally, the DTE), the transport protocol must ensure that messages arriving at the destination DTE are provided adequate buffering. The transport-protocol function which prevents destination buffer congestion and overflow is known as transport-level flow control. Generally, this level of flow control is based on a credit (or window) mechanism. Namely, the receiver grants transmission credits to the sender as soon as reassembly buffers become free. Upon receiving a credit, the sender is authorized to transmit a message of an agreed-upon length. When reassembly buffers become full, no credits are returned to the sender, thus temporarily stopping message transmissions [1]. The credit scheme described above is somewhat vulnerable to losses, since a lost credit may hang up a connection. In fact, a sender may wait indefinitely for a lost credit, while the receiver is waiting for a message. A more robust flow-control scheme is obtained by numbering credits relative to the messages flowing in the opposite direction. In this case, each credit carries a message sequence number, say N, and a window size, w. Upon receiving this credit, the sender is authorized to send all backlogged messages up to the (N + w)th message. With the numbered credit scheme, if a credit is lost, then the subsequent credit will restore proper information to the sender [15].
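The numbered-credit scheme can be sketched as follows (illustrative names; the point is that each credit carries absolute, not incremental, information, so a lost credit cannot hang the connection).

```python
# Sketch of numbered credits: a credit (N, w) authorizes transmission of
# every backlogged message up to message N + w - 1.

class Sender:
    def __init__(self):
        self.next_msg = 0       # sequence number of the next message to send
        self.limit = 0          # messages below this number are authorized

    def credit(self, n, w):
        self.limit = max(self.limit, n + w)   # absolute: losses are harmless

    def send_all(self):
        sent = list(range(self.next_msg, self.limit))
        self.next_msg = self.limit
        return sent

s = Sender()
s.credit(0, 4)
print(s.send_all())             # [0, 1, 2, 3]
# Suppose the credit (4, 4) is lost in transit -- the sender never sees it.
# The next credit still restores proper state, because it is absolute:
s.credit(8, 4)
print(s.send_all())             # [4, 5, 6, 7, 8, 9, 10, 11]
```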
Besides preventing destination buffer congestion, the credit scheme also indirectly provides global network congestion protection. In fact, store-and-forward buffer congestion at the intermediate nodes along the path may cause a large end-to-end credit delay, thus slowing down the return of credits to the sender and, consequently, reducing the rate of fresh message inputs into the network. Several versions of the transport protocol are in existence, each incorporating its own form of transport-level flow control. Here, we briefly describe some representative implementations. The earliest example of a transport-protocol implementation is the original version of the ARPANET NCP [4]. NCP flow control is provided by unnumbered credits called allocate control messages. Only one allocate could be outstanding at a time (that is, window size w = 1). The French research network Cyclades provided the environment for the development of the transport station (TS) protocol [16]. In the TS protocol, the flow-control mechanism is based on numbered credits, each credit authorizing the transmission of a variable-size message called a letter. Flow control is actually combined with error control, in that credits are carried by acknowledgment messages. The transmission control program (TCP) was the second-generation transport protocol developed by the ARPANET research community in order to overcome the deficiencies of the original NCP protocol [1]. As in the TS protocol, flow and error control are combined in TCP. As a difference, however, error and flow control are on a byte (rather than letter) basis. This allows a more efficient utilization of reassembly buffers at the destination.

Conclusions and Future Trends

Traffic control is a vital function in modern data communications networks. It is supported by routing and flow-control procedures. These procedures can be implemented in several different ways, of which we have given a few representative examples in this paper.
Some readers may ask, at this point: which is the best routing and flow-control solution? Unfortunately, there is no simple answer. The selection will depend on many factors, including network architecture (for example, virtual-circuit, datagram, or integrated packet and circuit), type of traffic (for example, data, or integrated voice and data), robustness requirements (such as experimental, commercial, or military), switching-node storage and processing resources, and network size. In some cases, new solutions must be sought. In the following, we briefly outline some specialized network environments for which the conventional solutions are not adequate and, therefore, new techniques must be researched. When networks grow large, conventional routing strategies become inefficient because the increased size of the routing tables (proportional to the number of nodes) causes higher line overhead (due to routing-table exchanges) and higher storage overhead. The obvious solution to this problem is the hierarchical routing implementation [17]. In a hierarchical implementation, the network is partitioned into regions, and routes within each region are efficiently computed using a regional strategy. The regions are interconnected by a national network governed by a national routing strategy. The route connecting nodes in different regions is then the concatenation of three locally optimal routes (one national and two regional). In [17], algorithms for network partitioning are presented, and the performance of hierarchical routing is compared with that of optimal routing. Related to the issue of hierarchical routing in large networks is the issue of internet routing. When local networks are interconnected via gateways, local routing is accomplished using the preexisting routing strategies, while inter-gateway routing may be provided by a gateway routing procedure which resembles the national routing procedure proposed for hierarchical networks. With this scheme,
the gateways must have knowledge of all host addresses. To simplify gateway design, source routing, with a route prestamped in each packet header, can be implemented [15]. The integration of voice and data requirements in packet-switched networks has been vigorously advocated in recent years on the grounds of improved efficiency and reduced cost [18]. Unfortunately, little attention has been given to the fact that integrated networks require a complete redesign of conventional flow-control schemes, since voice traffic cannot be buffered and delayed in case of congestion. Priorities are of help only if the voice traffic is a small fraction of the total traffic. For the general case, new flow-control techniques must be developed for voice. These techniques should be preventive in nature; that is, they should block calls before congestion occurs, rather than detecting congestion and then attempting to recover from it, as is the case for most

conventional flow-control schemes [19]. Furthermore, different routing criteria should be applied to data and voice. Forgie [19] proposes a virtual-circuit routing approach for voice connections, where voice is carried on a fixed path for the entire duration of the session and a voice call may be refused if link utilization along the path exceeds a given threshold. Data packets, on the other hand, are routed using a datagram approach.

Hybrid packet and circuit networks are now emerging as a solution to multimode (voice and data; batch and interactive) user requirements in Integrated Services Digital Networks [20]. These networks must be equipped with novel flow-control mechanisms. In fact, if the network were to apply conventional flow-control schemes to the packet-switched (P/S) component only, leaving the circuit-switched (C/S) component uncontrolled, then the C/S component would very likely capture the entire network bandwidth during peak hours. This may not cause congestion, since the C/S protocol is not as congestion prone as the P/S protocol, but it certainly creates unfairness. Some form of flow control on C/S traffic which is sensitive to the relative P/S load is therefore required.

As for the routing protocol in a hybrid network, the objective is to find feasible, minimum-blocking paths for circuit-switched requests, and minimum-delay paths for packet transmissions. These objectives may be achieved with separate routing algorithms, but this may lead to inefficiencies due to high line and processor overhead and lack of proper coordination. The design of a unified routing algorithm, on the other hand, is a challenging problem, since it is not obvious that the best route for packet transmission is also efficient (or even feasible) for circuit establishment. A unified algorithm based on distributed computation was proposed in [21].
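The idea behind such a unified scheme can be sketched in a few lines of Python. This is an illustrative toy under assumed data structures, not the distributed algorithm of [21]: the Path record and both function names are hypothetical. Each node keeps a set of candidate paths per destination with estimated delay and residual bandwidth; datagrams always take the minimum-delay path, while a circuit request is admitted only on a path whose residual bandwidth covers the requested demand, and is blocked at the entry node otherwise.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Path:
    hops: List[int]     # sequence of node identifiers (hypothetical encoding)
    delay: float        # estimated end-to-end delay
    residual_bw: float  # spare bandwidth on the path's bottleneck link

def packet_route(paths: List[Path]) -> Path:
    """Datagram traffic: always the minimum-delay path."""
    return min(paths, key=lambda p: p.delay)

def circuit_route(paths: List[Path], demand: float) -> Optional[Path]:
    """Circuit request: lowest delay among paths that can carry the
    demand; None means the call is blocked at the entry node,
    before congestion can occur."""
    feasible = [p for p in paths if p.residual_bw >= demand]
    if not feasible:
        return None
    return min(feasible, key=lambda p: p.delay)
```

In [21], by contrast, this path computation is not centralized in one table but distributed across the network nodes.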
This algorithm computes at each node the set of paths to each destination, and ranks them by increasing values of residual bandwidth and increasing delay. Packets are always routed on minimum-delay paths, while circuits are routed on paths with sufficient residual bandwidth to satisfy the bandwidth requirement.

In this paper, we have presented routing and flow control as separate, independent procedures. Indeed, these procedures have traditionally been developed independently in packet networks, under the assumption that flow control must keep excess traffic out of the network, and routing must struggle to efficiently transport to its destination whatever traffic was permitted into the network by the flow-control scheme. It seems, however, that routing and flow control integration can be beneficial in virtual-circuit networks, where a path must be selected before data transfer on a user connection begins [21]. In this case, the routing algorithm can be invoked first to determine whether a path of sufficient residual bandwidth is available. If no path is available, the virtual-circuit connection is blocked immediately at the entry node by the network-access flow-control level, thus preventing congestion, rather than allowing it to occur and then attempting to recover from it. We expect this routing and flow-control integration to be another area of active research in the future.

References

[1] V. Cerf and R. Kahn, "A protocol for packet network intercommunication," IEEE Trans. Commun., vol. COM-22, no. 5, May 1974.
[2] G. L. Fultz, Adaptive Routing Techniques for Message Switching Computer Communications Networks. Los Angeles, CA: UCLA Eng. Report 7252, July 1972.
[3] H. Rudin, "On routing and delta routing: a taxonomy and performance comparison of techniques for packet-switched networks," IEEE Trans. Commun., vol. COM-24, no. 1, Jan. 1976.
[4] M. Schwartz and T. E. Stern, "Routing techniques used in communications networks," IEEE Trans. Commun., vol. COM-28, no. 4, Apr. 1980.
[5] M.
Gerla, "Routing and flow control," in Protocols and Techniques for Data Communications Networks, F. Kuo, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1981.
[6] D. F. Weir and J. B. Homblad, "An X.75 based network architecture," ICCC '80 Conf. Proc., Atlanta, GA, Nov. 1980.
[7] V. Ahuja, "Routing and flow control in systems network architecture," IBM Syst. J., vol. 18, no. 2, 1979.
[8] T. Yum and M. Schwartz, "Comparison of adaptive routing algorithms for computer communications networks," Proc. Nat. Telecommun. Conf., Dec.
[9] J. Rinde, "The routing and control in a centrally directed network," Proc. AFIPS Natl. Comput. Conf., Dallas, TX, June 1977.
[10] J. M. Simon and A. Danet, "Contrôle des ressources et principes du routage dans le réseau TRANSPAC," Proc. Int. Symp. Comput. Networks, Versailles, France, Feb.
[11] M. Gerla and L. Kleinrock, "Flow control: a comparative survey," IEEE Trans. Commun., vol. COM-28, no. 4, Apr. 1980.
[12] J. M. McQuillan et al., "Improvements in the design and performance of the ARPA network," Proc. Fall Joint Comput. Conf., 1972.
[13] D. W. Davies, "The control of congestion in packet-switching networks," IEEE Trans. Commun., vol. COM-20, June 1972.
[14] S. Lam and M. Reiser, "Congestion control of store-and-forward networks by buffer input limits," Proc. Natl. Telecommun. Conf., Los Angeles, CA, Dec. 1977.
[15] C. A. Sunshine, "Transport protocols for computer networks," in Protocols and Techniques for Data Communications Networks, F. Kuo, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1981.
[16] H. Zimmermann, "The Cyclades end-to-end protocol," Proc. 4th Data Commun. Symp., Quebec, P.Q., Canada, pp. 7:21-26, Oct. 1975.
[17] F. Kamoun, Design Considerations for Large Computer Communications Networks, Ph.D. dissertation, Los Angeles, CA: UCLA Engineering Report 7642, Apr. 1976.
[18] I. Gitman and H. Frank, "Economic analysis of integrated voice and data networks," Proc. IEEE, Nov. 1978.
[19] J. Forgie and A. Nemeth, "An efficient packetized voice/data network using statistical flow control," Proc. Int. Conf. Commun., Chicago, IL, June 1977.
[20] R.
Pazos and M. Gerla, "Bandwidth allocation and routing in ISDNs," IEEE Communications Magazine, vol. 22, no. 2, Feb. 1984.
[21] M. Gerla, "Bandwidth routing in X.25 networks," Proc. PTC, Hawaii, Jan. 1984.

Mario Gerla received a graduate degree in Engineering from the Politecnico di Milano in 1966, and the M.S. and Ph.D. degrees in Engineering from UCLA in 1970 and 1973, respectively. From 1973 to 1976, he was with Network Analysis Corporation, New York, where he was involved in several computer network design projects for both government and industry. From 1976 to 1977, he was with Tran Telecommunications, Los Angeles, CA, where he participated in the development of an integrated packet and circuit network. Since 1977, he has been on the faculty of the Computer Science Department at UCLA. Dr. Gerla's research interests include the design and control of distributed computer communications systems and networks. He is a Member of the IEEE.


More information

Lecture 2.1 : The Distributed Bellman-Ford Algorithm. Lecture 2.2 : The Destination Sequenced Distance Vector (DSDV) protocol

Lecture 2.1 : The Distributed Bellman-Ford Algorithm. Lecture 2.2 : The Destination Sequenced Distance Vector (DSDV) protocol Lecture 2 : The DSDV Protocol Lecture 2.1 : The Distributed Bellman-Ford Algorithm Lecture 2.2 : The Destination Sequenced Distance Vector (DSDV) protocol The Routing Problem S S D D The routing problem

More information

MLPPP Deployment Using the PA-MC-T3-EC and PA-MC-2T3-EC

MLPPP Deployment Using the PA-MC-T3-EC and PA-MC-2T3-EC MLPPP Deployment Using the PA-MC-T3-EC and PA-MC-2T3-EC Overview Summary The new enhanced-capability port adapters are targeted to replace the following Cisco port adapters: 1-port T3 Serial Port Adapter

More information

CH.1. Lecture # 2. Computer Networks and the Internet. Eng. Wafaa Audah. Islamic University of Gaza. Faculty of Engineering

CH.1. Lecture # 2. Computer Networks and the Internet. Eng. Wafaa Audah. Islamic University of Gaza. Faculty of Engineering Islamic University of Gaza Faculty of Engineering Computer Engineering Department Networks Discussion ECOM 4021 Lecture # 2 CH1 Computer Networks and the Internet By Feb 2013 (Theoretical material: page

More information

Internet Protocol: IP packet headers. vendredi 18 octobre 13

Internet Protocol: IP packet headers. vendredi 18 octobre 13 Internet Protocol: IP packet headers 1 IPv4 header V L TOS Total Length Identification F Frag TTL Proto Checksum Options Source address Destination address Data (payload) Padding V: Version (IPv4 ; IPv6)

More information

Chapter 14: Distributed Operating Systems

Chapter 14: Distributed Operating Systems Chapter 14: Distributed Operating Systems Chapter 14: Distributed Operating Systems Motivation Types of Distributed Operating Systems Network Structure Network Topology Communication Structure Communication

More information

Computer Networks Vs. Distributed Systems

Computer Networks Vs. Distributed Systems Computer Networks Vs. Distributed Systems Computer Networks: A computer network is an interconnected collection of autonomous computers able to exchange information. A computer network usually require

More information

Local Area Networks transmission system private speedy and secure kilometres shared transmission medium hardware & software

Local Area Networks transmission system private speedy and secure kilometres shared transmission medium hardware & software Local Area What s a LAN? A transmission system, usually private owned, very speedy and secure, covering a geographical area in the range of kilometres, comprising a shared transmission medium and a set

More information

An Active Packet can be classified as

An Active Packet can be classified as Mobile Agents for Active Network Management By Rumeel Kazi and Patricia Morreale Stevens Institute of Technology Contact: rkazi,pat@ati.stevens-tech.edu Abstract-Traditionally, network management systems

More information

Routing with OSPF. Introduction

Routing with OSPF. Introduction Routing with OSPF Introduction The capabilities of an internet are largely determined by its routing protocol. An internet's scalability, its ability to quickly route around failures, and the consumption

More information

Level 2 Routing: LAN Bridges and Switches

Level 2 Routing: LAN Bridges and Switches Level 2 Routing: LAN Bridges and Switches Norman Matloff University of California at Davis c 2001, N. Matloff September 6, 2001 1 Overview In a large LAN with consistently heavy traffic, it may make sense

More information

AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK

AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK Abstract AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK Mrs. Amandeep Kaur, Assistant Professor, Department of Computer Application, Apeejay Institute of Management, Ramamandi, Jalandhar-144001, Punjab,

More information

The Network Layer Functions: Congestion Control

The Network Layer Functions: Congestion Control The Network Layer Functions: Congestion Control Network Congestion: Characterized by presence of a large number of packets (load) being routed in all or portions of the subnet that exceeds its link and

More information

Architecture of distributed network processors: specifics of application in information security systems

Architecture of distributed network processors: specifics of application in information security systems Architecture of distributed network processors: specifics of application in information security systems V.Zaborovsky, Politechnical University, Sait-Petersburg, Russia vlad@neva.ru 1. Introduction Modern

More information

Real-Time (Paradigms) (51)

Real-Time (Paradigms) (51) Real-Time (Paradigms) (51) 5. Real-Time Communication Data flow (communication) in embedded systems : Sensor --> Controller Controller --> Actor Controller --> Display Controller Controller Major

More information

APPLICATION NOTE 209 QUALITY OF SERVICE: KEY CONCEPTS AND TESTING NEEDS. Quality of Service Drivers. Why Test Quality of Service?

APPLICATION NOTE 209 QUALITY OF SERVICE: KEY CONCEPTS AND TESTING NEEDS. Quality of Service Drivers. Why Test Quality of Service? QUALITY OF SERVICE: KEY CONCEPTS AND TESTING NEEDS By Thierno Diallo, Product Specialist With the increasing demand for advanced voice and video services, the traditional best-effort delivery model is

More information

Comparison of WCA with AODV and WCA with ACO using clustering algorithm

Comparison of WCA with AODV and WCA with ACO using clustering algorithm Comparison of WCA with AODV and WCA with ACO using clustering algorithm Deepthi Hudedagaddi, Pallavi Ravishankar, Rakesh T M, Shashikanth Dengi ABSTRACT The rapidly changing topology of Mobile Ad hoc networks

More information

Introduction to Metropolitan Area Networks and Wide Area Networks

Introduction to Metropolitan Area Networks and Wide Area Networks Introduction to Metropolitan Area Networks and Wide Area Networks Chapter 9 Learning Objectives After reading this chapter, you should be able to: Distinguish local area networks, metropolitan area networks,

More information

Computer Networks. Main Functions

Computer Networks. Main Functions Computer Networks The Network Layer 1 Routing. Forwarding. Main Functions 2 Design Issues Services provided to transport layer. How to design network-layer protocols. 3 Store-and-Forward Packet Switching

More information

Introduction. Abusayeed Saifullah. CS 5600 Computer Networks. These slides are adapted from Kurose and Ross

Introduction. Abusayeed Saifullah. CS 5600 Computer Networks. These slides are adapted from Kurose and Ross Introduction Abusayeed Saifullah CS 5600 Computer Networks These slides are adapted from Kurose and Ross Roadmap 1.1 what is the Inter? 1.2 work edge end systems, works, links 1.3 work core packet switching,

More information

A Survey: High Speed TCP Variants in Wireless Networks

A Survey: High Speed TCP Variants in Wireless Networks ISSN: 2321-7782 (Online) Volume 1, Issue 7, December 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com A Survey:

More information

PART OF THE PICTURE: The TCP/IP Communications Architecture

PART OF THE PICTURE: The TCP/IP Communications Architecture PART OF THE PICTURE: The / Communications Architecture 1 PART OF THE PICTURE: The / Communications Architecture BY WILLIAM STALLINGS The key to the success of distributed applications is that all the terminals

More information

Investigation and Comparison of MPLS QoS Solution and Differentiated Services QoS Solutions

Investigation and Comparison of MPLS QoS Solution and Differentiated Services QoS Solutions Investigation and Comparison of MPLS QoS Solution and Differentiated Services QoS Solutions Steve Gennaoui, Jianhua Yin, Samuel Swinton, and * Vasil Hnatyshin Department of Computer Science Rowan University

More information

A Binary Feedback Scheme for Congestion Avoidance in Computer Networks

A Binary Feedback Scheme for Congestion Avoidance in Computer Networks A Binary Feedback Scheme for Congestion Avoidance in Computer Networks K. K. RAMAKRISHNAN and RAJ JAIN Digital Equipment Corporation We propose a scheme for congestion auoidunce in networks using a connectionless

More information

Optimization of AODV routing protocol in mobile ad-hoc network by introducing features of the protocol LBAR

Optimization of AODV routing protocol in mobile ad-hoc network by introducing features of the protocol LBAR Optimization of AODV routing protocol in mobile ad-hoc network by introducing features of the protocol LBAR GUIDOUM AMINA University of SIDI BEL ABBES Department of Electronics Communication Networks,

More information

Active Queue Management (AQM) based Internet Congestion Control

Active Queue Management (AQM) based Internet Congestion Control Active Queue Management (AQM) based Internet Congestion Control October 1 2002 Seungwan Ryu (sryu@eng.buffalo.edu) PhD Student of IE Department University at Buffalo Contents Internet Congestion Control

More information